Configuration Reference 0.9 documentation

Contents

OpenStack Configuration Reference

Abstract

This document is for system administrators who want to look up configuration options. It lists the configuration options available for each OpenStack project, with the options and their descriptions generated automatically from the project code, and it includes sample configuration files.

OpenStack configuration overview

Conventions

The OpenStack documentation uses several typesetting conventions.

Notices

Notices take these forms:

Note

A comment with additional information that explains a part of the text.

Important

Something you must be aware of before proceeding.

Tip

An extra but helpful piece of practical advice.

Caution

Helpful information that prevents the user from making mistakes.

Warning

Critical information about the risk of data loss or security issues.

Command prompts
$ command

Any user, including the root user, can run commands that are prefixed with the $ prompt.

# command

The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.

Configuration file format

OpenStack uses the INI file format for configuration files. An INI file is a simple text file that specifies options as key=value pairs, grouped into sections. The DEFAULT section contains most of the configuration options. Lines starting with a hash sign (#) are comment lines. For example:

[DEFAULT]
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
debug = true

[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
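
The same file can be read with Python's standard configparser module. OpenStack itself uses the oslo.config library rather than configparser, so this is only an illustration of the INI format:

```python
import configparser

# The sample configuration from above. Lines starting with # are
# comments; options in [DEFAULT] are visible from every section.
sample = """\
[DEFAULT]
# Print debugging output (boolean value)
debug = true

[database]
# The SQLAlchemy connection string (string value)
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

print(parser.getboolean("DEFAULT", "debug"))
print(parser.get("database", "connection"))
```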

Option values can have different types. The comments in the sample configuration files always state the type, and the tables list the Opt type as the first item of the description, for example (BoolOpt) Toggle.... OpenStack uses the following types:

boolean value (BoolOpt)

Enables or disables an option. The allowed values are true and false.

# Enable the experimental use of database reconnect on
# connection lost (boolean value)
use_db_reconnect = false
floating point value (FloatOpt)

A floating point number like 0.25 or 1000.

# Sleep time in seconds for polling an ongoing async task
# (floating point value)
task_poll_interval = 0.5
integer value (IntOpt)

An integer number is a number without fractional components, like 0 or 42.

# The port which the OpenStack Compute service listens on.
# (integer value)
compute_port = 8774
IP address (IPOpt)

An IPv4 or IPv6 address.

# Address to bind the server. Useful when selecting a particular network
# interface. (ip address value)
bind_host = 0.0.0.0
key-value pairs (DictOpt)

A set of key-value pairs, also known as a dictionary. The pairs are separated by commas, and a colon separates each key from its value. Example: key1:value1,key2:value2.

# Parameter for l2_l3 workflow setup. (dict value)
l2_l3_setup_params = data_ip_address:192.168.200.99, \
   data_ip_mask:255.255.255.0,data_port:1,gateway:192.168.200.1,ha_port:2
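
As a sketch of how such a value decomposes, the hypothetical helper below (not part of oslo.config) splits a DictOpt string into a Python dict:

```python
def parse_dict_value(raw):
    """Parse an OpenStack DictOpt string such as
    'key1:value1,key2:value2' into a Python dict."""
    result = {}
    for pair in raw.split(","):
        # A colon separates each key from its value; stray
        # whitespace around pairs is ignored.
        key, _, value = pair.strip().partition(":")
        result[key] = value
    return result

print(parse_dict_value("data_ip_address:192.168.200.99, data_port:1"))
```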
list value (ListOpt)

Represents a list of values of another type, separated by commas. As an example, the following sets allowed_rpc_exception_modules to a list containing the two elements oslo.messaging.exceptions and nova.exception:

# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call. (list value)
allowed_rpc_exception_modules = oslo.messaging.exceptions,nova.exception
multi valued (MultiStrOpt)

A multi-valued option is a string value that can be given more than once; all values are used.

# Driver or drivers to handle sending notifications. (multi valued)
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
port value (PortOpt)

A TCP/IP port number. Ports can range from 1 to 65535.

# Port to which the UDP socket is bound. (port value)
# Minimum value: 1
# Maximum value: 65535
udp_port = 4952
string value (StrOpt)

Strings can be optionally enclosed with single or double quotes.

# The format for an instance that is passed with the log message.
# (string value)
instance_format = "[instance: %(uuid)s] "
Sections

Configuration options are grouped by section. Most configuration files support at least the following sections:

[DEFAULT]
Contains most configuration options. If the documentation for a configuration option does not specify its section, assume that it appears in this section.
[database]
Configuration options for the database that stores the state of the OpenStack service.
Substitution

The configuration file supports variable substitution. After you set a configuration option, it can be referenced in later configuration values when you precede it with a $, like $OPTION.

The following example uses the values of rabbit_host and rabbit_port to define the value of the rabbit_hosts option, in this case as controller:5672.

# The RabbitMQ broker address where a single node is used.
# (string value)
rabbit_host = controller

# The RabbitMQ broker port where a single node is used.
# (integer value)
rabbit_port = 5672

# RabbitMQ HA cluster host:port pairs. (list value)
rabbit_hosts = $rabbit_host:$rabbit_port

To avoid substitution, use $$; it is replaced by a single $. For example, if your LDAP DNS password is $xkj432, specify it as follows:

ldap_dns_password = $$xkj432

The code uses the Python string.Template.safe_substitute() method to implement variable substitution. For more details on how variable substitution is resolved, see http://docs.python.org/2/library/string.html#template-strings and PEP 292.
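
Both rules can be reproduced directly with Python's string.Template, which is what the substitution described above is built on:

```python
from string import Template

# The config files follow the same rules as Python template strings:
# $name is substituted, and $$ collapses to a literal $.
values = {"rabbit_host": "controller", "rabbit_port": "5672"}

print(Template("$rabbit_host:$rabbit_port").safe_substitute(values))
print(Template("$$xkj432").safe_substitute(values))
```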

Whitespace

To include whitespace in a configuration value, use a quoted string. For example:

ldap_dns_password='a password with spaces'
Define an alternate location for a config file

Most services and the *-manage command-line clients load the configuration file. To define an alternate location for the configuration file, pass the --config-file CONFIG_FILE parameter when you start a service or call a *-manage command.

Changing config at runtime

OpenStack Newton introduces the ability to reload (or ‘mutate’) certain configuration options at runtime without a service restart. The following projects support this:

  • Compute (nova)

Check individual options to discover if they are mutable.

In practice

A common use case is to enable debug logging after a failure. Use the mutable debug config option to do this (provided that log_config_append has not been set). An admin user may perform the following steps:

  1. Log onto the compute node.
  2. Edit the config file (for example, nova.conf) and set debug to True.
  3. Send a SIGHUP signal to the nova process (for example, pkill -HUP nova).

A log message will be written out confirming that the option has been changed. If you use a CMS like Ansible, Chef, or Puppet, we recommend scripting these steps through your CMS.
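
The reload mechanism can be sketched generically in Python: a handler for SIGHUP re-reads configuration instead of restarting the process. This illustrates the pattern only; it is not nova's actual reload code:

```python
import os
import signal

# Minimal sketch of runtime config mutation via SIGHUP.
config = {"debug": False}

def reload_config(signum, frame):
    # A real service would re-parse its config file (e.g. nova.conf)
    # here; we just flip the option to show the handler ran.
    config["debug"] = True

signal.signal(signal.SIGHUP, reload_config)

# Simulate `pkill -HUP <service>` by signalling ourselves.
os.kill(os.getpid(), signal.SIGHUP)
```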

OpenStack is a collection of open source project components that enable setting up cloud services. Each component uses similar configuration techniques and a common framework for INI file options.

This guide pulls together multiple references and configuration options for the following OpenStack components:

  • Bare Metal service
  • Block Storage service
  • Compute service
  • Dashboard
  • Database service
  • Data Processing service
  • Identity service
  • Image service
  • Message service
  • Networking service
  • Object Storage service
  • Orchestration service
  • Shared File Systems service
  • Telemetry service

Also, OpenStack uses many shared services and libraries, such as database connections and RPC messaging, whose configuration options are described in Common configurations.

Common configurations

This chapter describes the common configurations for shared services and libraries.

Authentication and authorization

Only authenticated agents may perform requests to the API.

The preferred authentication system is the Identity service.

Identity service authentication

To authenticate, an agent issues an authentication request to an Identity service endpoint. In response to valid credentials, Identity service responds with an authentication token and a service catalog that contains a list of all services and endpoints available for the given token.

Multiple endpoints may be returned for each OpenStack service according to physical locations and performance/availability characteristics of different deployments.

Normally, Identity service middleware provides the X-Project-Id header based on the authentication token submitted by the service client.

For this to work, clients must specify a valid authentication token in the X-Auth-Token header for each request to each OpenStack service API. The API validates authentication tokens against Identity service before servicing each request.
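
In practice, a client attaches the token it obtained from the Identity service to every request. The token value below is a made-up placeholder:

```python
# Headers a client attaches to a service API request once it holds
# a token from the Identity service. The token value is a made-up
# placeholder.
token = "gAAAAABexampletoken"

headers = {
    "X-Auth-Token": token,  # validated against the Identity service
    "Content-Type": "application/json",
}

# With authentication disabled, the client must identify the project
# itself instead:
noauth_headers = {"X-Project-Id": "my-project-id"}
```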

No authentication

If authentication is not enabled, clients must provide the X-Project-Id header themselves.

Options

Configure the authentication and authorization strategy through these options:

Description of authentication configuration options
Configuration option = Default value Description
[DEFAULT]  
auth_strategy = keystone (String) This determines the strategy to use for authentication: keystone or noauth2. ‘noauth2’ is designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username.
Description of authorization token configuration options
Configuration option = Default value Description
[keystone_authtoken]  
admin_password = None (String) Service user password.
admin_tenant_name = admin (String) Service tenant name.
admin_token = None (String) This option is deprecated and may be removed in a future release. Single shared secret with the Keystone configuration used for bootstrapping a Keystone installation, or otherwise bypassing the normal authentication process. This option should not be used, use admin_user and admin_password instead.
admin_user = None (String) Service username.
auth_admin_prefix = (String) Prefix to prepend at the beginning of the path. Deprecated, use identity_uri.
auth_host = 127.0.0.1 (String) Host providing the admin Identity API endpoint. Deprecated, use identity_uri.
auth_port = 35357 (Integer) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https (String) Protocol of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
auth_uri = None (String) Complete “public” Identity API endpoint. This endpoint should not be an “admin” endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint.
auth_version = None (String) API version of the admin Identity API endpoint.
cache = None (String) Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead.
cafile = None (String) A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs.
certfile = None (String) Required if identity server requires client certificate
check_revocations_for_cached = False (Boolean) If true, the revocation list will be checked for cached tokens. This requires that PKI tokens are configured on the identity server.
delay_auth_decision = False (Boolean) Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components.
enforce_token_bind = permissive (String) Used to control the use and type of token binding. Can be set to: “disabled” to not check token binding. “permissive” (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. “strict” like “permissive” but if the bind type is unknown the token will be rejected. “required” any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens.
hash_algorithms = md5 (List) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance.
http_connect_timeout = None (Integer) Request timeout value for communicating with Identity API server.
http_request_max_retries = 3 (Integer) How many times to try to reconnect when communicating with the Identity API server.
identity_uri = None (String) Complete admin Identity API endpoint. This should specify the unversioned root endpoint e.g. https://localhost:35357/
include_service_catalog = True (Boolean) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) Required if identity server requires client certificate
memcache_pool_conn_get_timeout = 10 (Integer) (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool.
memcache_pool_dead_retry = 300 (Integer) (Optional) Number of seconds memcached server is considered dead before it is tried again.
memcache_pool_maxsize = 10 (Integer) (Optional) Maximum total number of open connections to every memcached server.
memcache_pool_socket_timeout = 3 (Integer) (Optional) Socket timeout in seconds for communicating with a memcached server.
memcache_pool_unused_timeout = 60 (Integer) (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed.
memcache_secret_key = None (String) (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation.
memcache_security_strategy = None (String) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
memcache_use_advanced_pool = False (Boolean) (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x.
memcached_servers = None (List) Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.
region_name = None (String) The region in which the identity server can be found.
revocation_cache_time = 10 (Integer) Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance. Only valid for PKI tokens.
signing_dir = None (String) Directory used to cache files related to PKI tokens.
token_cache_time = 300 (Integer) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
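
A typical [keystone_authtoken] section combining these options might look as follows; all host names and credentials are placeholders to adapt to your deployment:

```ini
[keystone_authtoken]
# Placeholder values - adjust hosts, service user, and credentials
# for your deployment.
auth_uri = http://controller:5000
identity_uri = http://controller:35357/
admin_user = nova
admin_password = NOVA_PASS
admin_tenant_name = service
memcached_servers = controller:11211
token_cache_time = 300
```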

Cache configurations

The cache configuration options allow the deployer to control how an application uses the oslo.cache library.

These options are supported by:

  • Compute service
  • Identity service
  • Message service
  • Networking service
  • Orchestration service

For a complete list of all available cache configuration options, see the oslo.cache configuration options.

Database configurations

You can configure OpenStack services to use any SQLAlchemy-compatible database.

To ensure that the database schema is current, run the following command:

# SERVICE-manage db sync

To configure the connection string for the database, use the configuration option settings documented in the table Description of database configuration options.

Description of database configuration options
Configuration option = Default value Description
[DEFAULT]  
db_driver = SERVICE.db (String) DEPRECATED: The driver to use for database access
[database]  
backend = sqlalchemy (String) The back end to use for the database.
connection = None (String) The SQLAlchemy connection string to use to connect to the database.
connection_debug = 0 (Integer) Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace = False (Boolean) Add Python stack traces to SQL as comment strings.
db_inc_retry_interval = True (Boolean) If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retries = 20 (Integer) Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10 (Integer) If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_retry_interval = 1 (Integer) Seconds between retries of a database transaction.
idle_timeout = 3600 (Integer) Timeout before idle SQL connections are reaped.
max_overflow = 50 (Integer) If set, use this value for max_overflow with SQLAlchemy.
max_pool_size = None (Integer) Maximum number of SQL connections to keep open in a pool.
max_retries = 10 (Integer) Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
min_pool_size = 1 (Integer) Minimum number of SQL connections to keep open in a pool.
mysql_sql_mode = TRADITIONAL (String) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode=
pool_timeout = None (Integer) If set, use this value for pool_timeout with SQLAlchemy.
retry_interval = 10 (Integer) Interval between retries of opening a SQL connection.
slave_connection = None (String) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite (String) The file name to use with SQLite.
sqlite_synchronous = True (Boolean) If True, SQLite uses synchronous mode.
use_db_reconnect = False (Boolean) Enable the experimental use of database reconnect on connection lost.
use_tpool = False (Boolean) Enable the experimental use of thread pooling for all DB API calls
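
For example, a minimal [database] section for a service might look like this; the connection string and credentials are placeholders:

```ini
[database]
# Placeholder connection string - substitute your own credentials.
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
max_pool_size = 10
max_retries = 10
idle_timeout = 3600
```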

Logging configurations

You can configure where the service logs events, the level of logging, and log formats.

To customize logging for the service, use the configuration option settings documented in the table Description of common logging configuration options.

Description of common logging configuration options
Configuration option = Default value Description
[DEFAULT]  
debug = False (Boolean) If set to true, the logging level will be set to DEBUG instead of the default INFO level.
default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN, taskflow=WARN, keystoneauth=WARN, oslo.cache=INFO, dogpile.core.dogpile=INFO (List) List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set.
fatal_deprecations = False (Boolean) Enables or disables fatal status of deprecations.
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
instance_format = "[instance: %(uuid)s] " (String) The format for an instance that is passed with the log message.
instance_uuid_format = "[instance: %(uuid)s] " (String) The format for an instance UUID that is passed with the log message.
log_config_append = None (String) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, logging_context_format_string).
log_date_format = %Y-%m-%d %H:%M:%S (String) Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
log_dir = None (String) (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set.
log_file = None (String) (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s (String) Format string to use for log messages with context.
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d (String) Additional data to append to log message when logging level for the message is DEBUG.
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s (String) Format string to use for log messages when context is undefined.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s (String) Prefix each line of exception output with this format.
logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s (String) Defines the format string for %(user_identity)s that is used in logging_context_format_string.
publish_errors = False (Boolean) Enables or disables publication of error events.
syslog_log_facility = LOG_USER (String) Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_stderr = True (Boolean) Log output to standard error. This option is ignored if log_config_append is set.
use_syslog = False (Boolean) Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
verbose = True (Boolean) DEPRECATED: If set to false, the logging level will be set to WARNING instead of the default INFO level.
watch_log_file = False (Boolean) Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set.
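
These format strings are ordinary Python logging format strings. The sketch below wires the default format (minus the OpenStack-specific %(instance)s placeholder) into the standard logging module to show the resulting output shape:

```python
import logging
import sys

# logging_default_format_string from the table above, without the
# OpenStack-specific %(instance)s placeholder.
fmt = ("%(asctime)s.%(msecs)03d %(process)d %(levelname)s "
       "%(name)s [-] %(message)s")
datefmt = "%Y-%m-%d %H:%M:%S"  # the log_date_format default

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(fmt, datefmt=datefmt))

log = logging.getLogger("nova.compute")
log.addHandler(handler)
log.setLevel(logging.DEBUG)  # the effect of setting debug = true

log.debug("instance spawned")
```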

Policy configurations

The policy configuration options allow the deployer to control where the policy files are located and the default rule to apply when a requested rule is not found.

Description of policy configuration options
Configuration option = Default value Description
[oslo_policy]  
policy_default_rule = default (String) Default rule. Enforced when a requested rule is not found.
policy_dirs = ['policy.d'] (Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
policy_file = policy.json (String) The JSON file that defines policies.
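
A minimal policy.json might look as follows; the rule names here are illustrative only, as each service defines its own set of policy targets:

```json
{
    "context_is_admin": "role:admin",
    "default": "rule:context_is_admin",
    "compute:get_all": ""
}
```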

RPC messaging configurations

OpenStack services use Advanced Message Queuing Protocol (AMQP), an open standard for messaging middleware. This messaging middleware enables the OpenStack services that run on multiple servers to talk to each other. OpenStack Oslo RPC supports two implementations of AMQP: RabbitMQ and ZeroMQ.

Configure messaging

Use these options to configure the RPC messaging driver.

Description of AMQP configuration options
Configuration option = Default value Description
[DEFAULT]  
control_exchange = openstack (String) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
default_publisher_id = None (String) Default publisher_id for outgoing notifications
transport_url = None (String) A URL representing the messaging driver to use and its full configuration. If not set, we fall back to the rpc_backend option and driver specific configuration.
Description of RPC configuration options
Configuration option = Default value Description
[DEFAULT]  
notification_format = both (String) Specifies which notification format shall be used by nova.
rpc_backend = rabbit (String) The messaging driver to use, defaults to rabbit. Other drivers include amqp and zmq.
rpc_cast_timeout = -1 (Integer) Seconds to wait before a cast expires (TTL). The default value of -1 specifies an infinite linger period. The value of 0 specifies no linger period. Pending messages shall be discarded immediately when the socket is closed. Only supported by impl_zmq.
rpc_conn_pool_size = 30 (Integer) Size of RPC connection pool.
rpc_poll_timeout = 1 (Integer) The default number of seconds that poll should wait. Poll raises timeout exception when timeout expired.
rpc_response_timeout = 60 (Integer) Seconds to wait for a response from a call.
[cells]  
rpc_driver_queue_base = cells.intercell (String) RPC driver queue base. When sending a message to another cell by JSON-ifying the message and making an RPC cast to 'process_message', a base queue is used. This option defines the base queue name to be used when communicating between cells; various topics by message type are appended to it. Services which consume this: nova-cells. Related options: None
[oslo_concurrency]  
disable_process_locking = False (Boolean) Enables or disables inter-process locks.
lock_path = None (String) Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
[oslo_messaging]  
event_stream_topic = neutron_lbaas_event (String) topic name for receiving events from a queue
[oslo_messaging_amqp]  
allow_insecure_clients = False (Boolean) Accept clients using either SSL or plain TCP
broadcast_prefix = broadcast (String) address prefix used when broadcasting to all servers
container_name = None (String) Name for the AMQP container
group_request_prefix = unicast (String) address prefix when sending to any server in group
idle_timeout = 0 (Integer) Timeout for inactive connections (in seconds)
password = (String) Password for message broker authentication
sasl_config_dir = (String) Path to directory that contains the SASL configuration
sasl_config_name = (String) Name of configuration file (without .conf suffix)
sasl_mechanisms = (String) Space separated list of acceptable SASL mechanisms
server_request_prefix = exclusive (String) address prefix used when sending to a specific server
ssl_ca_file = (String) CA certificate PEM file to verify server certificate
ssl_cert_file = (String) Identifying certificate PEM file to present to clients
ssl_key_file = (String) Private key PEM file used to sign cert_file certificate
ssl_key_password = None (String) Password for decrypting ssl_key_file (if encrypted)
trace = False (Boolean) Debug: dump AMQP frames to stdout
username = (String) User name for message broker authentication
[oslo_messaging_notifications]  
driver = [] (Multi-valued) The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop
topics = notifications (List) AMQP topic used for OpenStack notifications.
transport_url = None (String) A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC.
[upgrade_levels]  
baseapi = None (String) Set a version cap for messages sent to the base api in any service
Configure RabbitMQ

OpenStack Oslo RPC uses RabbitMQ by default. The rpc_backend option is not required as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to rabbit:

rpc_backend = rabbit

You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to nova.openstack.common.notifier.rpc_notifier. The default value for sending usage data is sixty seconds plus a random number of seconds from zero to sixty.
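
Putting the options above together, a minimal RabbitMQ messaging configuration might look like this; the host name is a placeholder:

```ini
[DEFAULT]
rpc_backend = rabbit
notification_driver = nova.openstack.common.notifier.rpc_notifier

[oslo_messaging_rabbit]
# Placeholder broker address.
rabbit_host = controller
rabbit_ha_queues = false
heartbeat_timeout_threshold = 60
```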

Use the options described in the table below to configure the RabbitMQ message system.

Description of RabbitMQ configuration options
Configuration option = Default value Description
[oslo_messaging_rabbit]  
amqp_auto_delete = False (Boolean) Auto-delete queues in AMQP.
amqp_durable_queues = False (Boolean) Use durable queues in AMQP.
channel_max = None (Integer) Maximum number of channels to allow
default_notification_exchange = ${control_exchange}_notification (String) Exchange name for sending notifications
default_notification_retry_attempts = -1 (Integer) Number of reconnection retries in case of a connectivity problem while sending a notification; -1 means retry indefinitely.
default_rpc_exchange = ${control_exchange}_rpc (String) Exchange name for sending RPC messages
default_rpc_retry_attempts = -1 (Integer) Number of reconnection retries in case of a connectivity problem while sending an RPC message; -1 means retry indefinitely. If the actual number of retry attempts is not 0, the RPC request could be processed more than once.
fake_rabbit = False (Boolean) Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
frame_max = None (Integer) The maximum byte size for an AMQP frame
heartbeat_interval = 1 (Integer) How often to send heartbeats for consumer’s connections
heartbeat_rate = 2 (Integer) How many times during the heartbeat_timeout_threshold we check the heartbeat.
heartbeat_timeout_threshold = 60 (Integer) Number of seconds after which the Rabbit broker is considered down if heartbeat’s keep-alive fails (0 disable the heartbeat). EXPERIMENTAL
host_connection_reconnect_delay = 0.25 (Floating point) Delay before reconnecting to a host that has had a connection error.
kombu_compression = None (String) EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression is not used. This option may not be available in future versions.
kombu_failover_strategy = round-robin (String) Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
kombu_missing_consumer_retry_timeout = 60 (Integer) How long to wait for a missing client before abandoning sending it its replies. This value should not be longer than rpc_response_timeout.
kombu_reconnect_delay = 1.0 (Floating point) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs = (String) SSL certification authority file (valid only if SSL enabled).
kombu_ssl_certfile = (String) SSL cert file (valid only if SSL enabled).
kombu_ssl_keyfile = (String) SSL key file (valid only if SSL enabled).
kombu_ssl_version = (String) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
notification_listener_prefetch_count = 100 (Integer) Maximum number of unacknowledged messages that RabbitMQ can send to the notification listener.
notification_persistence = False (Boolean) Persist notification messages.
notification_retry_delay = 0.25 (Floating point) Reconnecting retry delay in case of connectivity problem during sending notification message
pool_max_overflow = 0 (Integer) Maximum number of connections to create above pool_max_size.
pool_max_size = 10 (Integer) Maximum number of connections to keep queued.
pool_recycle = 600 (Integer) Lifetime of a connection (since creation) in seconds or None for no recycling. Expired connections are closed on acquire.
pool_stale = 60 (Integer) Threshold at which inactive (since release) connections are considered stale in seconds or None for no staleness. Stale connections are closed on acquire.
pool_timeout = 30 (Integer) Default number of seconds to wait for a connection to become available
rabbit_ha_queues = False (Boolean) Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq.).*' '{"ha-mode": "all"}'
rabbit_host = localhost (String) The RabbitMQ broker address where a single node is used.
rabbit_hosts = $rabbit_host:$rabbit_port (List) RabbitMQ HA cluster host:port pairs.
rabbit_interval_max = 30 (Integer) Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
rabbit_login_method = AMQPLAIN (String) The RabbitMQ login method.
rabbit_max_retries = 0 (Integer) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest (String) The RabbitMQ password.
rabbit_port = 5672 (Port number) The RabbitMQ broker port where a single node is used.
rabbit_qos_prefetch_count = 0 (Integer) Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.
rabbit_retry_backoff = 2 (Integer) How long to backoff for between retries when connecting to RabbitMQ.
rabbit_retry_interval = 1 (Integer) How frequently to retry connecting with RabbitMQ.
rabbit_transient_queues_ttl = 1800 (Integer) Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues.
rabbit_use_ssl = False (Boolean) Connect over SSL for RabbitMQ.
rabbit_userid = guest (String) The RabbitMQ userid.
rabbit_virtual_host = / (String) The RabbitMQ virtual host.
rpc_listener_prefetch_count = 100 (Integer) Maximum number of unacknowledged messages that RabbitMQ can send to the RPC listener.
rpc_queue_expiration = 60 (Integer) Time to live for rpc queues without consumers in seconds.
rpc_reply_exchange = ${control_exchange}_rpc_reply (String) Exchange name for receiving RPC replies
rpc_reply_listener_prefetch_count = 100 (Integer) Maximum number of unacknowledged messages that RabbitMQ can send to the RPC reply listener.
rpc_reply_retry_attempts = -1 (Integer) Reconnecting retry count in case of connectivity problem during sending reply. -1 means infinite retry during rpc_timeout
rpc_reply_retry_delay = 0.25 (Floating point) Reconnecting retry delay in case of connectivity problem during sending reply.
rpc_retry_delay = 0.25 (Floating point) Reconnecting retry delay in case of connectivity problem during sending RPC message
socket_timeout = 0.25 (Floating point) Set socket timeout in seconds for connection’s socket
ssl = None (Boolean) Enable SSL
ssl_options = None (Dict) Arguments passed to ssl.wrap_socket
tcp_user_timeout = 0.25 (Floating point) Set TCP_USER_TIMEOUT in seconds for connection’s socket
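Taken together, a minimal RabbitMQ driver configuration might look like the fragment below. This is an illustrative sketch only: the hostname and credentials are placeholders, and the section name [oslo_messaging_rabbit] is assumed here; do not leave the default guest credentials in place in production.

```ini
[oslo_messaging_rabbit]
# Single-node broker address and credentials (placeholders)
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
# Mirror queues across a RabbitMQ cluster
rabbit_ha_queues = True
# Consider the broker down after 60 seconds of failed keep-alives
heartbeat_timeout_threshold = 60
```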
Configure ZeroMQ

Use these options to configure the ZeroMQ messaging system for OpenStack Oslo RPC. ZeroMQ is not the default messaging system, so you must enable it by setting the rpc_backend option.

Description of ZeroMQ configuration options
Configuration option = Default value Description
[DEFAULT]  
rpc_zmq_bind_address = * (String) ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The “host” option should point or resolve to this address.
rpc_zmq_bind_port_retries = 100 (Integer) Number of retries to find free port number before fail with ZMQBindError.
rpc_zmq_concurrency = eventlet (String) Type of concurrency used. Either “native” or “eventlet”
rpc_zmq_contexts = 1 (Integer) Number of ZeroMQ contexts, defaults to 1.
rpc_zmq_host = localhost (String) Name of this node. Must be a valid hostname, FQDN, or IP address. Must match “host” option, if running Nova.
rpc_zmq_ipc_dir = /var/run/openstack (String) Directory for holding IPC sockets.
rpc_zmq_matchmaker = redis (String) MatchMaker driver.
rpc_zmq_max_port = 65536 (Integer) Maximum port number for the random ports range.
rpc_zmq_min_port = 49152 (Port number) Minimum port number for the random ports range.
rpc_zmq_topic_backlog = None (Integer) Maximum number of ingress messages to locally buffer per topic. Default is unlimited.
use_pub_sub = True (Boolean) Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
zmq_target_expire = 120 (Integer) Expiration timeout in seconds of a name service record about an existing target (< 0 means no timeout).
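As a sketch, enabling ZeroMQ with the Redis matchmaker might look like the following in a service's configuration file. The node name is a placeholder, and rpc_backend = zmq is assumed as the backend value; verify the exact value against your release.

```ini
[DEFAULT]
# Switch the RPC backend from the default messaging system to ZeroMQ
rpc_backend = zmq
# Name of this node; must be resolvable by peers (placeholder)
rpc_zmq_host = node-1.example.com
rpc_zmq_matchmaker = redis
rpc_zmq_bind_address = *
```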

Cross-origin resource sharing

Cross-Origin Resource Sharing (CORS) is a mechanism that allows code running in a browser (JavaScript, for example) to make requests to a domain other than the one it originated from. OpenStack services support CORS requests.

For more information, see cross-project features in OpenStack Administrator Guide, CORS in Dashboard, and CORS in Object Storage service.

For a complete list of all available CORS configuration options, see CORS configuration options.

Application Catalog service

Application Catalog API configuration

Configuration options

The Application Catalog service can be configured by changing the following options:

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_role = admin (String) Role used to identify an authenticated user as administrator.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
secure_proxy_ssl_header = X-Forwarded-Proto (String) The HTTP header that will be used to determine what the original request protocol scheme was, even if it was removed by an SSL terminator proxy.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_policy]  
policy_default_rule = default (String) Default rule. Enforced when a requested rule is not found.
policy_dirs = ['policy.d'] (Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
policy_file = policy.json (String) The JSON file that defines policies.
[paste_deploy]  
config_file = None (String) Path to Paste config file
flavor = None (String) Paste flavor
Description of CFAPI configuration options
Configuration option = Default value Description
[cfapi]  
auth_url = localhost:5000 (String) Authentication URL
bind_host = localhost (String) Host for service broker
bind_port = 8083 (String) Port for service broker
packages_service = murano (String) Package service which should be used by service broker
project_domain_name = default (String) Domain name of the project
tenant = admin (String) Project for service broker
user_domain_name = default (String) Domain name of the user
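For example, a murano.conf fragment combining the API and service-broker options above might look like this. All values are illustrative placeholders, not recommended settings.

```ini
[DEFAULT]
admin_role = admin
# Increase if Keystone v3 tokens with large service catalogs are rejected
max_header_line = 16384

[oslo_middleware]
# Set to True when the API runs behind a proxy that sets forwarding headers
enable_proxy_headers_parsing = True

[cfapi]
# Placeholder endpoint for the service broker
auth_url = controller:5000
bind_host = 0.0.0.0
bind_port = 8083
packages_service = murano
```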

Additional configuration options for Application Catalog service

These options can also be set in the murano.conf file.

Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
backlog = 4096 (Integer) Number of backlog requests to configure the socket with
bind_host = 0.0.0.0 (String) Address to bind the Murano API server to.
bind_port = 8082 (Port number) Port to bind the Murano API server to.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
file_server = (String) Set a file server.
home_region = None (String) Default region name used to get services endpoints.
metadata_dir = ./meta (String) Metadata dir
publish_errors = False (Boolean) Enables or disables publication of error events.
tcp_keepidle = 600 (Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
use_router_proxy = True (Boolean) Use ROUTER remote proxy.
[murano]  
api_limit_max = 100 (Integer) Maximum number of packages to be returned in a single pagination request
api_workers = None (Integer) Number of API workers
cacert = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Murano API used by Murano engine.
cert_file = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Murano used by Murano engine.
enabled_plugins = None (List) List of enabled Extension Plugins. Remove or leave commented to enable all installed plugins.
endpoint_type = publicURL (String) Murano endpoint type used by Murano engine.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers used by Murano engine.
key_file = None (String) (SSL/SSH) Private key file name to communicate with Murano API used by Murano engine.
limit_param_default = 20 (Integer) Default value for package pagination in API.
package_size_limit = 5 (Integer) Maximum application package size, in MB
url = None (String) Optional murano url in format like http://0.0.0.0:8082 used by Murano engine
[stats]  
period = 5 (Integer) Statistics collection interval in minutes. Default value is 5 minutes.
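A minimal sketch of these common options in murano.conf follows; the URL is a placeholder and the values shown are the documented defaults, not tuning advice.

```ini
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 8082
executor_thread_pool_size = 64

[murano]
# Optional explicit API endpoint for the engine (placeholder)
url = http://controller:8082
limit_param_default = 20
package_size_limit = 5

[stats]
# Statistics collection interval, in minutes
period = 5
```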
Description of engine configuration options
Configuration option = Default value Description
[engine]  
agent_timeout = 3600 (Integer) Time for waiting for a response from murano agent during the deployment
class_configs = /etc/murano/class-configs (String) Path to class configuration files
disable_murano_agent = False (Boolean) Disallow the use of murano-agent
enable_model_policy_enforcer = False (Boolean) Enable model policy enforcer using Congress
enable_packages_cache = True (Boolean) Enables murano-engine to persist on disk packages downloaded during deployments. The packages would be re-used for subsequent deployments.
engine_workers = None (Integer) Number of engine workers
load_packages_from = (List) List of directories to load local packages from. If not provided, packages will be loaded only from the API
packages_cache = None (String) Location (directory) for Murano package cache.
packages_service = murano (String) The service to store murano packages: murano (stands for legacy behavior using murano-api) or glance (stands for glance-glare artifact service)
use_trusts = True (Boolean) Create resources using trust token rather than user’s token
Description of glare configuration options
Configuration option = Default value Description
[glare]  
ca_file = None (String) (SSL) Tells Murano to use the specified certificate file to verify the peer running Glare API.
cert_file = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Glare.
endpoint_type = publicURL (String) Glare endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Glare API.
key_file = None (String) (SSL/SSH) Private key file name to communicate with Glare API.
url = None (String) Optional glare url in format like http://0.0.0.0:9494 used by Glare API
Description of Orchestration service configuration options
Configuration option = Default value Description
[heat]  
ca_file = None (String) (SSL) Tells Murano to use the specified certificate file to verify the peer running Heat API.
cert_file = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Heat.
endpoint_type = publicURL (String) Heat endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Heat API.
key_file = None (String) (SSL/SSH) Private key file name to communicate with Heat API.
stack_tags = murano (List) List of tags to be assigned to heat stacks created during environment deployment.
url = None (String) Optional heat endpoint override
Description of Workflow service configuration options
Configuration option = Default value Description
[mistral]  
ca_cert = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Mistral.
endpoint_type = publicURL (String) Mistral endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Mistral.
service_type = workflowv2 (String) Mistral service type.
url = None (String) Optional mistral endpoint override
Description of Networking service configuration options
Configuration option = Default value Description
[networking]  
create_router = True (Boolean) This option will create a router when one with “router_name” does not exist
default_dns = (List) List of default DNS nameservers to be assigned to created Networks
driver = None (String) Network driver to use. Options are neutron or nova. If not provided, the driver will be detected.
env_ip_template = 10.0.0.0 (String) Template IP address for generating environment subnet cidrs
external_network = ext-net (String) ID or name of the external network for routers to connect to
max_environments = 250 (Integer) Maximum number of environments that use a single router per tenant
max_hosts = 250 (Integer) Maximum number of VMs per environment
network_config_file = netconfig.yaml (String) If provided, networking configuration will be taken from this file
router_name = murano-default-router (String) Name of the router that is going to be used to join all networks created by Murano
[neutron]  
ca_cert = None (String) (SSL) Tells Murano to use the specified client certificate file when communicating with Neutron.
endpoint_type = publicURL (String) Neutron endpoint type.
insecure = False (Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Neutron API.
url = None (String) Optional neutron endpoint override
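An illustrative [networking]/[neutron] fragment follows; the external network name is a placeholder for whatever your deployment calls its external network.

```ini
[networking]
# Name or ID of the external network for routers (placeholder)
external_network = ext-net
router_name = murano-default-router
# Nameservers assigned to created networks (placeholder)
default_dns = 8.8.8.8
max_hosts = 250

[neutron]
endpoint_type = publicURL
insecure = False
```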
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
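Since the host, port, and password options above are deprecated in favor of [DEFAULT]/transport_url, a sketch using only the non-deprecated timeout options might look like this (values are the documented defaults):

```ini
[matchmaker_redis]
# All timeouts below are in milliseconds
socket_timeout = 10000
wait_timeout = 2000
check_timeout = 20000
```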

New, updated, and deprecated options in Newton for Application Catalog service

New options
Option = default value (Type) Help string
[cfapi] packages_service = murano (StrOpt) Package service which should be used by service broker
[engine] engine_workers = None (IntOpt) Number of engine workers
[murano] api_workers = None (IntOpt) Number of API workers
[networking] driver = None (StrOpt) Network driver to use. Options are neutron or nova. If not provided, the driver will be detected.
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None
[engine] workers [engine] engine_workers

This chapter describes the Application Catalog service configuration options.

Note

The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.

Bare Metal service

Bare Metal API configuration

Configuration options

The following options allow configuration of the APIs that Bare Metal service supports.

Description of API configuration options
Configuration option = Default value Description
[api]  
api_workers = None (Integer) Number of workers for the OpenStack Ironic API service. The default is equal to the number of CPUs available if that can be determined; otherwise, a default of 1 worker is used.
enable_ssl_api = False (Boolean) Enable the integrated stand-alone API to service requests via HTTPS instead of HTTP. If there is a front-end service performing HTTPS offloading for the service, this option should be False; note that you will then want to change the public API endpoint to represent the SSL termination URL with the ‘public_endpoint’ option.
host_ip = 0.0.0.0 (String) The IP address on which ironic-api listens.
max_limit = 1000 (Integer) The maximum number of items returned in a single response from a collection resource.
port = 6385 (Port number) The TCP port on which ironic-api listens.
public_endpoint = None (String) Public URL to use when building the links to the API resources (for example, “https://ironic.rocks:6384”). If None the links will be built using the request’s host URL. If the API is operating behind a proxy, you will want to change this to represent the proxy’s URL. Defaults to None.
ramdisk_heartbeat_timeout = 300 (Integer) Maximum interval (in seconds) for agent heartbeats.
restrict_lookup = True (Boolean) Whether to restrict the lookup API to only nodes in certain states.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_versionedobjects]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
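Putting the API options above together, an ironic.conf fragment for running behind an SSL-terminating proxy might look like the following sketch; the endpoint URL is a placeholder.

```ini
[api]
host_ip = 0.0.0.0
port = 6385
max_limit = 1000
# URL clients should use when the API sits behind an SSL terminator (placeholder)
public_endpoint = https://ironic.example.com:6385

[oslo_middleware]
# Parse forwarding headers set by the proxy
enable_proxy_headers_parsing = True
```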

Additional configuration options for Bare Metal service

The following tables provide a comprehensive list of the Bare Metal service configuration options.

Description of agent configuration options
Configuration option = Default value Description
[agent]  
agent_api_version = v1 (String) API version to use for communicating with the ramdisk agent.
deploy_logs_collect = on_failure (String) Whether Ironic should collect the deployment logs on deployment failure (on_failure), always or never.
deploy_logs_local_path = /var/log/ironic/deploy (String) The path to the directory where the logs should be stored, used when the deploy_logs_storage_backend is configured to “local”.
deploy_logs_storage_backend = local (String) The name of the storage backend where the logs will be stored.
deploy_logs_swift_container = ironic_deploy_logs_container (String) The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to “swift”.
deploy_logs_swift_days_to_expire = 30 (Integer) Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to “swift”.
manage_agent_boot = True (Boolean) Whether Ironic will manage booting of the agent ramdisk. If set to False, you will need to configure your mechanism to allow booting the agent ramdisk.
memory_consumed_by_agent = 0 (Integer) The memory size in MiB consumed by agent when it is booted on a bare metal node. This is used for checking if the image can be downloaded and deployed on the bare metal node after booting agent ramdisk. This may be set according to the memory consumed by the agent ramdisk image.
post_deploy_get_power_state_retries = 6 (Integer) Number of times to retry getting power state to check if bare metal node has been powered off after a soft power off.
post_deploy_get_power_state_retry_interval = 5 (Integer) Amount of time (in seconds) to wait between polling power state after trigger soft poweroff.
stream_raw_images = True (Boolean) Whether the agent ramdisk should stream raw images directly onto the disk or not. By streaming raw images directly onto the disk the agent ramdisk will not spend time copying the image to a tmpfs partition (therefore consuming less memory) prior to writing it to the disk. Unless the disk where the image will be copied to is really slow, this option should be set to True. Defaults to True.
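As an example of the deploy-log options above, shipping logs to Swift on failure might be configured as follows (a sketch using the documented defaults where available):

```ini
[agent]
# Collect ramdisk logs only when a deployment fails
deploy_logs_collect = on_failure
deploy_logs_storage_backend = swift
deploy_logs_swift_container = ironic_deploy_logs_container
# Expire stored log objects after 30 days
deploy_logs_swift_days_to_expire = 30
```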
Description of AMT configuration options
Configuration option = Default value Description
[amt]  
action_wait = 10 (Integer) Amount of time (in seconds) to wait, before retrying an AMT operation
awake_interval = 60 (Integer) Time interval (in seconds) for successive awake call to AMT interface, this depends on the IdleTimeout setting on AMT interface. AMT Interface will go to sleep after 60 seconds of inactivity by default. IdleTimeout=0 means AMT will not go to sleep at all. Setting awake_interval=0 will disable awake call.
max_attempts = 3 (Integer) Maximum number of times to attempt an AMT operation, before failing
protocol = http (String) Protocol used for AMT endpoint
Description of audit configuration options
Configuration option = Default value Description
[audit]  
audit_map_file = /etc/ironic/ironic_api_audit_map.conf (String) Path to audit map file for ironic-api service. Used only when API audit is enabled.
enabled = False (Boolean) Enable auditing of API requests (for ironic-api service).
ignore_req_list = None (String) Comma separated list of Ironic REST API HTTP methods to be ignored during audit. For example: auditing will not be done on any GET or POST requests if this is set to “GET,POST”. It is used only when API audit is enabled.
namespace = openstack (String) Namespace prefix for generated IDs
[audit_middleware_notifications]  
driver = None (String) The Driver to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop. If not specified, then value from oslo_messaging_notifications conf section is used.
topics = None (List) List of AMQP topics used for OpenStack notifications. If not specified, then value from oslo_messaging_notifications conf section is used.
transport_url = None (String) A URL representing messaging driver to use for notification. If not specified, we fall back to the same configuration used for RPC.
Description of Cisco UCS configuration options
Configuration option = Default value Description
[cimc]  
action_interval = 10 (Integer) Amount of time in seconds to wait in between power operations
max_retry = 6 (Integer) Number of times a power operation needs to be retried
[cisco_ucs]  
action_interval = 5 (Integer) Amount of time in seconds to wait in between power operations
max_retry = 6 (Integer) Number of times a power operation needs to be retried
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
bindir = /usr/local/bin (String) Directory where ironic binaries are installed.
debug_tracebacks_in_api = False (Boolean) Return server tracebacks in the API response for any error responses. WARNING: this is insecure and should not be used in a production environment.
default_network_interface = None (String) Default network interface to be used for nodes that do not have network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint.
enabled_drivers = pxe_ipmitool (List) Specify the list of drivers to load during service initialization. Missing drivers, or drivers which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented drivers. A complete list of drivers present on your system may be found by enumerating the “ironic.drivers” entrypoint. An example may be found in the developer documentation online.
enabled_network_interfaces = flat, noop (List) Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint. This value must be the same on all ironic-conductor and ironic-api services, because it is used by ironic-api service to validate a new or updated node’s network_interface value.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
fatal_exception_format_errors = False (Boolean) Used if there is a formatting error when generating an exception message (a programming error). If True, raise an exception; if False, use the unformatted message.
force_raw_images = True (Boolean) If True, convert backing images to “raw” disk image format.
grub_config_template = $pybasedir/common/grub_conf.template (String) Template file for grub configuration file.
hash_distribution_replicas = 1 (Integer) [Experimental Feature] Number of hosts to map onto each hash partition. Setting this to more than one will cause additional conductor services to prepare deployment environments and potentially allow the Ironic cluster to recover more quickly if a conductor instance is terminated.
hash_partition_exponent = 5 (Integer) Exponent to determine number of hash partitions to use when distributing load across conductors. Larger values will result in more even distribution of load and less load when rebalancing the ring, but more memory usage. Number of partitions per conductor is (2^hash_partition_exponent). This determines the granularity of rebalancing: given 10 hosts, and an exponent of 2, there are 40 partitions in the ring. A few thousand partitions should make rebalancing smooth in most cases. The default is suitable for up to a few hundred conductors. Too many partitions has a CPU impact.
hash_ring_reset_interval = 180 (Integer) Interval (in seconds) between hash ring resets.
host = localhost (String) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address.
isolinux_bin = /usr/lib/syslinux/isolinux.bin (String) Path to isolinux binary file.
isolinux_config_template = $pybasedir/common/isolinux_config.template (String) Template file for isolinux configuration file.
my_ip = 127.0.0.1 (String) IP address of this host. If unset, will determine the IP programmatically. If unable to do so, will use “127.0.0.1”.
notification_level = None (String) Specifies the minimum level for which to send notifications. If not set, no notifications will be sent. The default is for this option to be unset.
parallel_image_downloads = False (Boolean) Run image downloads and raw format conversions in parallel.
pybasedir = /usr/lib/python/site-packages/ironic/ironic (String) Directory where the ironic python module is installed.
rootwrap_config = /etc/ironic/rootwrap.conf (String) Path to the rootwrap configuration file to use for running commands as root.
state_path = $pybasedir (String) Top-level directory for maintaining ironic’s state.
tempdir = /tmp (String) Temporary working directory, default is Python temp dir.
[ironic_lib]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal.
root_helper = sudo ironic-rootwrap /etc/ironic/rootwrap.conf (String) Command that is prefixed to commands that are run as root. If not specified, no commands are run as root.
Description of conductor configuration options
Configuration option = Default value Description
[conductor]  
api_url = None (String) URL of Ironic API service. If not set ironic can get the current value from the keystone service catalog.
automated_clean = True (Boolean) Enables or disables automated cleaning. Automated cleaning is a configurable set of steps, such as erasing disk drives, that are performed on the node to ensure it is in a baseline state and ready to be deployed to. This is done after instance deletion as well as during the transition from a “manageable” to “available” state. When enabled, the particular steps performed to clean a node depend on which driver that node is managed by; see the individual driver’s documentation for details. NOTE: The introduction of the cleaning operation causes instance deletion to take significantly longer. In an environment where all tenants are trusted (e.g., because there is only one tenant), this option could be safely disabled.
check_provision_state_interval = 60 (Integer) Interval between checks of provision timeouts, in seconds.
clean_callback_timeout = 1800 (Integer) Timeout (seconds) to wait for a callback from the ramdisk doing the cleaning. If the timeout is reached the node will be put in the “clean failed” provision state. Set to 0 to disable timeout.
configdrive_swift_container = ironic_configdrive_container (String) Name of the Swift container to store config drive data. Used when configdrive_use_swift is True.
configdrive_use_swift = False (Boolean) Whether to upload the config drive to Swift.
deploy_callback_timeout = 1800 (Integer) Timeout (seconds) to wait for a callback from a deploy ramdisk. Set to 0 to disable timeout.
force_power_state_during_sync = True (Boolean) During sync_power_state, should the hardware power state be set to the state recorded in the database (True) or should the database be updated based on the hardware state (False).
heartbeat_interval = 10 (Integer) Seconds between conductor heart beats.
heartbeat_timeout = 60 (Integer) Maximum time (in seconds) since the last check-in of a conductor. A conductor is considered inactive when this time has been exceeded.
inspect_timeout = 1800 (Integer) Timeout (seconds) for waiting for node inspection. 0 - unlimited.
node_locked_retry_attempts = 3 (Integer) Number of attempts to grab a node lock.
node_locked_retry_interval = 1 (Integer) Seconds to sleep between node lock attempts.
periodic_max_workers = 8 (Integer) Maximum number of worker threads that can be started simultaneously by a periodic task. Should be less than RPC thread pool size.
power_state_sync_max_retries = 3 (Integer) During sync_power_state failures, limit the number of times Ironic should try syncing the hardware node power state with the power state recorded in the database.
send_sensor_data = False (Boolean) Enable sending sensor data messages via the notification bus.
send_sensor_data_interval = 600 (Integer) Seconds between the conductor sending sensor data messages to Ceilometer via the notification bus.
send_sensor_data_types = ALL (List) List of comma-separated meter types which need to be sent to Ceilometer. The default value, “ALL”, is a special value meaning send all the sensor data.
sync_local_state_interval = 180 (Integer) When conductors join or leave the cluster, existing conductors may need to update any persistent local state as nodes are moved around the cluster. This option controls how often, in seconds, each conductor will check for nodes that it should “take over”. Set it to a negative value to disable the check entirely.
sync_power_state_interval = 60 (Integer) Interval between syncing the node power state to the database, in seconds.
workers_pool_size = 100 (Integer) The size of the workers greenthread pool. Note that 2 threads will be reserved by the conductor itself for handling heart beats and periodic tasks.
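As a sketch, a [conductor] section combining the cleaning and sensor-data options above might look like the following. The values shown are illustrative, not recommendations:

```ini
[conductor]
# Keep automated cleaning on; disable only in environments
# where all tenants are trusted.
automated_clean = True
# Fail cleaning if the ramdisk does not call back within 30 minutes.
clean_callback_timeout = 1800
# Publish sensor data to the notification bus every 10 minutes.
send_sensor_data = True
send_sensor_data_interval = 600
send_sensor_data_types = ALL
```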
Description of console configuration options
Configuration option = Default value Description
[console]  
subprocess_checking_interval = 1 (Integer) Time interval (in seconds) for checking the status of console subprocess.
subprocess_timeout = 10 (Integer) Time (in seconds) to wait for the console subprocess to start.
terminal = shellinaboxd (String) Path to serial console terminal program. Used only by Shell In A Box console.
terminal_cert_dir = None (String) Directory containing the terminal SSL cert (PEM) for serial console access. Used only by Shell In A Box console.
terminal_pid_dir = None (String) Directory for holding terminal pid files. If not specified, the temporary directory will be used.
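For example, a minimal Shell In A Box setup using the options above might be configured as follows (the certificate directory is a placeholder for illustration; shellinaboxd must be installed on the ironic-conductor host):

```ini
[console]
# Serial console terminal program (Shell In A Box only).
terminal = shellinaboxd
# Optional: serve the console over SSL using PEM certs from this directory.
#terminal_cert_dir = /etc/ironic/shellinabox-certs
# Give the console subprocess up to 10 seconds to start.
subprocess_timeout = 10
```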
Description of DRAC configuration options
Configuration option = Default value Description
[drac]  
query_raid_config_job_status_interval = 120 (Integer) Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not.
Description of logging configuration options
Configuration option = Default value Description
[DEFAULT]  
pecan_debug = False (Boolean) Enable pecan debug mode. WARNING: this is insecure and should not be used in a production environment.
Description of deploy configuration options
Configuration option = Default value Description
[deploy]  
continue_if_disk_secure_erase_fails = False (Boolean) Defines what to do if an ATA secure erase operation fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in clean failed state. If True, shred will be invoked and cleaning will continue.
erase_devices_metadata_priority = None (Integer) Priority to run in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 99 for the GenericHardwareManager). If set to 0, will not run during cleaning.
erase_devices_priority = None (Integer) Priority to run in-band erase devices via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 10 for the GenericHardwareManager). If set to 0, will not run during cleaning.
http_root = /httpboot (String) ironic-conductor node’s HTTP root path.
http_url = None (String) ironic-conductor node’s HTTP server URL. Example: http://192.1.2.3:8080
power_off_after_deploy_failure = True (Boolean) Whether to power off a node after deploy failure. Defaults to True.
shred_final_overwrite_with_zeros = True (Boolean) Whether to write zeros to a node’s block devices after writing random data. This will write zeros to the device even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True.
shred_random_overwrite_iterations = 1 (Integer) During shred, overwrite all block devices N times with random data. This is only used if a device could not be ATA Secure Erased. Defaults to 1.
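Several of these [deploy] options interact; a hedged example tying the HTTP server and erase options together (the URL and priorities are placeholders, not defaults to copy):

```ini
[deploy]
# Local HTTP server on the ironic-conductor node.
http_root = /httpboot
http_url = http://192.0.2.10:8080
# Skip the full in-band device erase, but still wipe metadata
# during cleaning (99 matches the GenericHardwareManager default).
erase_devices_priority = 0
erase_devices_metadata_priority = 99
# If ATA secure erase fails, fall back to shred and keep cleaning.
continue_if_disk_secure_erase_fails = True
```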
Description of DHCP configuration options
Configuration option = Default value Description
[dhcp]  
dhcp_provider = neutron (String) DHCP provider to use. “neutron” uses Neutron, and “none” uses a no-op provider.
Description of disk partitioner configuration options
Configuration option = Default value Description
[disk_partitioner]  
check_device_interval = 1 (Integer) Interval (in seconds) at which Ironic checks the attached iSCSI device for activity after it has finished creating the partition table and before copying the image to the node.
check_device_max_retries = 20 (Integer) The maximum number of times to check that the device is not accessed by another process. If the device is still busy after that, the disk partitioning will be treated as having failed.
[disk_utils]  
bios_boot_partition_size = 1 (Integer) Size of BIOS Boot partition in MiB when configuring GPT partitioned systems for local boot in BIOS.
dd_block_size = 1M (String) Block size to use when writing to the nodes disk.
efi_system_partition_size = 200 (Integer) Size of EFI system partition in MiB when configuring UEFI systems for local boot.
iscsi_verify_attempts = 3 (Integer) Maximum attempts to verify an iSCSI connection is active, sleeping 1 second between attempts.
Description of glance configuration options
Configuration option = Default value Description
[glance]  
allowed_direct_url_schemes = (List) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_strategy = keystone (String) Authentication strategy to use when connecting to glance.
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
glance_api_insecure = False (Boolean) Allow to perform insecure SSL (https) requests to glance.
glance_api_servers = None (List) A list of the glance api servers available to ironic. Prefix with https:// for SSL-based glance API servers. Format is [hostname|IP]:port.
glance_cafile = None (String) Optional path to a CA certificate bundle to be used to validate the SSL certificate served by glance. It is used when glance_api_insecure is set to False.
glance_host = $my_ip (String) Default glance hostname or IP address.
glance_num_retries = 0 (Integer) Number of retries when downloading an image from glance.
glance_port = 9292 (Port number) Default glance port.
glance_protocol = http (String) Default protocol to use when connecting to glance. Set to https for SSL.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
swift_account = None (String) The account that Glance uses to communicate with Swift. The format is “AUTH_uuid”. “uuid” is the UUID for the account configured in the glance-api.conf. Required for temporary URLs when Glance backend is Swift. For example: “AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30”. Swift temporary URL format: “endpoint_url/api_version/[account/]container/object_id”
swift_api_version = v1 (String) The Swift API version to create a temporary URL for. Defaults to “v1”. Swift temporary URL format: “endpoint_url/api_version/[account/]container/object_id”
swift_container = glance (String) The Swift container Glance is configured to store its images in. Defaults to “glance”, which is the default in glance-api.conf. Swift temporary URL format: “endpoint_url/api_version/[account/]container/object_id”
swift_endpoint_url = None (String) The “endpoint” (scheme, hostname, optional port) for the Swift URL of the form “endpoint_url/api_version/[account/]container/object_id”. Do not include trailing “/”. For example, use “https://swift.example.com”. If using RADOS Gateway, endpoint may also contain /swift path; if it does not, it will be appended. Required for temporary URLs.
swift_store_multiple_containers_seed = 0 (Integer) This should match a config by the same name in the Glance configuration file. When set to 0, a single-tenant store will only use one container to store all images. When set to an integer value between 1 and 32, a single-tenant store will use multiple containers to store images, and this value will determine how many containers are created.
swift_temp_url_cache_enabled = False (Boolean) Whether to cache generated Swift temporary URLs. Setting it to true is only useful when an image caching proxy is used. Defaults to False.
swift_temp_url_duration = 1200 (Integer) The length of time in seconds that the temporary URL will be valid for. Defaults to 20 minutes. If some deploys get a 401 response code when trying to download from the temporary URL, try raising this duration. This value must be greater than or equal to the value for swift_temp_url_expected_download_start_delay
swift_temp_url_expected_download_start_delay = 0 (Integer) This is the delay (in seconds) from the time of the deploy request (when the Swift temporary URL is generated) to when the IPA ramdisk starts up and URL is used for the image download. This value is used to check if the Swift temporary URL duration is large enough to let the image download begin. Also if temporary URL caching is enabled this will determine if a cached entry will still be valid when the download starts. swift_temp_url_duration value must be greater than or equal to this option’s value. Defaults to 0.
swift_temp_url_key = None (String) The secret token given to Swift to allow temporary URL downloads. Required for temporary URLs.
temp_url_endpoint_type = swift (String) Type of endpoint to use for temporary URLs. If the Glance backend is Swift, use “swift”; if it is CEPH with RADOS gateway, use “radosgw”.
timeout = None (Integer) Timeout value for http requests
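Because the Swift temporary URL options above depend on one another, a sketch of a working combination may help. The account, key, and endpoint below are placeholders, and the duration must be greater than or equal to the expected download start delay:

```ini
[glance]
temp_url_endpoint_type = swift
# Scheme and hostname only, no trailing "/".
swift_endpoint_url = https://swift.example.com
swift_api_version = v1
# "AUTH_" plus the account UUID from glance-api.conf (placeholder).
swift_account = AUTH_<account-uuid>
swift_container = glance
# Secret temp-URL key configured on the Swift account (placeholder).
swift_temp_url_key = <secret-key>
# Must be >= swift_temp_url_expected_download_start_delay.
swift_temp_url_duration = 1200
swift_temp_url_expected_download_start_delay = 120
```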
Description of iBoot Web Power Switch configuration options
Configuration option = Default value Description
[iboot]  
max_retry = 3 (Integer) Maximum retries for iBoot operations
reboot_delay = 5 (Integer) Time (in seconds) to sleep between when rebooting (powering off and on again).
retry_interval = 1 (Integer) Time (in seconds) between retry attempts for iBoot operations
Description of iLO configuration options
Configuration option = Default value Description
[ilo]  
ca_file = None (String) CA certificate file to validate iLO.
clean_priority_clear_secure_boot_keys = 0 (Integer) Priority for clear_secure_boot_keys clean step. This step is not enabled by default. It can be enabled to clear all secure boot keys enrolled with iLO.
clean_priority_erase_devices = None (Integer) DEPRECATED: Priority for erase devices clean step. If unset, it defaults to 10. If set to 0, the step will be disabled and will not run during cleaning. This configuration option is duplicated by [deploy] erase_devices_priority, please use that instead.
clean_priority_reset_bios_to_default = 10 (Integer) Priority for reset_bios_to_default clean step.
clean_priority_reset_ilo = 0 (Integer) Priority for reset_ilo clean step.
clean_priority_reset_ilo_credential = 30 (Integer) Priority for reset_ilo_credential clean step. This step requires the “ilo_change_password” parameter to be updated in the node’s driver_info with the new password.
clean_priority_reset_secure_boot_keys_to_default = 20 (Integer) Priority for reset_secure_boot_keys clean step. This step will reset the secure boot keys to manufacturing defaults.
client_port = 443 (Port number) Port to be used for iLO operations
client_timeout = 60 (Integer) Timeout (in seconds) for iLO operations
default_boot_mode = auto (String) Default boot mode to be used in provisioning when “boot_mode” capability is not provided in the “properties/capabilities” of the node. The default is “auto” for backward compatibility. When “auto” is specified, default boot mode will be selected based on boot mode settings on the system.
power_retry = 6 (Integer) Number of times a power operation needs to be retried
power_wait = 2 (Integer) Amount of time in seconds to wait in between power operations
swift_ilo_container = ironic_ilo_container (String) The Swift iLO container to store data.
swift_object_expiry_timeout = 900 (Integer) Amount of time in seconds for Swift objects to auto-expire.
use_web_server_for_images = False (Boolean) Set this to True to use http web server to host floppy images and generated boot ISO. This requires http_root and http_url to be configured in the [deploy] section of the config file. If this is set to False, then Ironic will use Swift to host the floppy images and generated boot_iso.
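As the use_web_server_for_images description notes, enabling it requires HTTP settings in the [deploy] section as well; an illustrative fragment (addresses are placeholders):

```ini
[ilo]
# Host floppy images and boot ISOs on the conductor's web server
# instead of Swift.
use_web_server_for_images = True
client_port = 443
client_timeout = 60

[deploy]
# Required when use_web_server_for_images is True.
http_root = /httpboot
http_url = http://192.0.2.10:8080
```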
Description of inspector configuration options
Configuration option = Default value Description
[inspector]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
enabled = False (Boolean) Whether to enable inspection using ironic-inspector.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
service_url = None (String) ironic-inspector HTTP endpoint. If this is not set, the service catalog will be used.
status_check_period = 60 (Integer) Period (in seconds) to check the status of nodes during inspection.
timeout = None (Integer) Timeout value for http requests
Description of IPMI configuration options
Configuration option = Default value Description
[ipmi]  
min_command_interval = 5 (Integer) Minimum time, in seconds, between IPMI operations sent to a server. There is a risk with some hardware that setting this too low may cause the BMC to crash. Recommended setting is 5 seconds.
retry_timeout = 60 (Integer) Maximum time in seconds to retry IPMI operations. There is a tradeoff when setting this value. Setting this too low may cause older BMCs to crash and require a hard reset. However, setting too high can cause the sync power state periodic task to hang when there are slow or unresponsive BMCs.
Description of iRMC configuration options
Configuration option = Default value Description
[irmc]  
auth_method = basic (String) Authentication method to be used for iRMC operations
client_timeout = 60 (Integer) Timeout (in seconds) for iRMC operations
port = 443 (Port number) Port to be used for iRMC operations
remote_image_server = None (String) IP of remote image server
remote_image_share_name = share (String) share name of remote_image_server
remote_image_share_root = /remote_image_share_root (String) Ironic conductor node’s “NFS” or “CIFS” root path
remote_image_share_type = CIFS (String) Share type of virtual media
remote_image_user_domain = (String) Domain name of remote_image_user_name
remote_image_user_name = None (String) User name of remote_image_server
remote_image_user_password = None (String) Password of remote_image_user_name
sensor_method = ipmitool (String) Sensor data retrieval method.
snmp_community = public (String) SNMP community. Required for versions “v1” and “v2c”
snmp_port = 161 (Port number) SNMP port
snmp_security = None (String) SNMP security name. Required for version “v3”
snmp_version = v2c (String) SNMP protocol version
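The remote_image_* options above work together; an illustrative CIFS share configuration follows (the server address, credentials, and domain are placeholders):

```ini
[irmc]
# Server exporting the virtual-media image share.
remote_image_server = 192.0.2.20
remote_image_share_type = CIFS
remote_image_share_name = share
remote_image_share_root = /remote_image_share_root
# Credentials for the CIFS share (placeholders).
remote_image_user_name = ironic
remote_image_user_password = <password>
remote_image_user_domain = EXAMPLE
```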
Description of iSCSI configuration options
Configuration option = Default value Description
[iscsi]  
portal_port = 3260 (Port number) The port number on which the iSCSI portal listens for incoming connections.
Description of keystone configuration options
Configuration option = Default value Description
[keystone]  
region_name = None (String) The region used for getting endpoints of OpenStack services.
Description of metrics configuration options
Configuration option = Default value Description
[metrics]  
agent_backend = noop (String) Backend for the agent ramdisk to use for metrics. Default possible backends are “noop” and “statsd”.
agent_global_prefix = None (String) Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
agent_prepend_host = False (Boolean) Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
agent_prepend_host_reverse = True (Boolean) Split the prepended host value by “.” and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names).
agent_prepend_uuid = False (Boolean) Prepend the node’s Ironic uuid to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
backend = noop (String) Backend to use for the metrics system.
global_prefix = None (String) Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
prepend_host = False (Boolean) Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
prepend_host_reverse = True (Boolean) Split the prepended host value by “.” and reverse it (to better match the reverse hierarchical form of domain names).
Description of metrics statsd configuration options
Configuration option = Default value Description
[metrics_statsd]  
agent_statsd_host = localhost (String) Host for the agent ramdisk to use with the statsd backend. This must be accessible from networks the agent is booted on.
agent_statsd_port = 8125 (Port number) Port for the agent ramdisk to use with the statsd backend.
statsd_host = localhost (String) Host for use with the statsd backend.
statsd_port = 8125 (Port number) Port to use with the statsd backend.
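Combining the [metrics] and [metrics_statsd] sections above, a statsd setup might look like the following sketch (the host is a placeholder). Resulting metric names follow the [global_prefix.][host_name.]prefix.metric_name format; with prepend_host_reverse left at its default of True, a conductor named conductor1.example.com would contribute the host component com.example.conductor1:

```ini
[metrics]
backend = statsd
global_prefix = prod
prepend_host = True

[metrics_statsd]
statsd_host = statsd.example.com
statsd_port = 8125
```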
Description of neutron configuration options
Configuration option = Default value Description
[neutron]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_strategy = keystone (String) Authentication strategy to use when connecting to neutron. Running neutron in noauth mode (related to but not affected by this setting) is insecure and should only be used for testing.
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
cleaning_network_uuid = None (String) Neutron network UUID for the ramdisk to be booted into for cleaning nodes. Required for “neutron” network interface. It is also required if cleaning nodes when using “flat” network interface or “neutron” DHCP provider.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
port_setup_delay = 0 (Integer) Delay value to wait for Neutron agents to setup sufficient DHCP configuration for port.
provisioning_network_uuid = None (String) Neutron network UUID for the ramdisk to be booted into for provisioning nodes. Required for “neutron” network interface.
retries = 3 (Integer) Client retries in the case of a failed request.
timeout = None (Integer) Timeout value for http requests
url = None (String) URL for connecting to neutron. Default value translates to ‘http://$my_ip:9696‘ when auth_strategy is ‘noauth’, and to discovery from Keystone catalog when auth_strategy is ‘keystone’.
url_timeout = 30 (Integer) Timeout value for connecting to neutron in seconds.
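For the “neutron” network interface, both network UUIDs above are required; a sketch with placeholder values:

```ini
[neutron]
# Explicit endpoint; omit to discover neutron from the Keystone catalog.
url = http://192.0.2.5:9696
cleaning_network_uuid = <uuid-of-cleaning-network>
provisioning_network_uuid = <uuid-of-provisioning-network>
# Give Neutron agents extra time to finish DHCP setup on the port.
port_setup_delay = 15
```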
Description of PXE configuration options
Configuration option = Default value Description
[pxe]  
default_ephemeral_format = ext4 (String) Default file system format for ephemeral partition, if one is created.
image_cache_size = 20480 (Integer) Maximum size (in MiB) of cache for master images, including those in use.
image_cache_ttl = 10080 (Integer) Maximum TTL (in minutes) for old master images in cache.
images_path = /var/lib/ironic/images/ (String) On the ironic-conductor node, directory where images are stored on disk.
instance_master_path = /var/lib/ironic/master_images (String) On the ironic-conductor node, directory where master instance images are stored on disk. Setting to <None> disables image caching.
ip_version = 4 (String) The IP version that will be used for PXE booting. Defaults to 4. EXPERIMENTAL
ipxe_boot_script = $pybasedir/drivers/modules/boot.ipxe (String) On ironic-conductor node, the path to the main iPXE script file.
ipxe_enabled = False (Boolean) Enable iPXE boot.
ipxe_timeout = 0 (Integer) Timeout value (in seconds) for downloading an image via iPXE. Defaults to 0 (no timeout)
ipxe_use_swift = False (Boolean) Download deploy images directly from swift using temporary URLs. If set to false (default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when ‘ipxe_enabled’ option is set to true.
pxe_append_params = nofb nomodeset vga=normal (String) Additional append parameters for baremetal PXE boot.
pxe_bootfile_name = pxelinux.0 (String) Bootfile DHCP parameter.
pxe_config_template = $pybasedir/drivers/modules/pxe_config.template (String) On ironic-conductor node, template file for PXE configuration.
tftp_master_path = /tftpboot/master_images (String) On ironic-conductor node, directory where master TFTP images are stored on disk. Setting to <None> disables image caching.
tftp_root = /tftpboot (String) ironic-conductor node’s TFTP root path. The ironic-conductor must have read/write access to this path.
tftp_server = $my_ip (String) IP address of ironic-conductor node’s TFTP server.
uefi_pxe_bootfile_name = bootx64.efi (String) Bootfile DHCP parameter for UEFI boot mode.
uefi_pxe_config_template = $pybasedir/drivers/modules/pxe_grub_config.template (String) On ironic-conductor node, template file for PXE configuration for UEFI boot loader.
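Enabling iPXE pulls in the [deploy] HTTP server options as well; an illustrative fragment (the addresses are placeholders):

```ini
[pxe]
ipxe_enabled = True
ipxe_boot_script = $pybasedir/drivers/modules/boot.ipxe
# Abort image downloads via iPXE after 60 seconds (0 = no timeout).
ipxe_timeout = 60
tftp_root = /tftpboot
tftp_server = 192.0.2.10

[deploy]
# iPXE fetches images over HTTP from these locations.
http_root = /httpboot
http_url = http://192.0.2.10:8080
```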
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of SeaMicro configuration options
Configuration option = Default value Description
[seamicro]  
action_timeout = 10 (Integer) Seconds to wait for power action to be completed
max_retry = 3 (Integer) Maximum retries for SeaMicro operations
Description of service catalog configuration options
Configuration option = Default value Description
[service_catalog]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
timeout = None (Integer) Timeout value for http requests
Description of SNMP configuration options
Configuration option = Default value Description
[snmp]  
power_timeout = 10 (Integer) Seconds to wait for power action to be completed
reboot_delay = 0 (Integer) Time (in seconds) to sleep between when rebooting (powering off and on again)
Description of SSH configuration options
Configuration option = Default value Description
[ssh]  
get_vm_name_attempts = 3 (Integer) Number of attempts to get the VM name used by the host that corresponds to a node’s MAC address.
get_vm_name_retry_interval = 3 (Integer) Number of seconds to wait between attempts to get the VM name used by the host that corresponds to a node’s MAC address.
libvirt_uri = qemu:///system (String) libvirt URI.
Description of swift configuration options
Configuration option = Default value Description
[swift]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
swift_max_retries = 2 (Integer) Maximum number of times to retry a Swift request, before failing.
timeout = None (Integer) Timeout value for http requests
Description of VirtualBox configuration options
Configuration option = Default value Description
[virtualbox]  
port = 18083 (Port number) Port on which VirtualBox web service is listening.

New, updated, and deprecated options in Newton for Bare Metal service

New options
Option = default value (Type) Help string
[DEFAULT] default_network_interface = None (StrOpt) Default network interface to be used for nodes that do not have network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint.
[DEFAULT] enabled_network_interfaces = flat, noop (ListOpt) Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint. This value must be the same on all ironic-conductor and ironic-api services, because it is used by ironic-api service to validate a new or updated node’s network_interface value.
[DEFAULT] notification_level = None (StrOpt) Specifies the minimum level for which to send notifications. If not set, no notifications will be sent. The default is for this option to be unset.
[agent] deploy_logs_collect = on_failure (StrOpt) Whether Ironic should collect the deployment logs on deployment failure (on_failure), always or never.
[agent] deploy_logs_local_path = /var/log/ironic/deploy (StrOpt) The path to the directory where the logs should be stored, used when the deploy_logs_storage_backend is configured to “local”.
[agent] deploy_logs_storage_backend = local (StrOpt) The name of the storage backend where the logs will be stored.
[agent] deploy_logs_swift_container = ironic_deploy_logs_container (StrOpt) The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to “swift”.
[agent] deploy_logs_swift_days_to_expire = 30 (IntOpt) Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to “swift”.
[api] ramdisk_heartbeat_timeout = 300 (IntOpt) Maximum interval (in seconds) for agent heartbeats.
[api] restrict_lookup = True (BoolOpt) Whether to restrict the lookup API to only nodes in certain states.
[audit] audit_map_file = /etc/ironic/ironic_api_audit_map.conf (StrOpt) Path to audit map file for ironic-api service. Used only when API audit is enabled.
[audit] enabled = False (BoolOpt) Enable auditing of API requests (for ironic-api service).
[audit] ignore_req_list = None (StrOpt) Comma separated list of Ironic REST API HTTP methods to be ignored during audit. For example: auditing will not be done on any GET or POST requests if this is set to “GET,POST”. It is used only when API audit is enabled.
[audit] namespace = openstack (StrOpt) Namespace prefix for generated IDs.
[audit_middleware_notifications] driver = None (StrOpt) The Driver to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop. If not specified, then value from oslo_messaging_notifications conf section is used.
[audit_middleware_notifications] topics = None (ListOpt) List of AMQP topics used for OpenStack notifications. If not specified, then value from oslo_messaging_notifications conf section is used.
[audit_middleware_notifications] transport_url = None (StrOpt) A URL representing messaging driver to use for notification. If not specified, we fall back to the same configuration used for RPC.
[deploy] continue_if_disk_secure_erase_fails = False (BoolOpt) Defines what to do if an ATA secure erase operation fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in clean failed state. If True, shred will be invoked and cleaning will continue.
[deploy] erase_devices_metadata_priority = None (IntOpt) Priority to run in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 99 for the GenericHardwareManager). If set to 0, will not run during cleaning.
[deploy] power_off_after_deploy_failure = True (BoolOpt) Whether to power off a node after deploy failure. Defaults to True.
[deploy] shred_final_overwrite_with_zeros = True (BoolOpt) Whether to write zeros to a node’s block devices after writing random data. This will write zeros to the device even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True.
[deploy] shred_random_overwrite_iterations = 1 (IntOpt) During shred, overwrite all block devices N times with random data. This is only used if a device could not be ATA Secure Erased. Defaults to 1.
[drac] query_raid_config_job_status_interval = 120 (IntOpt) Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not.
[glance] auth_section = None (Opt) Config Section from which to load plugin specific options
[glance] auth_type = None (Opt) Authentication type to load
[glance] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
[glance] certfile = None (StrOpt) PEM encoded client certificate cert file
[glance] insecure = False (BoolOpt) Verify HTTPS connections.
[glance] keyfile = None (StrOpt) PEM encoded client certificate key file
[glance] timeout = None (IntOpt) Timeout value for http requests
[ilo] ca_file = None (StrOpt) CA certificate file to validate iLO.
[ilo] default_boot_mode = auto (StrOpt) Default boot mode to be used in provisioning when “boot_mode” capability is not provided in the “properties/capabilities” of the node. The default is “auto” for backward compatibility. When “auto” is specified, default boot mode will be selected based on boot mode settings on the system.
[inspector] auth_section = None (Opt) Config section from which to load plugin-specific options
[inspector] auth_type = None (Opt) Authentication type to load
[inspector] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections.
[inspector] certfile = None (StrOpt) PEM encoded client certificate cert file
[inspector] insecure = False (BoolOpt) If true, skip verification of HTTPS connections.
[inspector] keyfile = None (StrOpt) PEM encoded client certificate key file
[inspector] timeout = None (IntOpt) Timeout value for HTTP requests
[iscsi] portal_port = 3260 (PortOpt) The port number on which the iSCSI portal listens for incoming connections.
[metrics] agent_backend = noop (StrOpt) Backend for the agent ramdisk to use for metrics. Default possible backends are “noop” and “statsd”.
[metrics] agent_global_prefix = None (StrOpt) Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
[metrics] agent_prepend_host = False (BoolOpt) Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
[metrics] agent_prepend_host_reverse = True (BoolOpt) Split the prepended host value by "." and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names).
[metrics] agent_prepend_uuid = False (BoolOpt) Prepend the node’s Ironic uuid to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
[metrics] backend = noop (StrOpt) Backend to use for the metrics system.
[metrics] global_prefix = None (StrOpt) Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
[metrics] prepend_host = False (BoolOpt) Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
[metrics] prepend_host_reverse = True (BoolOpt) Split the prepended host value by "." and reverse it (to better match the reverse hierarchical form of domain names).
[metrics_statsd] agent_statsd_host = localhost (StrOpt) Host for the agent ramdisk to use with the statsd backend. This must be accessible from networks the agent is booted on.
[metrics_statsd] agent_statsd_port = 8125 (PortOpt) Port for the agent ramdisk to use with the statsd backend.
[metrics_statsd] statsd_host = localhost (StrOpt) Host for use with the statsd backend.
[metrics_statsd] statsd_port = 8125 (PortOpt) Port to use with the statsd backend.
[neutron] auth_section = None (Opt) Config section from which to load plugin-specific options
[neutron] auth_type = None (Opt) Authentication type to load
[neutron] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections.
[neutron] certfile = None (StrOpt) PEM encoded client certificate cert file
[neutron] insecure = False (BoolOpt) If true, skip verification of HTTPS connections.
[neutron] keyfile = None (StrOpt) PEM encoded client certificate key file
[neutron] port_setup_delay = 0 (IntOpt) Delay value to wait for Neutron agents to setup sufficient DHCP configuration for port.
[neutron] provisioning_network_uuid = None (StrOpt) Neutron network UUID for the ramdisk to be booted into for provisioning nodes. Required for “neutron” network interface.
[neutron] timeout = None (IntOpt) Timeout value for HTTP requests
[oneview] enable_periodic_tasks = True (BoolOpt) Whether to enable the periodic tasks for the OneView driver. These tasks track when OneView hardware resources are taken and released by Ironic or OneView users, and proactively manage nodes in the clean failed state, according to the Dynamic Allocation model of hardware resource allocation in OneView.
[oneview] periodic_check_interval = 300 (IntOpt) Period (in seconds) for periodic tasks to be executed when enable_periodic_tasks=True.
[pxe] ipxe_use_swift = False (BoolOpt) Download deploy images directly from swift using temporary URLs. If set to false (default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when ‘ipxe_enabled’ option is set to true.
[service_catalog] auth_section = None (Opt) Config section from which to load plugin-specific options
[service_catalog] auth_type = None (Opt) Authentication type to load
[service_catalog] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections.
[service_catalog] certfile = None (StrOpt) PEM encoded client certificate cert file
[service_catalog] insecure = False (BoolOpt) If true, skip verification of HTTPS connections.
[service_catalog] keyfile = None (StrOpt) PEM encoded client certificate key file
[service_catalog] timeout = None (IntOpt) Timeout value for HTTP requests
[swift] auth_section = None (Opt) Config section from which to load plugin-specific options
[swift] auth_type = None (Opt) Authentication type to load
[swift] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections.
[swift] certfile = None (StrOpt) PEM encoded client certificate cert file
[swift] insecure = False (BoolOpt) If true, skip verification of HTTPS connections.
[swift] keyfile = None (StrOpt) PEM encoded client certificate key file
[swift] timeout = None (IntOpt) Timeout value for HTTP requests
New default values
Option Previous default value New default value
[DEFAULT] my_ip 10.0.0.1 127.0.0.1
[neutron] url http://$my_ip:9696 None
[pxe] uefi_pxe_bootfile_name elilo.efi bootx64.efi
[pxe] uefi_pxe_config_template $pybasedir/drivers/modules/elilo_efi_pxe_config.template $pybasedir/drivers/modules/pxe_grub_config.template
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None
[agent] heartbeat_timeout [api] ramdisk_heartbeat_timeout
[deploy] erase_devices_iterations [deploy] shred_random_overwrite_iterations
[keystone_authtoken] cafile [glance] cafile
[keystone_authtoken] cafile [neutron] cafile
[keystone_authtoken] cafile [service_catalog] cafile
[keystone_authtoken] cafile [swift] cafile
[keystone_authtoken] cafile [inspector] cafile
[keystone_authtoken] certfile [service_catalog] certfile
[keystone_authtoken] certfile [neutron] certfile
[keystone_authtoken] certfile [glance] certfile
[keystone_authtoken] certfile [inspector] certfile
[keystone_authtoken] certfile [swift] certfile
[keystone_authtoken] insecure [glance] insecure
[keystone_authtoken] insecure [inspector] insecure
[keystone_authtoken] insecure [swift] insecure
[keystone_authtoken] insecure [service_catalog] insecure
[keystone_authtoken] insecure [neutron] insecure
[keystone_authtoken] keyfile [inspector] keyfile
[keystone_authtoken] keyfile [swift] keyfile
[keystone_authtoken] keyfile [neutron] keyfile
[keystone_authtoken] keyfile [glance] keyfile
[keystone_authtoken] keyfile [service_catalog] keyfile
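To migrate away from the deprecated [keystone_authtoken] TLS options, set the equivalent option in each consuming service section instead. A sketch, with an illustrative CA file path:

```ini
# Deprecated location:
# [keystone_authtoken]
# cafile = /etc/ssl/certs/ca.pem

# New locations, one per service section:
[glance]
cafile = /etc/ssl/certs/ca.pem

[neutron]
cafile = /etc/ssl/certs/ca.pem

[swift]
cafile = /etc/ssl/certs/ca.pem
```

The same pattern applies to the certfile, keyfile, and insecure options.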

The Bare Metal service is capable of managing and provisioning physical machines. The configuration file of this module is /etc/ironic/ironic.conf.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Block Storage service

Introduction to the Block Storage service

The Block Storage service provides persistent block storage resources that Compute instances can consume. This includes secondary attached storage similar to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a Block Storage device for Compute to use as a bootable persistent instance.

The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage service does not provide a shared storage solution like NFS. With the Block Storage service, you can attach a device to only one instance.

The Block Storage service provides:

  • cinder-api - a WSGI app that authenticates and routes requests throughout the Block Storage service. It supports the OpenStack APIs only, although there is a translation that can be done through Compute’s EC2 interface, which calls into the Block Storage client.
  • cinder-scheduler - schedules and routes requests to the appropriate volume service. Depending upon your configuration, this may be simple round-robin scheduling to the running volume services, or it can be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the default and enables filters on things like Capacity, Availability Zone, Volume Types, and Capabilities as well as custom filters.
  • cinder-volume - manages Block Storage devices, specifically the back-end devices themselves.
  • cinder-backup - provides a means to back up a Block Storage volume to OpenStack Object Storage (swift).

The Block Storage service contains the following components:

  • Back-end Storage Devices - the Block Storage service requires some form of back-end storage that the service is built on. The default implementation is to use LVM on a local volume group named “cinder-volumes.” In addition to the base driver implementation, the Block Storage service also provides the means to add support for other storage devices to be utilized, such as external RAID arrays or other storage appliances. These back-end storage devices may have custom block sizes when using KVM or QEMU as the hypervisor.

  • Users and Tenants (Projects) - the Block Storage service can be used by many different cloud computing consumers or customers (tenants on a shared system), using role-based access assignments. Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role, but this can be configured by the system administrator in the appropriate policy.json file that maintains the rules. A user’s access to particular volumes is limited by tenant, but the user name and password are assigned per user. Key pairs granting access to a volume are enabled per user, but quotas to control resource consumption across available hardware resources are per tenant.

    For tenants, quota controls are available to limit:

    • The number of volumes that can be created.
    • The number of snapshots that can be created.
    • The total number of GBs allowed per tenant (shared between snapshots and volumes).

    You can revise the default quota values with the Block Storage CLI, so the limits placed by quotas are editable by admin users.

  • Volumes, Snapshots, and Backups - the basic resources offered by the Block Storage service are volumes and snapshots which are derived from volumes and volume backups:

    • Volumes - allocated block storage resources that can be attached to instances as secondary storage or they can be used as the root store to boot instances. Volumes are persistent R/W block storage devices most commonly attached to the compute node through iSCSI.
    • Snapshots - a read-only point in time copy of a volume. The snapshot can be created from a volume that is currently in use (through the use of --force True) or in an available state. The snapshot can then be used to create a new volume through create from snapshot.
    • Backups - an archived copy of a volume currently stored in Object Storage (swift).

Volume drivers

Ceph RADOS Block Device (RBD)

If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.

Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open-source nature, you can install and use this portable storage platform in public or private clouds.

_images/ceph-architecture.png

Ceph architecture

RADOS

Ceph is based on Reliable Autonomic Distributed Object Store (RADOS). RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:

Object Storage Device (OSD) Daemon
The storage daemon for the RADOS service, which interacts with the OSD (physical or logical storage unit for your data). You must run this daemon on each server in your cluster. For each OSD, you can have an associated hard disk drive. For performance purposes, pool your hard disk drives using RAID arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
Meta-Data Server (MDS)
Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a metadata server.
Monitor (MON)
A lightweight daemon that handles all communications with external applications and clients. It also provides a consensus for distributed decision making in a Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point to the address of a MON server. It checks the state and the consistency of the data. In an ideal setup, you run at least three ceph-mon daemons on separate servers.

Ceph developers recommend XFS for production deployments, Btrfs for testing, development, and any non-critical deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the long-term, but XFS and ext4 provide the necessary stability for today’s deployments.

Note

If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).

For more information about usable file systems, see ceph.com/ceph-storage/file-system/.

Ways to store, use, and expose data

To store and access your data, you can use the following storage systems:

RADOS
Use as an object, default storage mechanism.
RBD
Use as a block device. The Linux kernel RBD (RADOS block device) driver allows striping a Linux block device over multiple distributed object store data objects. It is compatible with the KVM RBD image.
CephFS
Use as a file, POSIX-compliant file system.

Ceph exposes RADOS; you can access it through the following interfaces:

RADOS Gateway
OpenStack Object Storage and Amazon-S3 compatible RESTful interface (see RADOS_Gateway).
librados
and its related C/C++ bindings
RBD and QEMU-RBD
Linux kernel and QEMU block devices that stripe data across multiple objects.
Driver options

The following table contains the configuration options supported by the Ceph RADOS Block Device driver.

Note

The volume_tmp_dir option has been deprecated and replaced by image_conversion_dir.

Description of Ceph storage configuration options
Configuration option = Default value Description
[DEFAULT]  
rados_connect_timeout = -1 (Integer) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used.
rados_connection_interval = 5 (Integer) Interval value (in seconds) between connection retries to ceph cluster.
rados_connection_retries = 3 (Integer) Number of retries if connection to ceph cluster failed.
rbd_ceph_conf = (String) Path to the ceph configuration file
rbd_cluster_name = ceph (String) The name of ceph cluster
rbd_flatten_volume_from_snapshot = False (Boolean) Flatten volumes created from snapshots to remove dependency from volume to snapshot
rbd_max_clone_depth = 5 (Integer) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.
rbd_pool = rbd (String) The RADOS pool where rbd volumes are stored
rbd_secret_uuid = None (String) The libvirt uuid of the secret for the rbd_user volumes
rbd_store_chunk_size = 4 (Integer) Volumes will be chunked into objects of this size (in megabytes).
rbd_user = None (String) The RADOS client name for accessing rbd volumes - only set when using cephx authentication
volume_tmp_dir = None (String) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, please use image_conversion_dir instead.
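Putting these options together, a minimal RBD back end in cinder.conf might look like the following sketch. The pool name, RADOS user, and libvirt secret UUID are illustrative and must match your own Ceph and libvirt setup:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# Pool that will hold cinder volumes (illustrative name).
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
# Only needed with cephx authentication (illustrative values).
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
```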
GlusterFS driver

GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes and beyond in size. More information can be found on Gluster’s homepage.

This driver enables the use of GlusterFS in a similar fashion as NFS. It supports basic volume operations, including snapshot and clone.

To use Block Storage with GlusterFS, first set the volume_driver in the cinder.conf file:

volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver

The following table contains the configuration options supported by the GlusterFS driver.

Description of GlusterFS storage configuration options
Configuration option = Default value Description
[DEFAULT]  
glusterfs_mount_point_base = $state_path/mnt (String) Base dir containing mount points for gluster shares.
glusterfs_shares_config = /etc/cinder/glusterfs_shares (String) File with the list of available gluster shares
nas_volume_prov_type = thin (String) Provisioning type that will be used when creating volumes.
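Combining the driver setting with the options above, a minimal GlusterFS configuration might look like this sketch:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = $state_path/mnt
nas_volume_prov_type = thin
```

The file referenced by glusterfs_shares_config lists one Gluster share per line, for example 192.168.1.200:/glustervol (address and volume name are illustrative).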
LVM

The default volume back end uses local volumes managed by LVM.

This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.

Set the following in your cinder.conf configuration file, and use the following options to configure for iSCSI transport:

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi

Use the following options to configure for the iSER transport:

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iser
Description of LVM configuration options
Configuration option = Default value Description
[DEFAULT]  
lvm_conf_file = /etc/cinder/lvm.conf (String) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify ‘None’ to not use a conf file even if one exists).
lvm_max_over_subscription_ratio = 1.0 (Floating point) max_over_subscription_ratio setting for the LVM driver. If set, this takes precedence over the general max_over_subscription_ratio option. If None, the general option is used.
lvm_mirrors = 0 (Integer) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space
lvm_suppress_fd_warnings = False (Boolean) Suppress leaked file descriptor warnings in LVM commands.
lvm_type = default (String) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported.
volume_group = cinder-volumes (String) Name for the VG that will contain exported volumes

Caution

When extending an existing volume which has a linked snapshot, the related logical volume is deactivated. This logical volume is automatically reactivated unless auto_activation_volume_list is defined in the LVM configuration file, lvm.conf. See the lvm.conf file for more information.

If auto activated volumes are restricted, then include the cinder volume group into this list:

auto_activation_volume_list = [ "existingVG", "cinder-volumes" ]

This note does not apply for thinly provisioned volumes because they do not need to be deactivated.
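For reference, a complete iSCSI LVM back end stanza combining the options above might look like the following sketch (values are the defaults, except lvm_type, which is set here to thin provisioning):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi
# Volume group that will contain exported volumes.
volume_group = cinder-volumes
lvm_type = thin
lvm_mirrors = 0
```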

NFS driver

The Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.

How the NFS driver works

The NFS driver, and other drivers based on it, work quite differently than a traditional block storage driver.

The NFS driver does not actually allow an instance to access a storage device at the block level. Instead, files are created on an NFS share and mapped to instances, which emulates a block device. This works in a similar way to QEMU, which stores instances in the /var/lib/nova/instances directory.

How to use the NFS driver

Creating an NFS server is outside the scope of this document.

Configure with one NFS server

This example assumes access to the following NFS server and mount point:

  • 192.168.1.200:/storage

This example demonstrates the usage of this driver with one NFS server.

Set the nas_host option to the IP address or host name of your NFS server, and the nas_share_path option to the NFS export path:

nas_host = 192.168.1.200
nas_share_path = /storage
Configure with multiple NFS servers

Note

You can use multiple NFS servers with the cinder multi back-end feature. Configure the enabled_backends option with multiple values, and use the nas_host and nas_share_path options for each back end as described above.
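As a sketch of that multi back-end approach (back-end names and server addresses are illustrative):

```ini
[DEFAULT]
enabled_backends = nfs1,nfs2

[nfs1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.200
nas_share_path = /storage

[nfs2]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.201
nas_share_path = /storage
```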

The example below demonstrates another method of using this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.

This example assumes access to the following NFS servers and mount points:

  • 192.168.1.200:/storage
  • 192.168.1.201:/storage
  • 192.168.1.202:/storage
  1. Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:

    # cat /etc/cinder/shares.txt
    192.168.1.200:/storage
    192.168.1.201:/storage
    192.168.1.202:/storage
    

    Comments are allowed in this file. They begin with a #.

  2. Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in the shares.txt file. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.

  3. Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in the shares.txt file. The name of each directory is a hashed name:

    # ls /var/lib/cinder/nfs/
    ...
    46c5db75dc3a3a50a10bfd1a456a9f3f
    ...
    
  4. You can now create volumes as you normally would:

    $ nova volume-create --display-name myvol 5
    # ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
    volume-a8862558-e6d6-4648-b5df-bb84f31c8935
    

This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.

NFS driver notes
  • cinder-volume manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to only have one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed as well as potentially more than one NFS server.
  • Because data is stored in a file and not actually on a block storage device, you might not see the same IO performance as you would with a traditional block storage driver. Please test accordingly.
  • Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.

Note

Regular IO flushing and syncing still applies.

Sheepdog driver

Sheepdog is an open-source distributed storage system that provides a virtual storage pool utilizing the internal disks of commodity servers.

Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshotting, cloning, rollback, and thin provisioning.

More information can be found on Sheepdog Project.

This driver enables the use of Sheepdog through Qemu/KVM.

Supported operations

Sheepdog driver supports these operations:

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
Configuration

Set the following option in the cinder.conf file:

volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver

The following table contains the configuration options supported by the Sheepdog driver:

Description of Sheepdog driver configuration options
Configuration option = Default value Description
[DEFAULT]  
sheepdog_store_address = 127.0.0.1 (String) IP address of sheep daemon.
sheepdog_store_port = 7000 (Port number) Port of sheep daemon.
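Putting the driver setting and the options above together, a complete Sheepdog back end stanza looks like this sketch (the defaults shown connect to a sheep daemon on the local host):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
sheepdog_store_address = 127.0.0.1
sheepdog_store_port = 7000
```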
SambaFS driver

There is a volume back-end for Samba filesystems. Set the following in your cinder.conf file, and use the following options to configure it.

Note

The SambaFS driver requires qemu-img version 1.7 or higher on Linux nodes, and qemu-img version 1.6 or higher on Windows nodes.

volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
Description of Samba volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
smbfs_allocation_info_file_path = $state_path/allocation_data (String) The path of the automatically generated file containing information about volume disk space allocation.
smbfs_default_volume_format = qcow2 (String) Default format that will be used when creating volumes if no volume format is specified.
smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775 (String) Mount options passed to the smbfs client. See mount.cifs man page for details.
smbfs_mount_point_base = $state_path/mnt (String) Base dir containing mount points for smbfs shares.
smbfs_oversub_ratio = 1.0 (Floating point) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid.
smbfs_shares_config = /etc/cinder/smbfs_shares (String) File with the list of available smbfs shares.
smbfs_sparsed_volumes = True (Boolean) Create volumes as sparsed files, which take no space, rather than as regular files when using raw format; creating regular raw files takes a lot of time.
smbfs_used_ratio = 0.95 (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
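Combining the driver setting with the options above, a minimal SambaFS configuration might look like this sketch; the share path and credentials in the shares file are illustrative:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
smbfs_shares_config = /etc/cinder/smbfs_shares
smbfs_mount_point_base = $state_path/mnt
smbfs_default_volume_format = qcow2
```

Each line of the shares file names one SMB share and its mount options, for example //192.168.1.210/volumes -o username=cinder,password=SECRET.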
Blockbridge EPS
Introduction

Blockbridge is software that transforms commodity infrastructure into secure multi-tenant storage that operates as a programmable service. It provides automatic encryption, secure deletion, quality of service (QoS), replication, and programmable security capabilities on your choice of hardware. Blockbridge uses micro-segmentation to provide isolation that allows you to concurrently operate OpenStack, Docker, and bare-metal workflows on shared resources. When used with OpenStack, isolated management domains are dynamically created on a per-project basis. All volumes and clones, within and between projects, are automatically cryptographically isolated and implement secure deletion.

Architecture reference

Blockbridge architecture

_images/bb-cinder-fig1.png
Control paths

The Blockbridge driver is packaged with the core distribution of OpenStack. Operationally, it executes in the context of the Block Storage service. The driver communicates with an OpenStack-specific API provided by the Blockbridge EPS platform. Blockbridge optionally communicates with Identity, Compute, and Block Storage services.

Block storage API

Blockbridge is API driven software-defined storage. The system implements a native HTTP API that is tailored to the specific needs of OpenStack. Each Block Storage service operation maps to a single back-end API request that provides ACID semantics. The API is specifically designed to reduce, if not eliminate, the possibility of inconsistencies between the Block Storage service and external storage infrastructure in the event of hardware, software or data center failure.

Extended management

OpenStack users may utilize Blockbridge interfaces to manage replication, auditing, statistics, and performance information on a per-project and per-volume basis. In addition, they can manage low-level data security functions including verification of data authenticity and encryption key delegation. Native integration with the Identity Service allows tenants to use a single set of credentials. Integration with Block storage and Compute services provides dynamic metadata mapping when using Blockbridge management APIs and tools.

Attribute-based provisioning

Blockbridge organizes resources using descriptive identifiers called attributes. Attributes are assigned by administrators of the infrastructure. They are used to describe the characteristics of storage in an application-friendly way. Applications construct queries that describe storage provisioning constraints and the Blockbridge storage stack assembles the resources as described.

Any given instance of a Blockbridge volume driver specifies a query for resources. For example, a query could specify '+ssd +10.0.0.0 +6nines -production iops.reserve=1000 capacity.reserve=30%'. This query is satisfied by selecting SSD resources, accessible on the 10.0.0.0 network, with high resiliency, for non-production workloads, with guaranteed IOPS of 1000 and a storage reservation for 30% of the volume capacity specified at create time. Queries and parameters are completely administrator defined: they reflect the layout, resource, and organizational goals of a specific deployment.

Supported operations
  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Get volume statistics
Supported protocols

Blockbridge provides iSCSI access to storage. A unique iSCSI data fabric is programmatically assembled when a volume is attached to an instance. A fabric is disassembled when a volume is detached from an instance. Each volume is an isolated SCSI device that supports persistent reservations.

Configuration steps
Create an authentication token

Whenever possible, avoid using password-based authentication. Even if you have created a role-restricted administrative user via Blockbridge, token-based authentication is preferred. You can generate persistent authentication tokens using the Blockbridge command-line tool as follows:

$ bb -H bb-mn authorization create --notes "OpenStack" --restrict none
Authenticating to https://bb-mn/api

Enter user or access token: system
Password for system:
Authenticated; token expires in 3599 seconds.

== Authorization: ATH4762894C40626410
notes                 OpenStack
serial                ATH4762894C40626410
account               system (ACT0762594C40626440)
user                  system (USR1B62094C40626440)
enabled               yes
created at            2015-10-24 22:08:48 +0000
access type           online
token suffix          xaKUy3gw
restrict              none

== Access Token
access token          1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw

*** Remember to record your access token!
Create volume type

Before configuring and enabling the Blockbridge volume driver, register an OpenStack volume type and associate it with a volume_backend_name. In this example, a volume type, ‘Production’, is associated with the volume_backend_name ‘blockbridge_prod’:

$ cinder type-create Production
$ cinder type-key Production volume_backend_name=blockbridge_prod
Specify volume driver

Configure the Blockbridge volume driver in /etc/cinder/cinder.conf. Your volume_backend_name must match the value specified in the cinder type-key command in the previous step.

volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_prod
Specify API endpoint and authentication

Configure the API endpoint and authentication. The following example uses an authentication token. You must create your own as described in Create an authentication token.

blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
Specify resource query

By default, a single pool is configured (implied) with a default resource query of '+openstack'. Within Blockbridge, datastore resources that advertise the ‘openstack’ attribute will be selected to fulfill OpenStack provisioning requests. If you prefer a more specific query, define a custom pool configuration.

blockbridge_pools = Production: +production +qos iops.reserve=5000

Pools support storage systems that offer multiple classes of service. You may wish to configure multiple pools to implement more sophisticated scheduling capabilities.

Configuration options
Description of BlockBridge EPS volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
blockbridge_api_host = None (String) IP address/hostname of Blockbridge API.
blockbridge_api_port = None (Integer) Override HTTPS port to connect to Blockbridge API server.
blockbridge_auth_password = None (String) Blockbridge API password (for auth scheme ‘password’)
blockbridge_auth_scheme = token (String) Blockbridge API authentication scheme (token or password)
blockbridge_auth_token = None (String) Blockbridge API token (for auth scheme ‘token’)
blockbridge_auth_user = None (String) Blockbridge API user (for auth scheme ‘password’)
blockbridge_default_pool = None (String) Default pool name if unspecified.
blockbridge_pools = {'OpenStack': '+openstack'} (Dict) Defines the set of exposed pools and their associated backend query strings
Configuration example

cinder.conf example file

[DEFAULT]
enabled_backends = bb_devel bb_prod

[bb_prod]
volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_prod
blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
blockbridge_pools = Production: +production +qos iops.reserve=5000

[bb_devel]
volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_devel
blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
blockbridge_pools = Development: +development
Multiple volume types

Volume types are exposed to tenants, pools are not. To offer multiple classes of storage to OpenStack tenants, you should define multiple volume types. Simply repeat the process above for each desired type. Be sure to specify a unique volume_backend_name and pool configuration for each type. The cinder.conf example included with this documentation illustrates configuration of multiple types.
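For example, to add a second volume type backed by the development pool from the cinder.conf example above (the type name is illustrative):

$ cinder type-create Development
$ cinder type-key Development set volume_backend_name=blockbridge_devel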

Testing resources

Blockbridge is freely available for testing purposes and deploys in seconds as a Docker container. This is the same container used to run continuous integration for OpenStack. For more information visit www.blockbridge.io.

CloudByte volume driver
CloudByte Block Storage driver configuration
Description of CloudByte volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
cb_account_name = None (String) CloudByte storage specific account name. This maps to a project name in OpenStack.
cb_add_qosgroup = {'latency': '15', 'iops': '10', 'graceallowed': 'false', 'iopscontrol': 'true', 'memlimit': '0', 'throughput': '0', 'tpcontrol': 'false', 'networkspeed': '0'} (Dict) These values will be used for CloudByte storage’s addQos API call.
cb_apikey = None (String) Driver will use this API key to authenticate against the CloudByte storage’s management interface.
cb_auth_group = None (String) This corresponds to the discovery authentication group in CloudByte storage. Chap users are added to this group. Driver uses the first user found for this group. Default value is None.
cb_confirm_volume_create_retries = 3 (Integer) Will confirm a successful volume creation in CloudByte storage by making this many attempts.
cb_confirm_volume_create_retry_interval = 5 (Integer) A retry value in seconds. Will be used by the driver to check if volume creation was successful in CloudByte storage.
cb_confirm_volume_delete_retries = 3 (Integer) Will confirm a successful volume deletion in CloudByte storage by making this many attempts.
cb_confirm_volume_delete_retry_interval = 5 (Integer) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage.
cb_create_volume = {'compression': 'off', 'deduplication': 'off', 'blocklength': '512B', 'sync': 'always', 'protocoltype': 'ISCSI', 'recordsize': '16k'} (Dict) These values will be used for CloudByte storage’s createVolume API call.
cb_tsm_name = None (String) This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte storage. A volume will be created in this TSM.
cb_update_file_system = compression, sync, noofcopies, readonly (List) These values will be used for CloudByte storage’s updateFileSystem API call.
cb_update_qos_group = iops, latency, graceallowed (List) These values will be used for CloudByte storage’s updateQosGroup API call.
Coho Data volume driver

The Coho DataStream Scale-Out Storage allows your Block Storage service to scale seamlessly. The architecture consists of commodity storage servers with SDN ToR switches. Leveraging an SDN OpenFlow controller allows you to scale storage horizontally, while avoiding storage and network bottlenecks by intelligent load-balancing and parallelized workloads. High-performance PCIe NVMe flash, paired with traditional hard disk drives (HDD) or solid-state drives (SSD), delivers low-latency performance even with highly mixed workloads in large scale environment.

Coho Data’s storage features include real-time instance level granularity performance and capacity reporting via API or UI, and single-IP storage endpoint access.

Supported operations
  • Create, delete, attach, detach, retype, clone, and extend volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Create a thin provisioned volume.
  • Get volume statistics.
Coho Data QoS support

QoS support for the Coho Data driver includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage QoS specs extension module:

  • maxIOPS - The maximum number of IOPS allowed for this volume.
  • maxMBS - The maximum throughput allowed for this volume.

The QoS keys above must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate

Note

If you change a volume type with QoS to a new volume type without QoS, the QoS configuration settings will be removed.
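As an illustrative sketch, the spec name and placeholder values below are assumptions; the commands create a QoS spec with the supported keys and associate it with a volume type:

$ cinder qos-create coho-qos maxIOPS=5000 maxMBS=500
$ cinder qos-associate <qos-spec-id> <volume-type-id>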

System requirements
  • NFS client on the Block storage controller.
Coho Data Block Storage driver configuration
  1. Create cinder volume type.

    $ cinder type-create coho-1
    
  2. Edit the OpenStack Block Storage service configuration file. The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Coho Data storage back end:

    [DEFAULT]
    enabled_backends = coho-1
    default_volume_type = coho-1
    
    [coho-1]
    volume_driver = cinder.volume.drivers.coho.CohoDriver
    volume_backend_name = coho-1
    nfs_shares_config = /etc/cinder/coho_shares
    nas_secure_file_operations = 'false'
    
  3. Add your list of Coho Datastream NFS addresses to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/coho_shares, then:

    $ cat /etc/cinder/coho_shares
    <coho-nfs-ip>:/<export-path>
    
  4. Restart the cinder-volume service to enable the Coho Data driver.
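
     For example, on distributions using the service command (the exact command depends on your init system):

     $ service cinder-volume restart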

Description of Coho volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
coho_rpc_port = 2049 (Integer) RPC port to connect to Coho Data MicroArray
CoprHD FC, iSCSI, and ScaleIO drivers

CoprHD is an open source software-defined storage controller and API platform. It enables policy-based management and cloud automation of storage resources for block, object and file storage providers. For more details, see CoprHD.

EMC ViPR Controller is the commercial offering of CoprHD. The same volume drivers can also be used as EMC ViPR Controller Block Storage drivers.

System requirements

CoprHD version 3.0 is required. Refer to the CoprHD documentation for installation and configuration instructions.

If you are using these drivers to integrate with EMC ViPR Controller, use EMC ViPR Controller 3.0.

Supported operations

The following operations are supported:

  • Create, delete, attach, detach, retype, clone, and extend volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Get volume statistics.
  • Create, delete, and update consistency groups.
  • Create and delete consistency group snapshots.
Driver options

The following table contains the configuration options specific to the CoprHD volume driver.

Description of Coprhd volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
coprhd_emulate_snapshot = False (Boolean) True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
coprhd_hostname = None (String) Hostname for the CoprHD Instance
coprhd_password = None (String) Password for accessing the CoprHD Instance
coprhd_port = 4443 (Port number) Port for the CoprHD Instance
coprhd_project = None (String) Project to utilize within the CoprHD Instance
coprhd_scaleio_rest_gateway_host = None (String) Rest Gateway IP or FQDN for Scaleio
coprhd_scaleio_rest_gateway_port = 4984 (Port number) Rest Gateway Port for Scaleio
coprhd_scaleio_rest_server_password = None (String) Rest Gateway Password
coprhd_scaleio_rest_server_username = None (String) Username for Rest Gateway
coprhd_tenant = None (String) Tenant to utilize within the CoprHD Instance
coprhd_username = None (String) Username for accessing the CoprHD Instance
coprhd_varray = None (String) Virtual Array to utilize within the CoprHD Instance
scaleio_server_certificate_path = None (String) Server certificate path
scaleio_verify_server_certificate = False (Boolean) verify server certificate
Preparation

This involves setting up the CoprHD environment first and then configuring the CoprHD Block Storage driver.

CoprHD

The CoprHD environment must meet specific configuration requirements to support the OpenStack Block Storage driver.

  • CoprHD users must be assigned a Tenant Administrator role or a Project Administrator role for the Project being used. CoprHD roles are configured by CoprHD Security Administrators. Consult the CoprHD documentation for details.
  • A CoprHD system administrator must perform the following configuration tasks using the CoprHD UI, CoprHD API, or CoprHD CLI:
    • Create CoprHD virtual array
    • Create CoprHD virtual storage pool
    • Virtual Array designated for iSCSI driver must have an IP network created with appropriate IP storage ports
    • Designated tenant for use
    • Designated project for use

Note

Each back end should manage one virtual array and one virtual storage pool. However, you can run multiple instances of the CoprHD Block Storage driver that share the same virtual array and virtual storage pool.

  • A typical CoprHD virtual storage pool will have the following values specified:
    • Storage Type: Block
    • Provisioning Type: Thin
    • Protocol: iSCSI/Fibre Channel(FC)/ScaleIO
    • Multi-Volume Consistency: DISABLED OR ENABLED
    • Maximum Native Snapshots: A value greater than 0 allows the OpenStack user to take Snapshots
CoprHD drivers - Single back end

cinder.conf

  1. Modify /etc/cinder/cinder.conf by adding the following lines, substituting values for your environment:

    [coprhd-iscsi]
    volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
    volume_backend_name = coprhd-iscsi
    coprhd_hostname = <CoprHD-Host-Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    coprhd_emulate_snapshot = True or False, True if the CoprHD vpool has VMAX or VPLEX as the backing storage
    
  2. If you use the ScaleIO back end, add the following lines:

    coprhd_scaleio_rest_gateway_host = <IP or FQDN>
    coprhd_scaleio_rest_gateway_port = 443
    coprhd_scaleio_rest_server_username = <username>
    coprhd_scaleio_rest_server_password = <password>
    scaleio_verify_server_certificate = True or False
    scaleio_server_certificate_path = <path-of-certificate-for-validation>
    
  3. Specify the driver using the enabled_backends parameter:

    enabled_backends = coprhd-iscsi
    

    Note

    To utilize the Fibre Channel driver, replace the volume_driver line above with:

    volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
    

    Note

    To utilize the ScaleIO driver, replace the volume_driver line above with:

    volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
    

    Note

    Set coprhd_emulate_snapshot to True if the CoprHD vpool has VMAX or VPLEX as the back-end storage. For these type of back-end storages, when a user tries to create a snapshot, an actual volume gets created in the back end.

  4. Modify the rpc_response_timeout value in /etc/cinder/cinder.conf to at least 5 minutes. If this entry does not already exist within the cinder.conf file, add it in the [DEFAULT] section:

    [DEFAULT]
    ...
    rpc_response_timeout = 300
    
  5. Now, restart the cinder-volume service.

Volume type creation and extra specs

  1. Create OpenStack volume types:

    $ openstack volume type create <typename>
    
  2. Map the OpenStack volume type to the CoprHD virtual pool:

    $ openstack volume type set <typename> --property CoprHD:VPOOL=<CoprHD-PoolName>
    
  3. Map the volume type created to appropriate back-end driver:

    $ openstack volume type set <typename> --property volume_backend_name=<VOLUME_BACKEND_DRIVER>
    
CoprHD drivers - Multiple back-ends

cinder.conf

  1. Add or modify the following entries if you are planning to use multiple back-end drivers:

    enabled_backends = coprhddriver-iscsi,coprhddriver-fc,coprhddriver-scaleio
    
  2. Add the following at the end of the file:

    [coprhddriver-iscsi]
    volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
    volume_backend_name = EMCCoprHDISCSIDriver
    coprhd_hostname = <CoprHD Host Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    
    
    [coprhddriver-fc]
    volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
    volume_backend_name = EMCCoprHDFCDriver
    coprhd_hostname = <CoprHD Host Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    
    
    [coprhddriver-scaleio]
    volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
    volume_backend_name = EMCCoprHDScaleIODriver
    coprhd_hostname = <CoprHD Host Name>
    coprhd_port = 4443
    coprhd_username = <username>
    coprhd_password = <password>
    coprhd_tenant = <CoprHD-Tenant-Name>
    coprhd_project = <CoprHD-Project-Name>
    coprhd_varray = <CoprHD-Virtual-Array-Name>
    coprhd_scaleio_rest_gateway_host = <ScaleIO Rest Gateway>
    coprhd_scaleio_rest_gateway_port = 443
    coprhd_scaleio_rest_server_username = <rest gateway username>
    coprhd_scaleio_rest_server_password = <rest gateway password>
    scaleio_verify_server_certificate = True or False
    scaleio_server_certificate_path = <certificate path>
    
  3. Restart the cinder-volume service.

Volume type creation and extra specs

Setup the volume-types and volume-type to volume-backend association:

$ openstack volume type create "CoprHD High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property CoprHD:VPOOL="High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property volume_backend_name=EMCCoprHDISCSIDriver

$ openstack volume type create "CoprHD High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property CoprHD:VPOOL="High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property volume_backend_name=EMCCoprHDFCDriver

$ openstack volume type create "CoprHD performance SIO"
$ openstack volume type set "CoprHD performance SIO" --property CoprHD:VPOOL="Scaled Perf"
$ openstack volume type set "CoprHD performance SIO" --property volume_backend_name=EMCCoprHDScaleIODriver
ISCSI driver notes
  • The compute host must be added to the CoprHD along with its ISCSI initiator.
  • The ISCSI initiator must be associated with IP network on the CoprHD.
FC driver notes
  • The compute host must be attached to a VSAN or fabric discovered by CoprHD.
  • There is no need to perform any SAN zoning operations. CoprHD will perform the necessary operations automatically as part of the provisioning process.
ScaleIO driver notes
  • Install the ScaleIO SDC on the compute host.

  • The compute host must be added as an SDC to the ScaleIO MDM using the following command, where the list of MDM IP addresses starts with the primary MDM and is separated by commas:

    /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip <MDM-IP-list>

    For example:

    /opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.247.78.45,10.247.78.46,10.247.78.47

This step has to be repeated whenever the SDC (compute host in this case) is rebooted.

Consistency group configuration

To enable support for consistency group and consistency group snapshot operations, use a text editor to edit the file /etc/cinder/policy.json and change the values of the following fields as specified. After editing the file, restart the c-api service:

"consistencygroup:create" : "",
"consistencygroup:delete": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:update": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
Names of resources in back-end storage

All resources, such as volumes, consistency groups, snapshots, and consistency group snapshots, use the OpenStack display name for naming in the back-end storage.

Datera drivers
Datera iSCSI driver

The Datera Elastic Data Fabric (EDF) is scale-out storage software that turns standard, commodity hardware into a RESTful API-driven, intent-based, policy-controlled storage fabric for large-scale clouds. The Datera EDF integrates seamlessly with the Block Storage service and provides storage through the iSCSI block protocol. Datera supports all of the Block Storage services.

System requirements, prerequisites, and recommendations
Prerequisites
  • You must run compatible versions of OpenStack and Datera EDF. Refer to the Datera compatibility documentation to determine the correct versions.
  • All nodes must have access to Datera EDF through the iSCSI block protocol.
  • All nodes accessing the Datera EDF must have the following packages installed:
    • Linux I/O (LIO)
    • open-iscsi
    • open-iscsi-utils
    • wget
Description of Datera volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
datera_503_interval = 5 (Integer) Interval between 503 retries
datera_503_timeout = 120 (Integer) Timeout for HTTP 503 retry messages
datera_acl_allow_all = False (Boolean) DEPRECATED: True to set acl ‘allow_all’ on volumes created
datera_api_port = 7717 (String) Datera API port.
datera_api_version = 2 (String) Datera API version.
datera_debug = False (Boolean) True to set function arg and return logging
datera_debug_replica_count_override = False (Boolean) ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1
datera_num_replicas = 3 (Integer) DEPRECATED: Number of replicas to create of an inode.
Configuring the Datera volume driver

Modify the /etc/cinder/cinder.conf file for Block Storage service.

  • Enable the Datera volume driver:
[DEFAULT]
# ...
enabled_backends = datera
# ...
  • Optional. Designate Datera as the default back-end:
default_volume_type = datera
  • Create a new section for the Datera back-end definition. The san_ip can be either the Datera Management Network VIP or one of the Datera iSCSI Access Network VIPs depending on the network segregation requirements:
[datera]
volume_driver = cinder.volume.drivers.datera.DateraDriver
san_ip = <IP_ADDR>            # The OOB Management IP of the cluster
san_login = admin             # Your cluster admin login
san_password = password       # Your cluster admin password
san_is_local = true
datera_num_replicas = 3       # Number of replicas to use for volume
Enable the Datera volume driver
  • Verify the OpenStack control node can reach the Datera san_ip:
$ ping -c 4 <san_IP>
  • Start the Block Storage service on all nodes running the cinder-volume services:
$ service cinder-volume restart

QoS support for the Datera drivers includes the ability to set the following capabilities in QoS specs:

  • read_iops_max – must be positive integer
  • write_iops_max – must be positive integer
  • total_iops_max – must be positive integer
  • read_bandwidth_max – in KB per second, must be positive integer
  • write_bandwidth_max – in KB per second, must be positive integer
  • total_bandwidth_max – in KB per second, must be positive integer
# Create qos spec
$ cinder qos-create DateraBronze total_iops_max=1000 \
  total_bandwidth_max=2000

# Associate qos-spec with volume type
$ cinder qos-associate <qos-spec-id> <volume-type-id>

# Add additional qos values or update existing ones
$ cinder qos-key <qos-spec-id> set read_bandwidth_max=500
Supported operations
  • Create, delete, attach, detach, manage, unmanage, and list volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Support for naming convention changes.
Configuring multipathing

The following configuration is for 3.x Linux kernels; some parameters may differ between Linux distributions. Make the following changes in the multipath.conf file:

defaults {
    checker_timer 5
}
devices {
    device {
        vendor "DATERA"
        product "IBLOCK"
        getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --page=0x80 --device=/dev/%n"
        path_grouping_policy group_by_prio
        path_checker tur
        prio alua
        path_selector "queue-length 0"
        hardware_handler "1 alua"
        failback 5
    }
}
blacklist {
    device {
        vendor ".*"
        product ".*"
    }
}
blacklist_exceptions {
    device {
        vendor "DATERA.*"
        product "IBLOCK.*"
    }
}
Dell EqualLogic volume driver

The Dell EqualLogic volume driver interacts with configured EqualLogic arrays and supports various operations.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Clone a volume.
Configuration

The OpenStack Block Storage service supports:

  • Multiple instances of Dell EqualLogic Groups or Dell EqualLogic Group Storage Pools, including multiple pools on a single array.

The Dell EqualLogic volume driver’s ability to access the EqualLogic Group is dependent upon the generic block storage driver’s SSH settings in the /etc/cinder/cinder.conf file (see Block Storage service sample configuration files for reference).

Description of Dell EqualLogic volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
eqlx_chap_login = admin (String) Existing CHAP account name. Note that this option is deprecated in favour of “chap_username” as specified in cinder/volume/driver.py and will be removed in next release.
eqlx_chap_password = password (String) Password for specified CHAP account name. Note that this option is deprecated in favour of “chap_password” as specified in cinder/volume/driver.py and will be removed in the next release
eqlx_cli_max_retries = 5 (Integer) Maximum retry count for reconnection. Default is 5.
eqlx_cli_timeout = 30 (Integer) Timeout for the Group Manager cli command execution. Default is 30. Note that this option is deprecated in favour of “ssh_conn_timeout” as specified in cinder/volume/drivers/san/san.py and will be removed in M release.
eqlx_group_name = group-0 (String) Group name to use for creating volumes. Defaults to “group-0”.
eqlx_pool = default (String) Pool in which volumes will be created. Defaults to “default”.
eqlx_use_chap = False (Boolean) Use CHAP authentication for targets. Note that this option is deprecated in favour of “use_chap_auth” as specified in cinder/volume/driver.py and will be removed in next release.
Default (single-instance) configuration

The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:

[DEFAULT]
# Required settings

volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

# Optional settings

san_thin_provision = true|false
eqlx_use_chap = true|false
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
eqlx_cli_max_retries = 5
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5

In this example, replace the following variables accordingly:

IP_EQLX
The IP address used to reach the Dell EqualLogic Group through SSH. This field has no default value.
SAN_UNAME
The user name to log in to the Group manager via SSH at the san_ip. Default user name is grpadmin.
SAN_PW
The corresponding password of SAN_UNAME. Not used when san_private_key is set. Default password is password.
EQLX_GROUP
The group to be used for a pool where the Block Storage service will create volumes and snapshots. Default group is group-0.
EQLX_POOL
The pool where the Block Storage service will create volumes and snapshots. Default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
EQLX_UNAME
The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. Default account name is chapadmin.
EQLX_PW
The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so you must set this password manually.
SAN_KEY_PATH (optional)
The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set. There is no default value.

In addition, enable thin provisioning for SAN volumes using the default san_thin_provision = true setting.

Multiple back-end configuration

The following example shows the typical configuration for a Block Storage service that uses two Dell EqualLogic back ends:

enabled_backends = backend1,backend2
san_ssh_port = 22
ssh_conn_timeout = 30
san_thin_provision = true

[backend1]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend1
san_ip = IP_EQLX1
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

[backend2]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend2
san_ip = IP_EQLX2
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL

In this example:

  • Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
  • Each Dell EqualLogic back-end configuration ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
  • The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
  • The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
  • The IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Group of backend1 and backend2 through SSH, respectively.

For information on configuring multiple back ends, see Configure a multiple-storage back end.
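Each back end can then be exposed to tenants through a volume type mapped to it via volume_backend_name; the type names below are illustrative:

$ cinder type-create Backend1Type
$ cinder type-key Backend1Type set volume_backend_name=backend1
$ cinder type-create Backend2Type
$ cinder type-key Backend2Type set volume_backend_name=backend2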

Dell Storage Center Fibre Channel and iSCSI drivers

The Dell Storage Center volume driver interacts with configured Storage Center arrays.

The Dell Storage Center driver manages Storage Center arrays through the Dell Storage Manager (DSM). DSM connection settings and Storage Center options are defined in the cinder.conf file.

Prerequisite: Dell Storage Manager 2015 R1 or later must be used.

Supported operations

The Dell Storage Center volume driver provides the following Cinder volume operations:

  • Create, delete, attach (map), and detach (unmap) volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Create, delete, list and update a consistency group.
  • Create, delete, and list consistency group snapshots.
  • Manage an existing volume.
  • Failover-host for replicated back ends.
  • Create a replication using Live Volume.
Extra spec options

Volume type extra specs can be used to enable a variety of Dell Storage Center options, such as selecting Storage Profiles or Replay Profiles, enabling replication, and setting replication options including Live Volume and Active Replay replication.

Storage Profiles control how Storage Center manages volume data. For a given volume, the selected Storage Profile dictates which disk tier accepts initial writes, as well as how data progression moves data between tiers to balance performance and cost. Predefined Storage Profiles are the most effective way to manage data in Storage Center.

By default, if no Storage Profile is specified in the volume extra specs, the default Storage Profile for the user account configured for the Block Storage driver is used. To use a Storage Profile other than the default, set the extra spec key storagetype:storageprofile to the name of a Storage Profile on the Storage Center.

For ease of use from the command line, spaces in Storage Profile names are ignored. As an example, here is how to define two volume types using the High Priority and Low Priority Storage Profiles:

$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority

Replay Profiles control how often the Storage Center takes a replay of a given volume and how long those replays are kept. The default profile is the daily profile that sets the replay to occur once a day and to persist for one week.

To use Replay Profiles other than the default daily profile, set the extra spec key storagetype:replayprofiles to the name of a Replay Profile, or a comma-separated list of profile names, on the Storage Center.

As an example, here is how to define a volume type using the hourly Replay Profile and another specifying both hourly and the default daily profile:

$ cinder type-create "HourlyType"
$ cinder type-key "HourlyType" set storagetype:replayprofiles=hourly
$ cinder type-create "HourlyAndDailyType"
$ cinder type-key "HourlyAndDailyType" set storagetype:replayprofiles=hourly,daily

Note the comma separated string for the HourlyAndDailyType.

Replication for a given volume type is enabled via the extra spec replication_enabled.

To create a volume type that specifies only replication enabled back ends:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'

Extra specs can be used to configure replication. In addition to the Replay Profiles above, replication:activereplay can be set to enable replication of the volume’s active replay, and the replication type can be changed to synchronous by setting the replication_type extra spec.

To create a volume type that enables replication of the active replay:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication:activereplay='<is> True'

To create a volume type that enables synchronous replication:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication_type='<in> sync'

To create a volume type that enables replication using Live Volume:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication:livevolume='<is> True'
iSCSI configuration

Use the following instructions to update the configuration file for iSCSI:

default_volume_type = delliscsi
enabled_backends = delliscsi

[delliscsi]
# Name to give this storage back-end
volume_backend_name = delliscsi
# The iSCSI driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
# IP address of DSM
san_ip = 172.23.8.101
# DSM user name
san_login = Admin
# DSM password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702

# ==Optional settings==

# The DSM API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
Fibre Channel configuration

Use the following instructions to update the configuration file for fibre channel:

default_volume_type = dellfc
enabled_backends = dellfc

[dellfc]
# Name to give this storage back-end
volume_backend_name = dellfc
# The FC driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
# IP address of the DSM
san_ip = 172.23.8.101
# DSM user name
san_login = Admin
# DSM password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702

# ==Optional settings==

# The DSM API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
Dual DSM

It is possible to specify a secondary DSM to use in case the primary DSM fails.

Configuration is done through cinder.conf. Both DSMs have to be configured to manage the same set of Storage Centers for this back end: the Storage Center specified by dell_sc_ssn and any Storage Centers used for replication or Live Volume.

Add network and credential information to the backend to enable Dual DSM.

[dell]
# The IP address and port of the secondary DSM.
secondary_san_ip = 192.168.0.102
secondary_sc_api_port = 3033
# Specify credentials for the secondary DSM.
secondary_san_login = Admin
secondary_san_password = secret

The driver will use the primary DSM until a failure, at which point it will attempt to use the secondary. It will continue to use the secondary until the volume service is restarted or the secondary fails, at which point it will attempt to use the primary again.
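The failover behavior described above can be sketched in Python. This is an illustrative model, not the actual driver code; a service restart corresponds to constructing the client again, which resets it to the primary.

```python
class DualDsmClient:
    """Illustrative model of the dual-DSM behavior: use the primary DSM
    until it fails, then stick with the other DSM until it fails too
    (or until the volume service restarts and this object is recreated)."""

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary
        self.active = primary  # a restart starts back at the primary

    def request(self, op):
        try:
            return self.active(op)
        except ConnectionError:
            # Swap to the other DSM and retry the operation once.
            self.active = (self.secondary if self.active is self.primary
                           else self.primary)
            return self.active(op)
```

Note that the swap is sticky: after one failure, every subsequent request goes to the other DSM until it fails in turn.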

Replication configuration

Add the following to the back-end specification to specify another Storage Center to replicate to.

[dell]
replication_device = target_device_id: 65495, qosnode: cinderqos

The target_device_id is the SSN of the remote Storage Center and the qosnode is the QoS Node setup between the two Storage Centers.

Note that more than one replication_device line can be added, although doing so will slow volume operations down.
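A back end replicating to two Storage Centers might look like this (the second SSN and QoS node name are illustrative, not values from this document):

```ini
[dell]
# Replicate to two remote Storage Centers (second line is illustrative).
replication_device = target_device_id: 65495, qosnode: cinderqos
replication_device = target_device_id: 65496, qosnode: cinderqos2
```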

A volume is only replicated if the volume is of a volume-type that has the extra spec replication_enabled set to <is> True.

Replication notes

This driver supports both standard replication and Live Volume (if supported and licensed). The main difference is that a VM attached to a Live Volume is mapped to both Storage Centers. Even so, a failure of the primary still requires a failover-host operation to move control of a Live Volume to the secondary controller.

Existing mappings should continue to work without the instance being remapped, but the instance might need to be rebooted.

Live Volume is more resource intensive than replication; be sure to plan accordingly.

Failback

The failover-host command is designed for the case where the primary system is not coming back. If it has been executed and the primary has since been restored, it is possible to attempt a failback.

Simply specify default as the backend_id.

$ cinder failover-host cinder@delliscsi --backend_id default

This command does non-trivial heavy lifting. It attempts to recover as best it can, but if things have diverged too far it can only do so much. It is also a one-time-only command, so do not reboot or restart the service while it is running.

Failover and failback are significant operations under OpenStack Cinder. Be sure to consult with support before attempting them.

Server type configuration

This option sets the default Server OS type to use when creating a server definition on the Dell Storage Center.

When attaching a volume to a node, the Dell Storage Center driver creates a server definition on the storage array. This definition includes a Server OS type. The default type used by the Dell Storage Center cinder driver is “Red Hat Linux 6.x”, a modern operating system definition that supports all the features of an OpenStack node.

Add the following to the back-end specification to specify the Server OS to use when creating a server definition. The server type used must come from the drop down list in the DSM.

[dell]
default_server_os = 'Red Hat Linux 7.x'

Note that this server definition is created once. Changing this setting after the fact will not change an existing definition. The selected Server OS does not have to match the actual OS used on the node.

Driver options

The following table contains the configuration options specific to the Dell Storage Center volume driver.

Description of Dell Storage Center volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
dell_sc_api_port = 3033 (Port number) Dell API port
dell_sc_server_folder = openstack (String) Name of the server folder to use on the Storage Center
dell_sc_ssn = 64702 (Integer) Storage Center System Serial Number
dell_sc_verify_cert = False (Boolean) Enable HTTPS SC certificate verification
dell_sc_volume_folder = openstack (String) Name of the volume folder to use on the Storage Center
dell_server_os = Red Hat Linux 6.x (String) Server OS type to use when creating a new server on the Storage Center.
excluded_domain_ip = None (Unknown) Domain IP to be excluded from iSCSI returns.
secondary_san_ip = (String) IP address of secondary DSM controller
secondary_san_login = Admin (String) Secondary DSM user name
secondary_san_password = (String) Secondary DSM user password
secondary_sc_api_port = 3033 (Port number) Secondary Dell API port
Dot Hill AssuredSAN Fibre Channel and iSCSI drivers

The DotHillFCDriver and DotHillISCSIDriver volume drivers allow Dot Hill arrays to be used for block storage in OpenStack deployments.

System requirements

To use the Dot Hill drivers, the following are required:

  • Dot Hill AssuredSAN array with:
    • iSCSI or FC host interfaces
    • G22x firmware or later
    • Appropriate licenses for the snapshot and copy volume features
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP must be enabled on the array
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
Configuring the array
  1. Verify that the array can be managed via an HTTPS connection. HTTP can also be used if dothill_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

    If you plan to use vdisks instead of virtual pools, create or identify one or more vdisks to be used for OpenStack storage; typically this will mean creating or setting aside one disk group for each of the A and B controllers.

  2. Edit the cinder.conf file to define a storage back-end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in key=value format.

    • The dothill_backend_name value specifies the name of the storage pool or vdisk on the array.
    • The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The rest of the options will be repeated for each storage pool in a given array: the appropriate Cinder driver name; IP address or hostname of the array management interface; the username and password of an array user account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.

    iSCSI example back-end entries

    [pool-a]
    dothill_backend_name = A
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    dothill_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    dothill_backend_name = B
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    dothill_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Fibre Channel example back-end entries

    [pool-a]
    dothill_backend_name = A
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
    [pool-b]
    dothill_backend_name = B
    volume_backend_name = dothill-array
    volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
  3. If any dothill_backend_name value refers to a vdisk rather than a virtual pool, add an additional statement dothill_backend_type = linear to that back-end entry.

  4. If HTTPS is not enabled in the array, include dothill_api_protocol = http in each of the back-end definitions.

  5. If HTTPS is enabled, you can enable certificate verification with the option dothill_verify_certificate=True. You may also use the dothill_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.
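For example, certificate verification for a back end might be configured as follows (the CA bundle path is illustrative):

```ini
[pool-a]
dothill_verify_certificate = True
# Optional: CA_BUNDLE file with CAs beyond the default list (illustrative path).
dothill_verify_certificate_path = /etc/ssl/certs/array-ca.pem
```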

  6. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example of [DEFAULT] section changes

    [DEFAULT]
      ...
    enabled_backends = pool-a,pool-b
    default_volume_type = dothill
      ...
    
  7. Create a new volume type for each distinct volume_backend_name value that you added to cinder.conf. The example below assumes that the same volume_backend_name=dothill-array option was specified in all of the entries, and specifies that the volume type dothill can be used to allocate volumes from any of them.

    Example of creating a volume type

    $ cinder type-create dothill
    
    $ cinder type-key dothill set volume_backend_name=dothill-array
    
  8. After modifying cinder.conf, restart the cinder-volume service.

Driver-specific options

The following table contains the configuration options that are specific to the Dot Hill drivers.

Description of Dot Hill volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
dothill_api_protocol = https (String) DotHill API interface protocol.
dothill_backend_name = A (String) Pool or Vdisk name to use for volume creation.
dothill_backend_type = virtual (String) linear (for Vdisk) or virtual (for Pool).
dothill_iscsi_ips = (List) List of comma-separated target iSCSI IP addresses.
dothill_verify_certificate = False (Boolean) Whether to verify DotHill array SSL certificate.
dothill_verify_certificate_path = None (String) DotHill array SSL certificate path.
EMC ScaleIO Block Storage driver configuration

ScaleIO is a software-only solution that uses existing servers’ local disks and LAN to create a virtual SAN that has all of the benefits of external storage, but at a fraction of the cost and complexity. Using the driver, Block Storage hosts can connect to a ScaleIO Storage cluster.

This section explains how to configure and connect the block storage nodes to a ScaleIO storage cluster.

Support matrix
ScaleIO version Supported Linux operating systems
1.32 CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12
2.0 CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12, Ubuntu 14.04
Deployment prerequisites
  • ScaleIO Gateway must be installed and accessible in the network. For installation steps, refer to the Preparing the Installation Manager and the Gateway section in the ScaleIO Deployment Guide. See Official documentation.
  • ScaleIO Data Client (SDC) must be installed on all OpenStack nodes.

Note

Ubuntu users must follow the specific instructions in the ScaleIO deployment guide for Ubuntu environments. See the Deploying on Ubuntu servers section in ScaleIO Deployment Guide. See Official documentation.

Official documentation

To find the ScaleIO documentation:

  1. Go to the ScaleIO product documentation page.
  2. From the left-side panel, select the relevant version (1.32 or 2.0).
  3. Search for “ScaleIO Installation Guide 1.32” or “ScaleIO 2.0 Deployment Guide” accordingly.
Supported operations
  • Create, delete, clone, attach, detach, manage, and unmanage volumes
  • Create, delete, manage, and unmanage volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Get volume statistics
  • Create, list, update, and delete consistency groups
  • Create, list, update, and delete consistency group snapshots
ScaleIO QoS support

QoS support for the ScaleIO driver includes the ability to set the following capabilities in the Block Storage API cinder.api.contrib.qos_specs_manage QoS specs extension module:

  • maxIOPS
  • maxIOPSperGB
  • maxBWS
  • maxBWSperGB

The QoS keys above must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate
maxIOPS
The QoS I/O rate limit. If not set, the I/O rate will be unlimited. The setting must be larger than 10.
maxIOPSperGB
The QoS I/O rate limit. The limit will be calculated by the specified value multiplied by the volume size. The setting must be larger than 10.
maxBWS
The QoS I/O bandwidth rate limit in KBs. If not set, the I/O bandwidth rate will be unlimited. The setting must be a multiple of 1024.
maxBWSperGB
The QoS I/O bandwidth rate limit in KBs. The limit will be calculated by the specified value multiplied by the volume size. The setting must be a multiple of 1024.

The driver always chooses the minimum between the QoS keys value and the relevant calculated value of maxIOPSperGB or maxBWSperGB.
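The minimum-of-limits rule can be illustrated with a short Python sketch (function and parameter names are illustrative, not driver internals):

```python
def effective_iops_limit(volume_size_gb, max_iops=None, max_iops_per_gb=None):
    """Sketch of the rule above: the driver applies the minimum of the
    absolute limit and the per-GB limit scaled by the volume size.
    Returns None when neither key is set (unlimited)."""
    candidates = []
    if max_iops is not None:
        candidates.append(max_iops)
    if max_iops_per_gb is not None:
        candidates.append(max_iops_per_gb * volume_size_gb)
    return min(candidates) if candidates else None
```

For an 8 GB volume with maxIOPS=5000 and maxIOPSperGB=500, the effective limit is min(5000, 500 * 8) = 4000. The same rule applies to maxBWS and maxBWSperGB.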

Since the limits are per SDC, they will be applied after the volume is attached to an instance, and thus to a compute node/SDC.

ScaleIO thin provisioning support

The Block Storage driver supports creation of thin-provisioned and thick-provisioned volumes. The provisioning type settings can be added as an extra specification of the volume type, as follows:

provisioning:type = thin (or thick)

The old extra spec sio:provisioning_type is deprecated.

Oversubscription

Configure the oversubscription ratio by adding the following parameter under the separate section for ScaleIO:

sio_max_over_subscription_ratio = OVER_SUBSCRIPTION_RATIO

Note

The default value for sio_max_over_subscription_ratio is 10.0.

Oversubscription is calculated correctly by the Block Storage service only if the extra specification provisioning:type appears in the volume type, regardless of the default provisioning type. The maximum oversubscription value supported for ScaleIO is 10.0.

Default provisioning type

If provisioning type settings are not specified in the volume type, the default value is set according to the san_thin_provision option in the configuration file. The default provisioning type will be thin if the option is not specified in the configuration file. To set the default provisioning type thick, set the san_thin_provision option to false in the configuration file, as follows:

san_thin_provision = false

The configuration file is usually located in /etc/cinder/cinder.conf. For a configuration example, see: cinder.conf.

ScaleIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in case of a single back end, or under a separate section in case of multiple back ends (for example [ScaleIO]). The configuration file is usually located at /etc/cinder/cinder.conf.

For a configuration example, refer to the example cinder.conf.

ScaleIO driver name

Configure the driver name by adding the following parameter:

volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
ScaleIO MDM server IP

The ScaleIO Meta Data Manager monitors and maintains the available resources and permissions.

To retrieve the MDM server IP address, use the drv_cfg --query_mdms command.

Configure the MDM server IP address by adding the following parameter:

san_ip = ScaleIO GATEWAY IP
ScaleIO Protection Domain name

ScaleIO allows multiple Protection Domains (groups of SDSs that provide backup for each other).

To retrieve the available Protection Domains, use the command scli --query_all and search for the Protection Domains section.

Configure the Protection Domain for newly created volumes by adding the following parameter:

sio_protection_domain_name = ScaleIO Protection Domain
ScaleIO Storage Pool name

A ScaleIO Storage Pool is a set of physical devices in a Protection Domain.

To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.

Configure the Storage Pool for newly created volumes by adding the following parameter:

sio_storage_pool_name = ScaleIO Storage Pool
ScaleIO Storage Pools

Multiple Storage Pools and Protection Domains can be listed for use by the virtual machines.

To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.

Configure the available Storage Pools by adding the following parameter:

sio_storage_pools = Comma-separated list of protection domain:storage pool name
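The expected shape of that comma-separated value can be sketched with a small parsing helper (an illustrative function, not the driver's own parser):

```python
def parse_storage_pools(value):
    """Split a sio_storage_pools-style value ("domain:pool,domain:pool")
    into (protection_domain, storage_pool) pairs."""
    pairs = []
    for entry in value.split(","):
        domain, _, pool = entry.strip().partition(":")
        pairs.append((domain, pool))
    return pairs
```

For example, "Domain1:Pool1,Domain2:Pool2" parses into the pairs (Domain1, Pool1) and (Domain2, Pool2), matching the format used in the configuration example below.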
ScaleIO user credentials

Block Storage requires a ScaleIO user with administrative privileges. ScaleIO recommends creating a dedicated OpenStack user account that has an administrative user role.

Refer to the ScaleIO User Guide for details on user account management.

Configure the user credentials by adding the following parameters:

san_login = ScaleIO username

san_password = ScaleIO password
Multiple back ends

Configuring multiple storage back ends allows you to create several back-end storage solutions that serve the same Compute resources.

When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.

Configuration example

cinder.conf example file

You can update the cinder.conf file by editing the necessary parameters as follows:

[DEFAULT]
enabled_backends = scaleio

[scaleio]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = GATEWAY_IP
sio_protection_domain_name = Default_domain
sio_storage_pool_name = Default_pool
sio_storage_pools = Domain1:Pool1,Domain2:Pool2
san_login = SIO_USER
san_password = SIO_PASSWD
san_thin_provision = false
Configuration options

The ScaleIO driver supports these configuration options:

Description of EMC SIO volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
sio_max_over_subscription_ratio = 10.0 (Floating point) max_over_subscription_ratio setting for the ScaleIO driver. This replaces the general max_over_subscription_ratio, which has no effect in this driver. Maximum value allowed for ScaleIO is 10.0.
sio_protection_domain_id = None (String) Protection Domain ID.
sio_protection_domain_name = None (String) Protection Domain name.
sio_rest_server_port = 443 (String) REST server port.
sio_round_volume_capacity = True (Boolean) Round up volume capacity.
sio_server_certificate_path = None (String) Server certificate path.
sio_storage_pool_id = None (String) Storage Pool ID.
sio_storage_pool_name = None (String) Storage Pool name.
sio_storage_pools = None (String) Storage Pools.
sio_unmap_volume_before_deletion = False (Boolean) Unmap volume before deletion.
sio_verify_server_certificate = False (Boolean) Verify server certificate.
EMC VMAX iSCSI and FC drivers

The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays with Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.

The drivers perform volume operations by communicating with the back-end VMAX storage. They use a CIM client in Python called PyWBEM to perform CIM operations over HTTP.

The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back end for VMAX storage operations.

The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system.

System requirements

The Cinder driver supports both VMAX-2 and VMAX-3 series.

For VMAX-2 series, SMI-S version V4.6.2.29 (Solutions Enabler 7.6.2.67) or Solutions Enabler 8.1.2 is required.

For VMAX-3 series, Solutions Enabler 8.3 is required. This is SSL only; refer to the SSL support section below.

When installing Solutions Enabler, make sure you explicitly add the SMI-S component.

You can download SMI-S from EMC’s support website (login is required). See the EMC SMI-S Provider release notes for installation instructions.

Ensure that there is only one SMI-S (ECOM) server active on the same VMAX array.

Required VMAX software suites for OpenStack

There are five Software Suites available for the VMAX All Flash and Hybrid:

  • Base Suite
  • Advanced Suite
  • Local Replication Suite
  • Remote Replication Suite
  • Total Productivity Pack

OpenStack requires the Advanced Suite and the Local Replication Suite, or the Total Productivity Pack (which includes the Advanced Suite and the Local Replication Suite), for the VMAX All Flash and Hybrid.

There are four bundled Software Suites for the VMAX2:

  • Advanced Software Suite
  • Base Software Suite
  • Enginuity Suite
  • Symmetrix Management Suite

OpenStack requires the Advanced Software Bundle for the VMAX2.

Alternatively, the VMAX2 optional software packages are:

  • EMC Storage Analytics (ESA)
  • FAST VP
  • Ionix ControlCenter and ProSphere Package
  • Open Replicator for Symmetrix
  • PowerPath
  • RecoverPoint EX
  • SRDF for VMAX 10K
  • Storage Configuration Advisor
  • TimeFinder for VMAX10K

OpenStack requires TimeFinder for VMAX10K for the VMAX2.

Each is licensed separately. For further details on how to get the relevant license(s), refer to eLicensing support below.

eLicensing support

To activate your entitlements and obtain your VMAX license files, visit the Service Center on https://support.emc.com, as directed on your License Authorization Code (LAC) letter emailed to you.

  • For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC account representative or authorized reseller.

  • For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.

  • If you are missing a LAC letter or require further instructions on activating your licenses through the Online Support site, contact EMC’s worldwide Licensing team at licensing@emc.com or call:

    North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.

    EMEA: +353 (0) 21 4879862 and follow the voice prompts.

Supported operations

VMAX drivers support these operations:

  • Create, list, delete, attach, and detach volumes
  • Create, list, and delete volume snapshots
  • Copy an image to a volume
  • Copy a volume to an image
  • Clone a volume
  • Extend a volume
  • Retype a volume (Host assisted volume migration only)
  • Create a volume from a snapshot
  • Create and delete consistency group
  • Create and delete consistency group snapshot
  • Modify consistency group (add/remove volumes)
  • Create consistency group from source (source can only be a CG snapshot)

VMAX drivers also support the following features:

  • Dynamic masking view creation
  • Dynamic determination of the target iSCSI IP address
  • iSCSI multipath support
  • Oversubscription
  • Live Migration

VMAX2:

  • FAST automated storage tiering policy
  • Striped volume creation

VMAX All Flash and Hybrid:

  • Service Level support
  • SnapVX support
  • All Flash support

Note

VMAX All Flash arrays with Solutions Enabler 8.3 have compression enabled by default when associated with the Diamond Service Level. This means volumes added to any newly created storage groups will be compressed.

Setup VMAX drivers
Pywbem Versions
(Ubuntu 14.04 LTS, Ubuntu 16.04 LTS, Red Hat Enterprise Linux, CentOS, and Fedora)
Pywbem version  Python2 pip  Python2 native  Python3 pip  Python3 native
0.9.0           No           N/A             Yes          N/A
0.8.4           No           N/A             Yes          N/A
0.7.0           No           Yes             No           Yes

Note

On Python2, use the updated distro version, for example:

# apt-get install python-pywbem

Note

On Python3, use the official pywbem version (V0.9.0 or v0.8.4).

  1. Install the python-pywbem package for your distribution.

    • On Ubuntu:

      # apt-get install python-pywbem
      
    • On openSUSE:

      # zypper install python-pywbem
      
    • On Red Hat Enterprise Linux, CentOS, and Fedora:

      # yum install pywbem
      
  2. Install iSCSI Utilities (for iSCSI drivers only).

    1. Download and configure the Cinder node as an iSCSI initiator.

    2. Install the open-iscsi package.

      • On Ubuntu:

        # apt-get install open-iscsi
        
      • On openSUSE:

        # zypper install open-iscsi
        
      • On Red Hat Enterprise Linux, CentOS, and Fedora:

        # yum install scsi-target-utils.x86_64
        
    3. Enable the iSCSI driver to start automatically.

  3. Download SMI-S from support.emc.com and install it. Add your VMAX arrays to SMI-S.

    You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.

    Note

    You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.

    SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and run TestSmiProvider.exe on Windows or ./TestSmiProvider on Linux.

    Use addsys in TestSmiProvider to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.

  4. Configure Block Storage

    Add the following entries to /etc/cinder/cinder.conf:

    enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC
    
    [CONF_GROUP_ISCSI]
    volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
    cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
    volume_backend_name = ISCSI_backend
    
    [CONF_GROUP_FC]
    volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
    cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
    volume_backend_name = FC_backend
    

    In this example, two back-end configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.

    Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:

    $ cinder type-create VMAX_ISCSI
    $ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
    $ cinder type-create VMAX_FC
    $ cinder type-key VMAX_FC set volume_backend_name=FC_backend
    

    By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.

    Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.

    Add the following lines to the XML file:

    VMAX2
    <?xml version="1.0" encoding="UTF-8" ?>
    <EMC>
      <EcomServerIp>1.1.1.1</EcomServerIp>
      <EcomServerPort>00</EcomServerPort>
      <EcomUserName>user1</EcomUserName>
      <EcomPassword>password1</EcomPassword>
      <PortGroups>
        <PortGroup>OS-PORTGROUP1-PG</PortGroup>
        <PortGroup>OS-PORTGROUP2-PG</PortGroup>
      </PortGroups>
      <Array>111111111111</Array>
      <Pool>FC_GOLD1</Pool>
      <FastPolicy>GOLD1</FastPolicy>
    </EMC>
    
    VMAX All Flash and Hybrid
    <?xml version="1.0" encoding="UTF-8" ?>
    <EMC>
      <EcomServerIp>1.1.1.1</EcomServerIp>
      <EcomServerPort>00</EcomServerPort>
      <EcomUserName>user1</EcomUserName>
      <EcomPassword>password1</EcomPassword>
      <PortGroups>
        <PortGroup>OS-PORTGROUP1-PG</PortGroup>
        <PortGroup>OS-PORTGROUP2-PG</PortGroup>
      </PortGroups>
      <Array>111111111111</Array>
      <Pool>SRP_1</Pool>
      <SLO>Gold</SLO>
      <Workload>OLTP</Workload>
    </EMC>
    

    Where:

EcomServerIp
IP address of the ECOM server which is packaged with SMI-S.
EcomServerPort
Port number of the ECOM server which is packaged with SMI-S.
EcomUserName and EcomPassword
Credentials for the ECOM server.
PortGroups
Supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have sufficient number and distribution of ports (across directors and switches) as to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given back end), as appropriate for the configured driver (iSCSI or FC).
Array
Unique VMAX array serial number.
Pool
Unique pool name within a given array. For back ends not using FAST automated tiering, the pool is a single pool that has been created by the administrator. For back ends exposing FAST policy automated tiering, the pool is the bind pool to be used with the FAST policy.
FastPolicy
VMAX2 only. Name of the FAST Policy to be used. By including this tag, volumes managed by this back end are treated as under FAST control. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
SLO
VMAX All Flash and Hybrid only. The Service Level Objective (SLO) that manages the underlying storage to provide expected performance. Omitting the SLO tag means that non FAST storage groups will be created instead (storage groups not associated with any service level).
Workload
VMAX All Flash and Hybrid only. When a workload type is added, the latency range is reduced due to the added information. Omitting the Workload tag means the latency range will be the widest for its SLO type.
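The random port-group selection described under PortGroups can be sketched in a few lines. This is an illustrative sketch only, not the driver's actual code; pick_port_group is a hypothetical helper, and the group names mirror the sample XML file above.

```python
import random

# Port groups as configured in the sample EMC XML file above.
port_groups = ["OS-PORTGROUP1-PG", "OS-PORTGROUP2-PG"]

def pick_port_group(groups):
    # The driver chooses a group at random for each new dynamic masking
    # view, evenly distributing load across the configured groups.
    return random.choice(groups)
```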
FC Zoning with VMAX

Zone Manager is required when there is a fabric between the host and array. This is necessary for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns.

iSCSI with VMAX
  • Make sure the iscsi-initiator-utils package is installed on all Compute nodes.

Note

You can only ping the VMAX iSCSI target ports when there is a valid masking view. An attach operation creates this masking view.

VMAX masking view and group naming info
Masking view names

Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions. [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.

VMAX2

OS-[shortHostName]-[poolName]-[protocol]-MV

VMAX2 (where FAST policy is used)

OS-[shortHostName]-[fastPolicy]-[protocol]-MV

VMAX All Flash and Hybrid

OS-[shortHostName]-[SRP]-[SLO]-[workload]-[protocol]-MV
Initiator group names

For each host that is attached to VMAX volumes using the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new volume attach operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the initiator group as required. Names are of the following format. [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.

OS-[shortHostName]-[protocol]-IG

Note

Hosts attaching to OpenStack managed VMAX storage cannot also attach to storage on the same VMAX that is not managed by OpenStack.

FA port groups

VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file.

Storage group names

As volumes are attached to a host, they are either added to an existing storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.

VMAX2

OS-[shortHostName]-[poolName]-[protocol]-SG

VMAX2 (where FAST policy is used)

OS-[shortHostName]-[fastPolicy]-[protocol]-SG

VMAX All Flash and Hybrid

OS-[shortHostName]-[SRP]-[SLO]-[Workload]-[protocol]-SG
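The naming conventions above can be sketched programmatically. This is an illustrative sketch of the VMAX All Flash and Hybrid formats only; vmax_names is a hypothetical helper, not part of the driver.

```python
def vmax_names(short_host, srp, slo, workload, protocol):
    # protocol is "I" (iSCSI) or "F" (Fibre Channel).
    base = "OS-{}-{}-{}-{}-{}".format(short_host, srp, slo, workload, protocol)
    masking_view = base + "-MV"
    storage_group = base + "-SG"
    # The initiator group depends only on the host and the protocol.
    initiator_group = "OS-{}-{}-IG".format(short_host, protocol)
    return masking_view, storage_group, initiator_group
```

For example, vmax_names("myhost", "SRP_1", "Gold", "OLTP", "I") yields OS-myhost-SRP_1-Gold-OLTP-I-MV, the matching -SG name, and OS-myhost-I-IG.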
VMAX2 concatenated or striped volumes

In order to support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance.

Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec for the volume type storagetype:stripecount representing the number of meta members in the striped volume. The example below means that each volume created under the GoldStriped volume type will be striped and made up of 4 meta members.

$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4
SSL support

Note

The ECOM component in Solutions Enabler enforces SSL in 8.3. By default, this port is 5989.

  1. Get the CA certificate of the ECOM server:

    # openssl s_client -showcerts -connect <ecom_hostname>.lss.emc.com:5989 </dev/null
    
  2. Copy the pem file to the system certificate directory:

    # cp <ecom_hostname>.lss.emc.com.pem /usr/share/ca-certificates/<ecom_hostname>.lss.emc.com.crt
    
  3. Update the CA certificate database with the following command (accept the defaults):

    # dpkg-reconfigure ca-certificates
    
  4. Update /etc/cinder/cinder.conf to reflect SSL functionality by adding the following to the back end block:

    driver_ssl_cert_verify = False
    driver_use_ssl = True
    driver_ssl_cert_path = /opt/stack/<ecom_hostname>.lss.emc.com.pem (optional if Steps 2 and 3 are skipped)
    
  5. Update EcomServerIp to ECOM host name and EcomServerPort to secure port (5989 by default) in /etc/cinder/cinder_emc_config_<conf_group>.xml.

Oversubscription support

Oversubscription support requires /etc/cinder/cinder.conf to be updated with two additional options: max_over_subscription_ratio and reserved_percentage. In the sample below, the value of 2.0 for max_over_subscription_ratio means that the pools are oversubscribed by a factor of 2, or 200% oversubscribed. The reserved_percentage is the high water mark whereby the remaining physical space cannot be exceeded. For example, if there is only 4% of physical space left and the reserved percentage is 5, the reported free space equates to zero. This is a safety mechanism that prevents a scenario where a provisioning request fails due to insufficient raw space.

The parameters max_over_subscription_ratio and reserved_percentage are optional.

To set these parameters, go to the configuration group of the volume type in /etc/cinder/cinder.conf:

[VMAX_ISCSI_SILVER]
cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_ISCSI_SILVER.xml
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
volume_backend_name = VMAX_ISCSI_SILVER
max_over_subscription_ratio = 2.0
reserved_percentage = 10
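The interplay between reserved_percentage and remaining physical space can be sketched as follows; reported_free is a hypothetical helper for illustration, not the actual scheduler code.

```python
def reported_free(total_gb, free_gb, reserved_percentage):
    # Capacity below the reserved high water mark is withheld from the
    # scheduler, so provisioning stops before raw space runs out.
    reserved_gb = total_gb * reserved_percentage / 100.0
    return max(free_gb - reserved_gb, 0.0)

# With 4% of a 1000 GB pool left (40 GB) and reserved_percentage = 5,
# the reported free space is 0 and provisioning requests are refused.
```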

For the second iteration of oversubscription support, the EMCMaxSubscriptionPercent property on the pool is taken into account. This value is the highest percentage to which a pool can be oversubscribed.

Scenario 1

If EMCMaxSubscriptionPercent is 200 and the user-defined max_over_subscription_ratio is 2.5, the latter is ignored. Oversubscription is 200%.

Scenario 2

If EMCMaxSubscriptionPercent is 200 and the user-defined max_over_subscription_ratio is 1.5, then 1.5 equates to 150%, which is less than the value set on the pool. Oversubscription is 150%.

Scenario 3

EMCMaxSubscriptionPercent is 0, which means there is no upper limit on the pool. The user-defined max_over_subscription_ratio is 1.5. Oversubscription is 150%.

Scenario 4

EMCMaxSubscriptionPercent is 0 and max_over_subscription_ratio is not set by the user. We recommend defaulting to the upper limit, which is 150%.

Note

If FAST is set and multiple pools are associated with a FAST policy, then the same rules apply. The difference is that the TotalManagedSpace and EMCSubscribedCapacity values for each pool associated with the FAST policy are aggregated.

Scenario 5

EMCMaxSubscriptionPercent is 200 on one pool and 300 on another pool. The user-defined max_over_subscription_ratio is 2.5. Oversubscription is 200% on the first pool and 250% on the other.
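The scenarios above can be summarized by a small function. This is an illustrative sketch of the stated rules applied per pool; effective_oversubscription is a hypothetical name, not driver code.

```python
def effective_oversubscription(emc_max_subscription_percent,
                               max_over_subscription_ratio=None):
    # Recommended default upper limit when the user sets no ratio.
    DEFAULT_PERCENT = 150
    user_percent = (max_over_subscription_ratio * 100
                    if max_over_subscription_ratio is not None
                    else DEFAULT_PERCENT)
    if emc_max_subscription_percent == 0:
        # 0 means the pool imposes no upper limit.
        return user_percent
    # Otherwise the pool's limit caps the user-defined ratio.
    return min(user_percent, emc_max_subscription_percent)
```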

QoS (Quality of Service) support

Quality of service (QoS) has traditionally been associated with network bandwidth usage. Network administrators set limitations on certain networks in terms of bandwidth usage for clients, which enables them to provide a tiered level of service based on cost. Cinder QoS offers similar functionality: based on volume type, it sets limits on host storage bandwidth per service offering. Each volume type is tied to specific QoS attributes that are unique to each storage vendor. The VMAX plug-in offers limits via the following attributes:

  • By I/O limit per second (IOPS)
  • By limiting throughput per second (MB/s)
  • Dynamic distribution
  • The VMAX offers modification of QoS at the Storage Group level
USE CASE 1 - Default values

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
maxIOPS 4000
maxMBPS 4000
DistributionType Always
  1. Create QoS Specs with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver maxIOPS=4000 maxMBPS=4000 DistributionType=Always
    
  2. Associate QoS specs with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - 4000
  • Host I/O Limit (IO/Sec) - 4000
  • Set Dynamic Distribution - Always

Outcome - Block Storage (cinder)

Volume is created against volume type and QoS is enforced with the parameters above.

USE CASE 2 - Preset limits

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - 2000
  • Host I/O Limit (IO/Sec) - 2000
  • Set Dynamic Distribution - Never
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
maxIOPS 4000
maxMBPS 4000
DistributionType Always
  1. Create QoS specifications with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver maxIOPS=4000 maxMBPS=4000 DistributionType=Always
    
  2. Associate QoS specifications with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - 4000
  • Host I/O Limit (IO/Sec) - 4000
  • Set Dynamic Distribution - Always

Outcome - Block Storage (cinder)

Volume is created against volume type and QoS is enforced with the parameters above.

USE CASE 3 - Preset limits

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
DistributionType Always
  1. Create QoS specifications with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver DistributionType=Always
    
  2. Associate QoS specifications with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A

Outcome - Block Storage (cinder)

Volume is created against volume type and there is no QoS change.

USE CASE 4 - Preset limits

Prerequisites - VMAX

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A
Prerequisites - Block Storage (cinder) back end (storage group)
Key Value
DistributionType OnFailure
  1. Create QoS specifications with the prerequisite values above:

    cinder qos-create <name> <key=value> [<key=value> ...]
    
    $ cinder qos-create silver DistributionType=OnFailure
    
  2. Associate QoS specifications with specified volume type:

    cinder qos-associate <qos_specs id> <volume_type_id>
    
    $ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
    
  3. Create volume with the volume type indicated above:

    cinder create [--name <name>]  [--volume-type <volume-type>] size
    
    $ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
    

Outcome - VMAX (storage group)

  • Host I/O Limit (MB/Sec) - No Limit
  • Host I/O Limit (IO/Sec) - No Limit
  • Set Dynamic Distribution - N/A

Outcome - Block Storage (cinder)

Volume is created against volume type and there is no QoS change.

iSCSI multipathing support
  • Install open-iscsi on all nodes on your system
  • Do not install EMC PowerPath, as it cannot co-exist with native multipath software
  • Multipath tools must be installed on all nova compute nodes

On Ubuntu:

# apt-get install open-iscsi           #ensure iSCSI is installed
# apt-get install multipath-tools      #multipath modules
# apt-get install sysfsutils sg3-utils #file system utilities
# apt-get install scsitools            #SCSI tools

On openSUSE and SUSE Linux Enterprise Server:

# zypper install open-iscsi           #ensure iSCSI is installed
# zypper install multipath-tools      #multipath modules
# zypper install sysfsutils sg3-utils #file system utilities
# zypper install scsitools            #SCSI tools

On Red Hat Enterprise Linux and CentOS:

# yum install iscsi-initiator-utils   #ensure iSCSI is installed
# yum install device-mapper-multipath #multipath modules
# yum install sysfsutils sg3-utils    #file system utilities
# yum install scsitools               #SCSI tools
Multipath configuration file

The multipath configuration file may be edited for better management and performance. Log in as a privileged user and make the following changes to /etc/multipath.conf on the Compute (nova) node(s).

devices {
# Device attributed for EMC VMAX
    device {
            vendor "EMC"
            product "SYMMETRIX"
            path_grouping_policy multibus
            getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"
            path_selector "round-robin 0"
            path_checker tur
            features "0"
            hardware_handler "0"
            prio const
            rr_weight uniform
            no_path_retry 6
            rr_min_io 1000
            rr_min_io_rq 1
    }
}

You may need to reboot the host after installing the MPIO tools or restart iSCSI and multipath services.

On Ubuntu:

# service open-iscsi restart
# service multipath-tools restart

On openSUSE, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and CentOS:

# systemctl restart open-iscsi
# systemctl restart multipath-tools
$ lsblk
NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                          8:0    0     1G  0 disk
..360000970000196701868533030303235 (dm-6) 252:6    0     1G  0 mpath
sdb                                          8:16   0     1G  0 disk
..360000970000196701868533030303235 (dm-6) 252:6    0     1G  0 mpath
vda                                        253:0    0     1T  0 disk
OpenStack configurations

On Compute (nova) node, add the following flag in the [libvirt] section of /etc/nova/nova.conf:

iscsi_use_multipath = True

On the cinder controller node, set the multipath flag to true in /etc/cinder/cinder.conf:

use_multipath_for_image_xfer = True

Restart nova-compute and cinder-volume services after the change.

Verify you have multiple initiators available on the compute node for I/O
  1. Create a 3GB VMAX volume.

  2. Create an instance from an image on native LVM storage or from VMAX storage, for example, from a bootable volume.

  3. Attach the 3GB volume to the new instance:

    $ multipath -ll
    mpath102 (360000970000196700531533030383039) dm-3 EMC,SYMMETRIX
    size=3G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=1 status=active
      |- 33:0:0:1 sdb 8:16 active ready running
      `- 34:0:0:1 sdc 8:32 active ready running
    
  4. Use the lsblk command to see the multipath device:

    $ lsblk
    NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sdb                                          8:0    0     3G  0 disk
    ..360000970000196700531533030383039 (dm-6) 252:6    0     3G  0 mpath
    sdc                                          8:16   0     3G  0 disk
    ..360000970000196700531533030383039 (dm-6) 252:6    0     3G  0 mpath
    vda
    
Consistency group support

Consistency Groups operations are performed through the CLI using v2 of the cinder API.

/etc/cinder/policy.json may need to be updated to enable new API calls for Consistency groups.

Note

Even though the terminology is ‘Consistency Group’ in OpenStack, a Storage Group is created on the VMAX, and should not be confused with a VMAX Consistency Group which is an SRDF construct. The Storage Group is not associated with any FAST policy.

Operations
  • Create a Consistency Group:

    cinder --os-volume-api-version 2 consisgroup-create [--name <name>]
    [--description <description>] [--availability-zone <availability-zone>]
    <volume-types>
    
    $ cinder --os-volume-api-version 2 consisgroup-create --name bronzeCG2 volume_type_1
    
  • List Consistency Groups:

    cinder consisgroup-list [--all-tenants [<0|1>]]
    
    $ cinder consisgroup-list
    
  • Show a Consistency Group:

    cinder consisgroup-show <consistencygroup>
    
    $ cinder consisgroup-show 38a604b7-06eb-4202-8651-dbf2610a0827
    
  • Update a consistency Group:

    cinder consisgroup-update [--name <name>] [--description <description>]
    [--add-volumes <uuid1,uuid2,......>] [--remove-volumes <uuid3,uuid4,......>]
    <consistencygroup>
    

    Change name:

    $ cinder consisgroup-update --name updated_name 38a604b7-06eb-4202-8651-dbf2610a0827
    

    Add volume(s) to a Consistency Group:

    $ cinder consisgroup-update --add-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827
    

    Delete volume(s) from a Consistency Group:

    $ cinder consisgroup-update --remove-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827
    
  • Create a snapshot of a Consistency Group:

    cinder cgsnapshot-create [--name <name>] [--description <description>]
    <consistencygroup>
    
    $ cinder cgsnapshot-create 618d962d-2917-4cca-a3ee-9699373e6625
    
  • Delete a snapshot of a Consistency Group:

    cinder cgsnapshot-delete <cgsnapshot> [<cgsnapshot> ...]
    
    $ cinder cgsnapshot-delete 618d962d-2917-4cca-a3ee-9699373e6625
    
  • Delete a Consistency Group:

    cinder consisgroup-delete [--force] <consistencygroup> [<consistencygroup> ...]
    
    $ cinder consisgroup-delete --force 618d962d-2917-4cca-a3ee-9699373e6625
    
  • Create a Consistency group from source (the source can only be a CG snapshot):

    cinder consisgroup-create-from-src [--cgsnapshot <cgsnapshot>]
    [--source-cg <source-cg>] [--name <name>] [--description <description>]
    
    $ cinder consisgroup-create-from-src --source-cg 25dae184-1f25-412b-b8d7-9a25698fdb6d
    
  • You can also create a volume in a consistency group in one step:

    cinder create [--consisgroup-id <consistencygroup-id>] [--name <name>]
    [--description <description>] [--volume-type <volume-type>]
    [--availability-zone <availability-zone>] <size>
    
    $ cinder create --volume-type volume_type_1 --name cgBronzeVol --consisgroup-id 1de80c27-3b2f-47a6-91a7-e867cbe36462 1
    
Workload Planner (WLP)

VMAX Hybrid allows you to manage application storage by using Service Level Objectives (SLO) with policy-based automation rather than the tiering of the VMAX2. The VMAX Hybrid comes with up to six SLO policies defined. Each has a set of workload characteristics that determine the drive types and mixes to be used for the SLO. All storage in the VMAX array is virtually provisioned, and all of the pools are created in containers called Storage Resource Pools (SRP). Typically there is only one SRP, although there can be more. Therefore, it is the same pool we provision to, but we can provide different SLO/Workload combinations.

The SLO capacity is retrieved by interfacing with Unisphere Workload Planner (WLP). If you do not set up this relationship then the capacity retrieved is that of the entire SRP. This can cause issues as it can never be an accurate representation of what storage is available for any given SLO and Workload combination.

Enabling WLP on Unisphere
  1. To enable WLP on Unisphere, select the array, then click Performance ‣ Settings.
  2. Enable both the Real Time and the Root Cause Analysis options.
  3. Click Register.

Note

This should be set up ahead of time (allowing for several hours of data collection), so that the Unisphere for VMAX Performance Analyzer can collect rated metrics for each of the supported element types.

Using TestSmiProvider to add statistics access point

After enabling WLP you must then enable SMI-S to gain access to the WLP data:

  1. Connect to the SMI-S Provider using TestSmiProvider.

  2. Navigate to the Active menu.

  3. Type reg and enter the noted responses to the questions:

    (EMCProvider:5989) ? reg
    Current list of statistics Access Points: ?
    Note: The current list will be empty if there are no existing Access Points.
    Add Statistics Access Point {y|n} [n]: y
    HostID [l2se0060.lss.emc.com]: ?
    Note: Enter the Unisphere for VMAX location using a fully qualified Host ID.
    Port [8443]: ?
    Note: The Port default is the Unisphere for VMAX default secure port. If the secure port
    is different for your Unisphere for VMAX setup, adjust this value accordingly.
    User [smc]: ?
    Note: Enter the Unisphere for VMAX username.
    Password [smc]: ?
    Note: Enter the Unisphere for VMAX password.
    
  4. Type reg again to view the current list:

    (EMCProvider:5988) ? reg
    Current list of statistics Access Points:
    HostIDs:
    l2se0060.lss.emc.com
    PortNumbers:
    8443
    Users:
    smc
    Add Statistics Access Point {y|n} [n]: n
    
EMC VNX driver

The EMC VNX driver interacts with a configured VNX array and supports both the iSCSI and FC protocols.

The VNX cinder driver performs volume operations by executing Navisphere CLI (NaviSecCLI), a command-line interface used for management, diagnostics, and reporting functions for VNX.

System requirements
  • VNX Operational Environment for Block version 5.32 or higher.
  • VNX Snapshot and Thin Provisioning license should be activated for VNX.
  • Python library storops to interact with VNX.
  • Navisphere CLI v7.32 or higher is installed along with the driver.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Retype a volume.
  • Get volume statistics.
  • Create and delete consistency groups.
  • Create, list, and delete consistency group snapshots.
  • Modify consistency groups.
  • Efficient non-disruptive volume backup.
  • Create a cloned consistency group.
  • Create a consistency group from consistency group snapshots.
  • Replication v2.1 support.
Preparation

This section contains instructions to prepare the Block Storage nodes to use the EMC VNX driver. You should install the Navisphere CLI and ensure you have correct zoning configurations.

Install Navisphere CLI

Navisphere CLI needs to be installed on all Block Storage nodes within an OpenStack deployment. You need to download different versions for different platforms:

Install Python library storops

storops is a Python library that interacts with VNX array through Navisphere CLI. Use the following command to install the storops library:

$ pip install storops
Check array software

Make sure you have the following software installed for certain features:

Feature                                 Software Required
All                                     ThinProvisioning
All                                     VNXSnapshots
FAST cache support                      FASTCache
Create volume with type compressed      Compression
Create volume with type deduplicated    Deduplication

Required software

You can check the status of your array software on the Software page of Storage System Properties:

[Figure: Storage System Properties — Software page showing the enabled array software]
Network configuration

For the FC Driver, make sure FC zoning is properly configured between the hosts and the VNX. Check Register FC port with VNX for reference.

For the iSCSI Driver, make sure your VNX iSCSI port is accessible by your hosts. Check Register iSCSI port with VNX for reference.

You can use initiator_auto_registration = True configuration to avoid registering the ports manually. Check the detail of the configuration in Back-end configuration for reference.

If you are trying to set up multipath, refer to Multipath setup.

Back-end configuration

Make the following changes in the /etc/cinder/cinder.conf file.

Minimum configuration

Here is a sample of a minimal back-end configuration. See the following sections for details of each option. Set storage_protocol = iscsi if the iSCSI protocol is used.

[DEFAULT]
enabled_backends = vnx_array1

[vnx_array1]
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc
Multiple back-end configuration

Here is a sample of a multiple back-end configuration. See the following sections for details of each option. Set storage_protocol = iscsi if the iSCSI protocol is used.

[DEFAULT]
enabled_backends = backendA, backendB

[backendA]
storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc

[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc

The value of the storage_protocol option can be either fc or iscsi, and it is case insensitive.

For more details on multiple back ends, see Configure multiple-storage back ends

Required configurations

IP of the VNX Storage Processors

Specify SP A or SP B IP to connect:

san_ip = <IP of VNX Storage Processor>

VNX login credentials

There are two ways to specify the credentials.

  • Use plain text username and password.

    Supply the plain-text username and password:

    san_login = <VNX account with administrator role>
    san_password = <password for VNX account>
    storage_vnx_authentication_type = global
    

    Valid values for storage_vnx_authentication_type are: global (default), local, and ldap.

  • Use Security file.

    This approach avoids having the plain-text password in your cinder configuration file. Supply a security file as follows:

    storage_vnx_security_file_dir = <path to security file>
    

Check Unisphere CLI user guide or Authenticate by security file for how to create a security file.

Path to your Unisphere CLI

Specify the absolute path to your naviseccli:

naviseccli_path = /opt/Navisphere/bin/naviseccli

Driver’s storage protocol

  • For the FC Driver, add the following option:

    volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
    storage_protocol = fc
    
  • For iSCSI Driver, add the following option:

    volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
    storage_protocol = iscsi
    
Optional configurations
VNX pool names

Specify the list of pools to be managed, separated by commas. They should already exist in VNX.

storage_vnx_pool_names = pool 1, pool 2

If this value is not specified, all pools of the array will be used.

Initiator auto registration

When initiator_auto_registration is set to True and the option io_port_list is not specified in the cinder.conf file, the driver automatically registers initiators to all working target ports of the VNX array during volume attach. (The driver skips initiators that have already been registered.)

If the user wants to register the initiators with some specific ports only, this functionality should be disabled.

When a comma-separated list is given to io_port_list, the driver only registers the initiator to the ports specified in the list and only returns target ports that belong to the io_port_list, instead of all target ports.

  • Example for FC ports:

    io_port_list = a-1,B-3
    

    a or B is the Storage Processor; 1 and 3 are the Port IDs.

  • Example for iSCSI ports:

    io_port_list = a-1-0,B-3-0
    

    a or B is the Storage Processor; the first numbers, 1 and 3, are the Port IDs; and the second number, 0, is the Virtual Port ID.

Note

  • Registered ports are simply bypassed rather than deregistered, whether or not they appear in io_port_list.
  • During startup, the driver raises an exception if a port in io_port_list does not exist on the VNX.
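The io_port_list entry formats described above can be sketched with a small parser; parse_io_port is a hypothetical helper for illustration only, not part of the VNX driver.

```python
def parse_io_port(entry):
    # FC entries look like "a-1" (SP, port ID); iSCSI entries look like
    # "a-1-0" (SP, port ID, virtual port ID).
    parts = entry.strip().split("-")
    if len(parts) == 2:
        sp, port = parts
        return {"sp": sp.upper(), "port_id": int(port)}
    if len(parts) == 3:
        sp, port, vport = parts
        return {"sp": sp.upper(), "port_id": int(port), "vport_id": int(vport)}
    raise ValueError("unrecognized io_port_list entry: %s" % entry)
```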
Force delete volumes in storage group

Some available volumes may remain in a storage group on the VNX array due to OpenStack timeout issues, but the VNX array does not allow the user to delete volumes which are in a storage group. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete the available volumes in this tricky situation.

When force_delete_lun_in_storagegroup is set to True in the back-end section, and the user tries to delete volumes that remain in a storage group on the VNX array, the driver moves the volumes out of their storage groups and then deletes them.

The default value of force_delete_lun_in_storagegroup is False.

Over subscription in thin provisioning

Oversubscription allows the sum of all volumes' capacity (the provisioned capacity) to be larger than the pool's total capacity.

max_over_subscription_ratio in the back-end section is the ratio of provisioned capacity over total capacity.

The default value of max_over_subscription_ratio is 20.0, which means the provisioned capacity can be 20 times the total capacity. If this ratio is set larger than 1.0, the provisioned capacity can exceed the total capacity.
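The ratio arithmetic can be sketched with hypothetical numbers (a 500 GB pool and the default ratio of 20):

```shell
# Hypothetical example: maximum provisioned capacity for a pool,
# given the default max_over_subscription_ratio of 20.
total_capacity_gb=500
max_over_subscription_ratio=20
provisioned_limit_gb=$((total_capacity_gb * max_over_subscription_ratio))
echo "$provisioned_limit_gb"
```

With these numbers, up to 10000 GB may be provisioned from the 500 GB pool.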

Storage group automatic deletion

For volume attaching, the driver maintains a storage group on the VNX for each compute node that hosts the VM instances consuming VNX Block Storage (the compute node's host name is used as the storage group's name). All the volumes attached to the VM instances on a compute node are put into that storage group. If destroy_empty_storage_group is set to True, the driver removes the empty storage group after its last volume is detached. For data safety, setting destroy_empty_storage_group=True is not recommended unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required to synchronize this operation.

Initiator auto deregistration

Enabling storage group automatic deletion is the precondition of this function. If initiator_auto_deregistration is set to True, the driver deregisters all FC and iSCSI initiators of the host after its storage group is deleted.
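Since automatic deletion is the precondition, the two options typically appear together in the back-end section (the section name is illustrative):

```ini
[vnx_backend]
# Remove the empty storage group after its last volume is detached
destroy_empty_storage_group = True
# Deregister the host's FC/iSCSI initiators once its storage group is deleted
initiator_auto_deregistration = True
```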

FC SAN auto zoning

The EMC VNX driver supports FC SAN auto zoning when ZoneManager is configured and zoning_mode is set to fabric in cinder.conf. For ZoneManager configuration, refer to Fibre Channel Zone Manager.

Volume number threshold

In VNX, there is a limitation on the number of pool volumes that can be created in the system. When the limitation is reached, no more pool volumes can be created even if there is remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but reaches the volume limitation, the creation fails.

The default value of check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end checks the limit and reports 0 free capacity to the scheduler if the limit is reached, so the scheduler can skip pool-based back ends that have run out of pool volume numbers.

iSCSI initiators

iscsi_initiators is a dictionary mapping host names to the IP addresses of the iSCSI initiator ports on the OpenStack Compute and Block Storage nodes that connect to the VNX via iSCSI. If this option is configured, the driver uses this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal is chosen more or less at random.

Note

This option is only valid for the iSCSI driver.

Here is an example: VNX connects host1 with 10.0.0.1 and 10.0.0.2, and connects host2 with 10.0.0.3.

The key name (host1 in the example) should be the output of the hostname command.

iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}
Default timeout

Specify the timeout in minutes for operations such as LUN migration and LUN creation. For example, LUN migration is a typical long-running operation whose duration depends on the LUN size and the load of the array. Set an upper bound appropriate for the specific deployment to avoid unnecessarily long waits.

The default value for this option is infinite.

default_timeout = 60
Max LUNs per storage group

max_luns_per_storage_group specifies the maximum number of LUNs in a storage group. The default value is 255, which is also the maximum value supported by VNX.

Ignore pool full threshold

If ignore_pool_full_threshold is set to True, the driver forces LUN creation even if the full threshold of the pool is reached. The default is False.
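The options described in the preceding sections can be collected into one back-end section; the following is an illustrative sketch (the section name and values are examples, not recommendations):

```ini
[vnx_backend]
# Report 0 free capacity when the pool volume limit is reached
check_max_pool_luns_threshold = True
# Upper bound in minutes for long-running operations such as LUN migration
default_timeout = 60
# Maximum number of LUNs in a storage group (255 is also the VNX maximum)
max_luns_per_storage_group = 255
# Force LUN creation even when the pool full threshold is reached
ignore_pool_full_threshold = False
```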

Extra spec options

Extra specs are used in volume types created in Block Storage as the preferred property of the volume.

The Block Storage scheduler will use extra specs to find the suitable back end for the volume and the Block Storage driver will create the volume based on the properties specified by the extra spec.

Use the following command to create a volume type:

$ cinder type-create "demoVolumeType"

Use the following command to update the extra spec of a volume type:

$ cinder type-key "demoVolumeType" set provisioning:type=thin thick_provisioning_support='<is> True'

The following sections describe the VNX extra keys.

Provisioning type
  • Key: provisioning:type

  • Possible Values:

    • thick

      Volume is fully provisioned.

      Run the following commands to create a thick volume type:

      $ cinder type-create "ThickVolumeType"
      $ cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='<is> True'
      
    • thin

      Volume is virtually provisioned.

      Run the following commands to create a thin volume type:

      $ cinder type-create "ThinVolumeType"
      $ cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='<is> True'
      
    • deduplicated

Volume is thin and deduplication is enabled. The administrator must configure the system-level deduplication settings on the VNX. To create a deduplicated volume, the VNX Deduplication license must be activated on the VNX, and deduplication_support='<is> True' must be specified to let the Block Storage scheduler find the proper volume back end.

      Run the following commands to create a deduplicated volume type:

      $ cinder type-create "DeduplicatedVolumeType"
      $ cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='<is> True'
      
    • compressed

Volume is thin and compression is enabled. The administrator must configure the system-level compression settings on the VNX. To create a compressed volume, the VNX Compression license must be activated on the VNX, and compression_support='<is> True' must be specified to let the Block Storage scheduler find a volume back end. VNX does not support creating snapshots on a compressed volume.

      Run the following commands to create a compressed volume type:

      $ cinder type-create "CompressedVolumeType"
      $ cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='<is> True'
      
  • Default: thick

Note

provisioning:type replaces the old spec key storagetype:provisioning, which has been obsolete since the Mitaka release.

Storage tiering support
  • Key: storagetype:tiering
  • Possible values:
    • StartHighThenAuto
    • Auto
    • HighestAvailable
    • LowestAvailable
    • NoMovement
  • Default: StartHighThenAuto

VNX supports fully automated storage tiering, which requires the FAST license to be activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume, and use the key fast_support='<is> True' to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated. The five supported values for storagetype:tiering are listed above.

Run the following commands to create a volume type with tiering policy:

$ cinder type-create "ThinVolumeOnAutoTier"
$ cinder type-key "ThinVolumeOnAutoTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='<is> True'

Note

The tiering policy cannot be applied to a deduplicated volume. The tiering policy of a deduplicated LUN aligns with the settings of the pool.

FAST cache support
  • Key: fast_cache_enabled
  • Possible values:
    • True
    • False
  • Default: False

VNX has the FAST Cache feature, which requires the FAST Cache license to be activated on the VNX. The volume is created on a back end with FAST Cache enabled when '<is> True' is specified.
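Following the pattern of the provisioning examples above, a volume type with FAST Cache enabled could be created as follows (the type name FASTCacheType is illustrative):

```shell
$ cinder type-create "FASTCacheType"
$ cinder type-key "FASTCacheType" set fast_cache_enabled='<is> True'
```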

Pool name
  • Key: pool_name
  • Possible values: name of the storage pool managed by cinder
  • Default: None

To create a volume on a certain storage pool in a back end that manages multiple pools, first create a volume type whose extra spec specifies the storage pool, then use this volume type to create the volume.

Run the following commands to create the volume type:

$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
Obsolete extra specs

Note

DO NOT use the following obsolete extra spec keys:

  • storagetype:provisioning
  • storagetype:pool
Advanced features
Snap copy
  • Metadata Key: snapcopy
  • Possible Values:
    • True or true
    • False or false
  • Default: False

The VNX driver supports snap copy, which accelerates the process of creating a copied volume.

By default, the driver does a full data copy when creating a volume from a snapshot or cloning a volume. This is time-consuming, especially for large volumes. When snap copy is used, the driver creates a snapshot and mounts it as a volume for these two kinds of operations, which makes them nearly instant even for large volumes.

To enable this functionality, append --metadata snapcopy=True when creating a cloned volume or creating a volume from a snapshot.

$ cinder create --source-volid <source-volid> --name "cloned_volume" --metadata snapcopy=True

Or

$ cinder create --snapshot-id <snapshot-id> --name "vol_from_snapshot" --metadata snapcopy=True

The newly created volume is a snap copy instead of a full copy. If a full copy is needed, retype or migrate can be used to convert the snap-copy volume to a full-copy volume which may be time-consuming.

You can determine whether a volume is a snap-copy volume by showing its metadata: if snapcopy in the metadata is True or true, the volume is a snap-copy volume; otherwise, it is a full-copy volume.

$ cinder metadata-show <volume>

Constraints

  • The number of snap-copy volumes created from a single source volume is limited to 255 at one point in time.
  • A source volume that has snap-copy volumes cannot be deleted or migrated.
  • A snap-copy volume changes to a full-copy volume after host-assisted or storage-assisted migration.
  • A snap-copy volume cannot be added to a consistency group because of a VNX limitation.
Efficient non-disruptive volume backup

The default implementation in Block Storage for non-disruptive volume backup is not efficient, since a cloned volume is created during backup.

The efficient backup approach is to create a snapshot of the volume and connect this snapshot (a mount point in VNX) to the Block Storage host for volume backup. This eliminates the migration time involved in volume cloning.

Constraints

  • Backup creation for a snap-copy volume is not allowed if the volume status is in-use, since a snapshot cannot be taken from such a volume.
Configurable migration rate

The VNX cinder driver leverages LUN migration on the VNX. LUN migration is involved in cloning, migrating, retyping, and creating a volume from a snapshot. When the administrator sets migrate_rate in the volume's metadata, the VNX driver starts the migration with the specified rate. The available values for migrate_rate are high, asap, low, and medium.

The following is an example to set migrate_rate to asap:

$ cinder metadata <volume-id> set migrate_rate=asap

After it is set, any cinder volume operation involving VNX LUN migration takes this value as the migration rate. To restore the migration rate to the default, unset the metadata as follows:

$ cinder metadata <volume-id> unset migrate_rate

Note

Do not use the asap migration rate when the system is in production, as the normal host I/O may be interrupted. Use asap only when the system is offline (free of any host-level I/O).

Replication v2.1 support

Cinder introduced Replication v2.1 support in Mitaka. It supports fail-over and fail-back replication for a specific back end. In the VNX cinder driver, MirrorView is used to set up replication for the volume.

To enable this feature, you need to set configuration in cinder.conf as below:

replication_device = backend_id:<secondary VNX serial number>,
                     san_ip:192.168.1.2,
                     san_login:admin,
                     san_password:admin,
                     naviseccli_path:/opt/Navisphere/bin/naviseccli,
                     storage_vnx_authentication_type:global,
                     storage_vnx_security_file_dir:

Currently, only synchronized-mode MirrorView is supported, and one volume can have only one secondary storage system. Therefore, you can have only one replication_device present in the driver configuration section.

To create a replication enabled volume, you need to create a volume type:

$ cinder type-create replication-type
$ cinder type-key replication-type set replication_enabled="<is> True"

Then create a volume with the above volume type:

$ cinder create --volume-type replication-type --name replication-volume 1

Supported operations

  • Create volume

  • Create cloned volume

  • Create volume from snapshot

  • Fail-over volume:

    $ cinder failover-host --backend_id <secondary VNX serial number> <hostname>
    
  • Fail-back volume:

    $ cinder failover-host --backend_id default <hostname>
    

Requirements

  • The two VNX systems must be in the same domain.
  • For iSCSI MirrorView, the user needs to set up an iSCSI connection before enabling replication in Cinder.
  • For FC MirrorView, the user needs to zone specific FC ports from the two VNX systems together.
  • The MirrorView Sync enabler (MirrorView/S) must be installed on both systems.
  • The write intent log must be enabled on both VNX systems.

For more information on how to configure, please refer to: MirrorView-Knowledgebook:-Releases-30-–-33

Best practice
Multipath setup

Enabling multipath volume access is recommended for robust data access. The major configuration includes:

  1. Install multipath-tools, sysfsutils and sg3-utils on the nodes hosting the Nova-Compute and Cinder-Volume services. Check the operating system manual of your distribution for specific installation steps. On Red Hat based distributions, the packages are device-mapper-multipath, sysfsutils and sg3_utils.
  2. Specify use_multipath_for_image_xfer=true in the cinder.conf file for each FC/iSCSI back end.
  3. Specify iscsi_use_multipath=True in the libvirt section of the nova.conf file. This option is valid for both the iSCSI and FC drivers.
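The cinder.conf and nova.conf fragments for the last two steps look like this (the back-end section name is illustrative):

```ini
# /etc/cinder/cinder.conf
[vnx_backend]
use_multipath_for_image_xfer = true

# /etc/nova/nova.conf
[libvirt]
iscsi_use_multipath = True
```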

For multipath-tools, here is an EMC recommended sample of /etc/multipath.conf file.

user_friendly_names is not specified in the configuration and thus takes the default value no. Setting it to yes is not recommended because it may cause operations such as VM live migration to fail.

blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different system may need different customization
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

    # Skip LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
        }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
}

devices {
    # Device attributes for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}

Note

When multipath is used in OpenStack, multipath faulty devices may come out in Nova-Compute nodes due to different issues (Bug 1336683 is a typical example).

A solution that completely avoids faulty devices has not been found yet. faulty_device_cleanup.py mitigates this issue when VNX iSCSI storage is used. Cloud administrators can deploy the script on all Nova-Compute nodes and use a cron job to run it periodically on each node so that faulty devices do not stay around too long. Refer to: VNX faulty device cleanup for detailed usage and the script.
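As a sketch, assuming the script has been deployed to a hypothetical path such as /usr/local/bin/faulty_device_cleanup.py, a cron entry could look like the following (the path and the 10-minute interval are assumptions, not part of the script's documentation):

```shell
# Hypothetical crontab entry; the script path and interval are assumptions.
# Runs the cleanup script every 10 minutes on each Nova-Compute node.
*/10 * * * * /usr/bin/python /usr/local/bin/faulty_device_cleanup.py
```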

Restrictions and limitations
iSCSI port cache

The EMC VNX iSCSI driver caches iSCSI port information. After changing the iSCSI port configuration, the user should restart the cinder-volume service, or wait for the cache to refresh (the interval is configured by periodic_interval in the cinder.conf file), before any volume attachment operation. Otherwise the attachment may fail because stale iSCSI port configurations are used.
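For reference, periodic_interval lives in the [DEFAULT] section of cinder.conf and is expressed in seconds (60 is a typical value):

```ini
[DEFAULT]
# Interval (in seconds) between periodic tasks; bounds how long stale
# iSCSI port information may remain cached.
periodic_interval = 60
```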

No extending for volume with snapshots

VNX does not support extending a thick volume that has a snapshot. If the user tries to extend such a volume, the status of the volume changes to error_extending.

Limitations for deploying cinder on compute node

It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume, because doing so terminates the VM instance's data access to the volume.

Storage group with host names in VNX

When the driver notices that there is no existing storage group that has the host name as the storage group name, it will create the storage group and also add the compute node’s or Block Storage node’s registered initiators into the storage group.

If the driver notices that the storage group already exists, it will assume that the registered initiators have also been put into it and skip the operations above for better performance.

It is recommended that the storage administrator not create the storage group manually and instead rely on the driver for the preparation. If the storage administrator needs to create the storage group manually for some special requirement, the correct registered initiators should be put into the storage group as well; otherwise, subsequent volume attach operations will fail.

EMC storage-assisted volume migration

The EMC VNX driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False <volume_id> <host> or cinder migrate <volume_id> <host>, cinder tries to leverage the VNX's native volume migration functionality.

In the following scenarios, VNX storage-assisted volume migration will not be triggered:

  • In-use volume migration between back ends with different storage protocols, for example, FC and iSCSI.
  • The volume is to be migrated across arrays.
Appendix
Authenticate by security file

VNX credentials are necessary when the driver connects to the VNX system. Credentials in the global, local, and ldap scopes are supported. There are two approaches to providing the credentials.

The recommended approach is to use a Navisphere CLI security file, which avoids placing plain-text credentials in the configuration file. The following instructions describe how to do this.

  1. Find out the Linux user ID of the cinder-volume processes. Assume the cinder-volume service runs under the account cinder.

  2. Run su as root user.

  3. In the /etc/passwd file, change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash. (This temporary change makes step 4 work.)

  4. Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the -secfilepath switch specifies the location where the security file is saved.

    # su -l cinder -c '/opt/Navisphere/bin/naviseccli \
      -AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'
    
  5. Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in /etc/passwd file.

  6. Remove the credential options san_login, san_password and storage_vnx_authentication_type from the cinder.conf file (normally /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path of the security file generated in the above step. Omit this option if -secfilepath was not used in the above step.

  7. Restart the cinder-volume service to validate the change.
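After step 6, the relevant part of cinder.conf might look like the following (the back-end section name and the directory path are illustrative; use the path given to -secfilepath in step 4):

```ini
[vnx_backend]
# san_login, san_password and storage_vnx_authentication_type are removed;
# credentials now come from the security file stored in this directory.
storage_vnx_security_file_dir = /etc/secfile/array1
```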

Register FC port with VNX

This configuration is only required when initiator_auto_registration=False.

To access VNX storage, the Compute nodes should be registered on VNX first if initiator auto registration is not enabled.

To perform Copy Image to Volume and Copy Volume to Image operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.

The steps below are for the compute nodes; follow the same steps for the Block Storage nodes as well. (The steps can be skipped if initiator auto registration is enabled.)

  1. Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of an FC initiator port of the compute node whose host name and IP are myhost1 and 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
  2. Log in to Unisphere, go to FNM0000000000 > Hosts > Initiators.
  3. Refresh and wait until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
  4. Click the Register button, select CLARiiON/VNX and enter the host name (which is the output of the hostname command) and IP address:
    • Hostname: myhost1
    • IP: 10.10.61.1
    • Click Register.
  5. Then host 10.10.61.1 will appear under Hosts > Host List as well.
  6. Register the wwn with more ports if needed.
Register iSCSI port with VNX

This configuration is only required when initiator_auto_registration=False.

To access VNX storage, the compute nodes should be registered on VNX first if initiator auto registration is not enabled.

To perform Copy Image to Volume and Copy Volume to Image operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.

The steps below are for the compute nodes; follow the same steps for the Block Storage nodes as well. (The steps can be skipped if initiator auto registration is enabled.)

  1. On the compute node with IP address 10.10.61.1 and host name myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):

    1. Start the iSCSI initiator service on the node:

      # /etc/init.d/open-iscsi start
      
    2. Discover the iSCSI target portals on VNX:

      # iscsiadm -m discovery -t st -p 10.10.61.35
      
    3. Change directory to /etc/iscsi :

      # cd /etc/iscsi
      
    4. Find out the iqn of the node:

      # more initiatorname.iscsi
      
  2. Log in to VNX from the compute node using the target corresponding to the SPA port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
    
  3. Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:

    1. Log in to Unisphere, go to FNM0000000000 > Hosts > Initiators.
    2. Refresh and wait until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
    3. Click the Register button, select CLARiiON/VNX and enter the host name (which is the output of the hostname command) and IP address:
      • Hostname: myhost1
      • IP: 10.10.61.1
      • Click Register.
    4. Then host 10.10.61.1 will appear under Hosts > Host List as well.
  4. Log out iSCSI on the node:

    # iscsiadm -m node -u
    
  5. Log in to VNX from the compute node using the target corresponding to the SPB port:

    # iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
    
  6. In Unisphere, register the initiator with the SPB port.

  7. Log out iSCSI on the node:

    # iscsiadm -m node -u
    
  8. Register the iqn with more ports if needed.

EMC XtremIO Block Storage driver configuration

The high performance XtremIO All Flash Array (AFA) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtremIO Storage cluster.

This section explains how to configure and connect the block storage nodes to an XtremIO storage cluster.

Support matrix

XtremIO version 4.x is supported.

Supported operations
  • Create, delete, clone, attach, and detach volumes.
  • Create and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Extend a volume.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Get volume statistics.
  • Create, modify, delete, and list consistency groups.
  • Create, modify, delete, and list snapshots of consistency groups.
  • Create consistency group from consistency group or consistency group snapshot.
  • Volume Migration (host assisted)
XtremIO Block Storage driver configuration

Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section in the case of a single back end, or under a separate section in the case of multiple back ends (for example [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.

Description of EMC XtremIO volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
xtremio_array_busy_retry_count = 5 (Integer) Number of retries in case array is busy
xtremio_array_busy_retry_interval = 5 (Integer) Interval between retries in case array is busy
xtremio_cluster_name = (String) XMS cluster id in multi-cluster environment
xtremio_volumes_per_glance_cache = 100 (Integer) Number of volumes created from each cached glance image

For a configuration example, refer to the Configuration example.

XtremIO driver name

Configure the driver name by setting the following parameter in the cinder.conf file:

  • For iSCSI:

    volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
    
  • For Fibre Channel:

    volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
    
XtremIO management server (XMS) IP

To retrieve the management IP, use the show-xms CLI command.

Configure the management IP by adding the following parameter:

san_ip = XMS Management IP
XtremIO cluster name

In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In such setups, the administrator is required to specify the cluster name (in addition to the XMS IP). Each cluster must be defined as a separate back end.

To retrieve the cluster name, run the show-clusters CLI command.

Configure the cluster name by adding the following parameter:

xtremio_cluster_name = Cluster-Name

Note

When a single cluster is managed in XtremIO version 4.0, the cluster name is not required.

XtremIO user credentials

OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role.

Refer to the XtremIO User Guide for details on user account management.

Create an XMS account using either the XMS GUI or the add-user-account CLI command.

Configure the user credentials by adding the following parameters:

san_login = XMS username
san_password = XMS username password
Multiple back ends

Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources.

When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.
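In an XtremIO 4.0 multi-cluster setup, each cluster becomes its own back-end section in cinder.conf; the following is a sketch under that assumption (section names, cluster names, and credentials are illustrative):

```ini
[DEFAULT]
enabled_backends = XtremIO-C1,XtremIO-C2

[XtremIO-C1]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
volume_backend_name = XtremIO-C1

[XtremIO-C2]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster02
volume_backend_name = XtremIO-C2
```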

Setting thin provisioning and multipathing parameters

To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:

  • Thin Provisioning

    All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter.

    The use_cow_images parameter in the nova.conf file should be set to False as follows:

    use_cow_images = False
    
  • Multipathing

    The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:

    use_multipath_for_image_xfer = True
    
Image service optimization

Limit the number of copies (XtremIO snapshots) taken from each image cache.

xtremio_volumes_per_glance_cache = 100

The default value is 100. A value of 0 ignores the limit and defers to the array maximum as the effective limit.

SSL certification

To enable SSL certificate validation, modify the following option in the cinder.conf file:

driver_ssl_cert_verify = true

By default, SSL certificate validation is disabled.

To specify a non-default path to the CA bundle file or a directory with certificates of trusted CAs:

driver_ssl_cert_path = Certificate path
Configuring CHAP

The XtremIO Block Storage driver supports CHAP initiator authentication and discovery.

If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator.

To set the CHAP initiator mode using CLI, run the following XMCLI command:

$ modify-chap chap-authentication-mode=initiator

If CHAP initiator discovery is required, set the CHAP discovery mode to initiator.

To set the CHAP initiator discovery mode using CLI, run the following XMCLI command:

$ modify-chap chap-discovery-mode=initiator

The CHAP initiator modes can also be set via the XMS GUI.

Refer to XtremIO User Guide for details on CHAP configuration via GUI and CLI.

The CHAP initiator authentication and discovery credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS.

Configuration example

You can update the cinder.conf file by editing the necessary parameters as follows:

[DEFAULT]
enabled_backends = XtremIO

[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA
Fujitsu ETERNUS DX driver

The Fujitsu ETERNUS DX driver provides FC and iSCSI support for the ETERNUS DX S3 series.

The driver performs volume operations by communicating with ETERNUS DX. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP.

You can specify a RAID Group or a Thin Provisioning Pool (TPP) in ETERNUS DX as a storage pool.

System requirements

Supported storages:

  • ETERNUS DX60 S3
  • ETERNUS DX100 S3/DX200 S3
  • ETERNUS DX500 S3/DX600 S3
  • ETERNUS DX8700 S3/DX8900 S3
  • ETERNUS DX200F

Requirements:

  • Firmware version V10L30 or later is required.
  • A multipath environment using the ETERNUS Multipath Driver is not supported.
  • An Advanced Copy Feature license is required to create a snapshot and a clone.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume. (*1)
  • Get volume statistics.

(*1): This is executable only when you use a TPP as the storage pool.

Preparation
Package installation

Install the python-pywbem package for your distribution.

  • On Ubuntu:

    # apt-get install python-pywbem
    
  • On openSUSE:

    # zypper install python-pywbem
    
  • On Red Hat Enterprise Linux, CentOS, and Fedora:

    # yum install pywbem
    
ETERNUS DX setup

Perform the following steps using ETERNUS Web GUI or ETERNUS CLI.

Note

  • The following operations require an account that has the Admin role.
  • For detailed operations, refer to ETERNUS Web GUI User’s Guide or ETERNUS CLI User’s Guide for ETERNUS DX S3 series.
  1. Create an account for communication with cinder controller.

  2. Enable the SMI-S of ETERNUS DX.

  3. Register an Advanced Copy Feature license and configure copy table size.

  4. Create a storage pool for volumes.

  5. (Optional) If you want to create snapshots on a different storage pool for volumes, create a storage pool for snapshots.

  6. Create a Snap Data Pool Volume (SDPV) to enable the Snap Data Pool (SDP) for snapshot creation.

  7. Configure storage ports used for OpenStack.

    • Set those storage ports to CA mode.

    • Enable the host-affinity settings of those storage ports.

      (ETERNUS CLI command for enabling host-affinity settings):

      CLI> set fc-parameters -host-affinity enable -port <CM#><CA#><Port#>
      CLI> set iscsi-parameters -host-affinity enable -port <CM#><CA#><Port#>
      
  8. Ensure a LAN connection between the cinder controller and the MNT port of the ETERNUS DX, and a SAN connection between compute nodes and the CA ports of the ETERNUS DX.

Configuration
  1. Edit cinder.conf.

    Add the following entries to /etc/cinder/cinder.conf:

    FC entries:

    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
    cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
    

    iSCSI entries:

    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
    cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
    

    If cinder_eternus_config_file is not specified, the parameter is set to the default value /etc/cinder/cinder_fujitsu_eternus_dx.xml.

  2. Create a driver configuration file.

    Create a driver configuration file in the file path specified as cinder_eternus_config_file in cinder.conf, and add parameters to the file as below:

    FC configuration:

    <?xml version='1.0' encoding='UTF-8'?>
    <FUJITSU>
    <EternusIP>0.0.0.0</EternusIP>
    <EternusPort>5988</EternusPort>
    <EternusUser>smisuser</EternusUser>
    <EternusPassword>smispassword</EternusPassword>
    <EternusPool>raid5_0001</EternusPool>
    <EternusSnapPool>raid5_0001</EternusSnapPool>
    </FUJITSU>
    

    iSCSI configuration:

    <?xml version='1.0' encoding='UTF-8'?>
    <FUJITSU>
    <EternusIP>0.0.0.0</EternusIP>
    <EternusPort>5988</EternusPort>
    <EternusUser>smisuser</EternusUser>
    <EternusPassword>smispassword</EternusPassword>
    <EternusPool>raid5_0001</EternusPool>
    <EternusSnapPool>raid5_0001</EternusSnapPool>
    <EternusISCSIIP>1.1.1.1</EternusISCSIIP>
    <EternusISCSIIP>1.1.1.2</EternusISCSIIP>
    <EternusISCSIIP>1.1.1.3</EternusISCSIIP>
    <EternusISCSIIP>1.1.1.4</EternusISCSIIP>
    </FUJITSU>
    

    Where:

    EternusIP

    IP address for the SMI-S connection of the ETERNUS DX.

    Enter the IP address of MNT port of the ETERNUS DX.

    EternusPort

    Port number for the SMI-S connection port of the ETERNUS DX.

    EternusUser

    User name for the SMI-S connection of the ETERNUS DX.

    EternusPassword

    Password for the SMI-S connection of the ETERNUS DX.

    EternusPool

    Storage pool name for volumes.

    Enter RAID Group name or TPP name in the ETERNUS DX.

    EternusSnapPool

    Storage pool name for snapshots.

    Enter RAID Group name in the ETERNUS DX.

    EternusISCSIIP (Multiple setting allowed)

    iSCSI connection IP address of the ETERNUS DX.

    Note

    • For EternusSnapPool, you can specify only a RAID Group name; a TPP name cannot be used.
    • You can specify the same RAID Group name for EternusPool and EternusSnapPool if you create volumes and snapshots on the same storage pool.
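Because EternusISCSIIP may appear multiple times, all of its occurrences apply. A minimal sketch of reading such a file with Python's xml.etree (illustration only, not the driver's actual parsing code):

```python
import xml.etree.ElementTree as ET

# Content mirroring the iSCSI configuration example above (bytes, so the
# encoding declaration on the first line is honored).
XML = b"""<?xml version='1.0' encoding='UTF-8'?>
<FUJITSU>
<EternusIP>0.0.0.0</EternusIP>
<EternusPool>raid5_0001</EternusPool>
<EternusSnapPool>raid5_0001</EternusSnapPool>
<EternusISCSIIP>1.1.1.1</EternusISCSIIP>
<EternusISCSIIP>1.1.1.2</EternusISCSIIP>
</FUJITSU>"""

root = ET.fromstring(XML)
pool = root.findtext('EternusPool')                            # single-valued option
iscsi_ips = [e.text for e in root.findall('EternusISCSIIP')]   # repeatable option

print(pool)       # raid5_0001
print(iscsi_ips)  # ['1.1.1.1', '1.1.1.2']
```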
Configuration example
  1. Edit cinder.conf:

    [DEFAULT]
    enabled_backends = DXFC, DXISCSI
    
    [DXFC]
    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
    cinder_eternus_config_file = /etc/cinder/fc.xml
    volume_backend_name = FC
    
    [DXISCSI]
    volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
    cinder_eternus_config_file = /etc/cinder/iscsi.xml
    volume_backend_name = ISCSI
    
  2. Create the driver configuration files fc.xml and iscsi.xml.

  3. Create a volume type and set extra specs to the type:

    $ cinder type-create DX_FC
    $ cinder type-key DX_FC set volume_backend_name=FC
    $ cinder type-create DX_ISCSI
    $ cinder type-key DX_ISCSI set volume_backend_name=ISCSI
    

    By issuing these commands, the volume type DX_FC is associated with the FC back end, and the type DX_ISCSI with the ISCSI back end.
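With these types in place, a new volume can be directed to a given back end by specifying the type at creation time (a sketch; the volume names and size are examples, and the --name flag assumes the v2 cinder client):

```shell
$ cinder create --volume-type DX_FC --name fc-volume 10
$ cinder create --volume-type DX_ISCSI --name iscsi-volume 10
```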

Hitachi NAS Platform iSCSI and NFS drivers

These OpenStack Block Storage volume drivers provide iSCSI and NFS support for Hitachi NAS Platform (HNAS) Models 3080, 3090, 4040, 4060, 4080, and 4100 with NAS OS 12.2 or higher.

Supported operations

The NFS and iSCSI drivers support these operations:

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Manage and unmanage a volume.
  • Manage and unmanage snapshots (HNAS NFS only).
HNAS storage requirements

Before using iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or SSC CLI to configure HNAS to work with the drivers. Additionally:

  1. General:
  • It is mandatory to have at least 1 storage pool, 1 EVS, and 1 file system to run any of the HNAS drivers.
  • The HNAS drivers report the space allocated to the file systems to cinder, so when creating a file system, make sure it has enough space for your needs.
  • The file system used should not be created as a replication target and should be mounted.
  • It is possible to configure HNAS drivers to use distinct EVSs and file systems, but all compute nodes and controllers in the cloud must have access to the EVSs.
  2. For NFS:
  • Create NFS exports, choose a path for them (it must be different from /), and set the Show snapshots option to hide and disable access.
  • For each export used, set the option norootsquash in the share Access configuration so Block Storage services can change the permissions of its volumes. For example, "* (rw, norootsquash)".
  • Make sure that all compute and controller nodes have R/W access to the shares used by the cinder HNAS driver.
  • In order to use the hardware accelerated features of HNAS NFS, we recommend setting max-nfs-version to 3. Refer to Hitachi NAS Platform command line reference to see how to configure this option.
  3. For iSCSI:
  • You must set an iSCSI domain to EVS.
Block Storage host requirements

The HNAS drivers are supported for Red Hat Enterprise Linux OpenStack Platform, SUSE OpenStack Cloud, and Ubuntu OpenStack. The following packages must be installed in all compute, controller and storage (if any) nodes:

  • nfs-utils for Red Hat Enterprise Linux OpenStack Platform
  • nfs-client for SUSE OpenStack Cloud
  • nfs-common, libc6-i386 for Ubuntu OpenStack
Package installation

If you are installing the driver from an RPM or DEB package, follow the steps below:

  1. Install the dependencies:

    In Red Hat:

    # yum install nfs-utils nfs-utils-lib
    

    Or in Ubuntu:

    # apt-get install nfs-common
    

    Or in SUSE:

    # zypper install nfs-client
    

    If you are using Ubuntu 12.04, you also need to install libc6-i386:

    # apt-get install libc6-i386
    
  2. Configure the driver as described in the Driver configuration section.

  3. Restart all Block Storage services (volume, scheduler, and backup).

Driver configuration

HNAS supports a variety of storage options and file system capabilities, which are selected through the definition of volume types combined with the use of multiple back ends and multiple services. Each back end can be configured with up to 4 service pools, which can be mapped to cinder volume types.

The configuration for the driver is read from the back-end sections of the cinder.conf. Each back-end section must have the appropriate configurations to communicate with your HNAS back end, such as the IP address of the HNAS EVS that is hosting your data, HNAS SSH access credentials, the configuration of each of the services in that back end, and so on. You can find examples of such configurations in the Configuration example section.

Note

The HNAS cinder drivers still support the XML configuration the same way as in older versions, but we recommend configuring them only through the cinder.conf file, since the XML configuration file is deprecated as of the Newton release.

Note

We do not recommend the use of the same NFS export or file system (iSCSI driver) for different back ends. If possible, configure each back end to use a different NFS export/file system.

The following is the definition of each configuration option that can be used in a HNAS back-end section in the cinder.conf file:

Configuration options in cinder.conf
Option Type Default Description
volume_backend_name Optional N/A A name that identifies the back end and can be used as an extra-spec to redirect the volumes to the referenced back end.
volume_driver Required N/A The python module path to the HNAS volume driver python class. When installing through the rpm or deb packages, you should configure this to cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver for the iSCSI back end or cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver for the NFS back end.
nfs_shares_config Required (only for NFS) /etc/cinder/nfs_shares Path to the nfs_shares file. This is required by the base cinder generic NFS driver and therefore also required by the HNAS NFS driver. This file should list, one per line, every NFS share being used by the back end. For example, all the values found in the configuration keys hnas_svcX_hdp in the HNAS NFS back-end sections.
hnas_mgmt_ip0 Required N/A HNAS management IP address. Should be the IP address of the Admin EVS. It is also the IP through which you access the web SMU administration frontend of HNAS.
hnas_chap_enabled Optional (iSCSI only) True Boolean tag used to enable CHAP authentication protocol for iSCSI driver.
hnas_username Required N/A HNAS SSH username
hds_hnas_nfs_config_file | hds_hnas_iscsi_config_file Optional (deprecated) /opt/hds/hnas/cinder_[nfs|iscsi]_conf.xml Path to the deprecated XML configuration file (only required if using the XML file)
hnas_cluster_admin_ip0 Optional (required only for HNAS multi-farm setups) N/A The IP of the HNAS farm admin. If your SMU controls more than one system or cluster, this option must be set with the IP of the desired node. This is different for HNAS multi-cluster setups, which do not require this option to be set.
hnas_ssh_private_key Optional N/A Path to the SSH private key used to authenticate to the HNAS SMU. Only required if you do not want to set hnas_password.
hnas_ssh_port Optional 22 Port on which HNAS is listening for SSH connections
hnas_password Required (unless hnas_ssh_private_key is provided) N/A HNAS password
hnas_svcX_hdp [1] Required (at least 1) N/A HDP (export or file system) where the volumes will be created. Use exports paths for the NFS backend or the file system names for the iSCSI backend (note that when using the file system name, it does not contain the IP addresses of the HDP)
hnas_svcX_iscsi_ip Required (only for iSCSI) N/A The IP of the EVS that contains the file system specified in hnas_svcX_hdp
hnas_svcX_volume_type Required N/A A unique string that is used to refer to this pool within the context of cinder. You can tell cinder to put volumes of a specific volume type into this back end, within this pool. See the Service labels and Configuration example sections for more details.
[1]Replace X with a number from 0 to 3 (keep the sequence when configuring the driver)
Service labels

HNAS driver supports differentiated types of service using the service labels. It is possible to create up to 4 types of them for each back end. (For example gold, platinum, silver, ssd, and so on).

After creating the services in the cinder.conf configuration file, you need to configure one cinder volume_type per service. Each volume_type must have the metadata service_label with the same name configured in the hnas_svcX_volume_type option of that service. See the Configuration example section for more details. If the volume_type is not set, cinder places the volume in the service pool with the largest available free space, or according to other criteria configured in the scheduler filters.

$ cinder type-create default
$ cinder type-key default set service_label=default
$ cinder type-create platinum-tier
$ cinder type-key platinum-tier set service_label=platinum
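The selection rule can be pictured with a toy sketch (hypothetical pool data; the real decision is made by the cinder scheduler and its filters): a volume whose type carries a service_label goes to the pool whose hnas_svcX_volume_type matches, and a volume without one falls back to the pool with the most free space.

```python
# Toy illustration of service-label routing (not cinder code). Each entry
# mirrors one hnas_svcX_* service; the free-space figures are made up.
pools = [
    {'volume_type': 'platinum', 'free_gb': 500},
    {'volume_type': 'default',  'free_gb': 900},
]

def pick_pool(service_label=None):
    if service_label is not None:
        return next(p for p in pools if p['volume_type'] == service_label)
    # No service_label on the volume type: fall back to the pool with the
    # largest available free space.
    return max(pools, key=lambda p: p['free_gb'])

print(pick_pool('platinum')['volume_type'])  # platinum
print(pick_pool()['volume_type'])            # default
```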
Multi-backend configuration

You can deploy multiple OpenStack HNAS driver instances (back ends), each controlling a separate HNAS or a single HNAS. If you use multiple cinder back ends, remember that each back end can host up to 4 services. Each back-end section must have the appropriate configurations to communicate with your HNAS back end, such as the IP address of the HNAS EVS that is hosting your data, HNAS SSH access credentials, the configuration of each of the services in that back end, and so on. You can find examples of such configurations in the Configuration example section.

If you want the volumes of a volume_type to go to a specific back end, you must configure an extra_spec in the volume_type with the value of the volume_backend_name option from that back end.

For a multiple NFS back ends configuration, each back end should have a separate nfs_shares_config and also a separate nfs_shares file defined (for example, nfs_shares1, nfs_shares2) with the desired shares listed on separate lines.
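For example, two NFS back-end sections, each with its own shares file (section names and paths here are illustrative):

```ini
[hnas-nfs1]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares1
volume_backend_name = hnas_nfs_backend1
# ... remaining hnas_* options as shown in the Configuration example

[hnas-nfs2]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares2
volume_backend_name = hnas_nfs_backend2
# ... remaining hnas_* options as shown in the Configuration example
```

Here /home/cinder/nfs_shares1 and /home/cinder/nfs_shares2 each list only the exports of their own back end, one per line.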

SSH configuration

Note

As of the Newton OpenStack release, the user can no longer run the driver using a locally installed instance of the SSC utility package. Instead, all communications with the HNAS back end are handled through SSH.

You can use your username and password to authenticate the Block Storage node to the HNAS back end. In order to do that, simply configure hnas_username and hnas_password in your back end section within the cinder.conf file.

For example:

[hnas-backend]
…
hnas_username = supervisor
hnas_password = supervisor

Alternatively, the HNAS cinder driver also supports SSH authentication through public key. To configure that:

  1. If you do not already have an SSH key pair, create one on the Block Storage node (leave the pass-phrase empty):

    $ mkdir -p /opt/hitachi/ssh
    $ ssh-keygen -f /opt/hitachi/ssh/hnaskey
    
  2. Change the owner of the key to cinder (or the user the volume service will be run as):

    # chown -R cinder:cinder /opt/hitachi/ssh
    
  3. Create the directory ssh_keys in the SMU server:

    $ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
    
  4. Copy the public key to the ssh_keys directory:

    $ scp /opt/hitachi/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
    
  5. Access the SMU server:

    $ ssh [manager|supervisor]@<smu-ip>
    
  6. Run the command to register the SSH keys:

    $ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
    
  7. Check the communication with HNAS in the Block Storage node:

    For multi-farm HNAS:

    $ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
    

    Or, for Single-node/Multi-Cluster:

    $ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc localhost df -a'
    
  8. Configure your backend section in cinder.conf to use your public key:

    [hnas-backend]
    …
    hnas_ssh_private_key = /opt/hitachi/ssh/hnaskey
    
Managing volumes

If there are existing volumes on HNAS that you want to import into cinder, you can use the manage volume feature to do this. Managing an existing volume is very similar to volume creation: it creates a volume entry in the cinder database, but instead of creating a new volume in the back end, it only adds a link to an existing volume.

Note

This is an admin-only feature; you must be logged in as a user with admin rights to use it.

For NFS:

  1. Under the System > Volumes tab, choose the option Manage Volume.
  2. Fill the fields Identifier, Host, Volume Name, and Volume Type with volume information to be managed:
    • Identifier: ip:/type/volume_name (For example: 172.24.44.34:/silver/volume-test)
    • Host: host@backend-name#pool_name (For example: ubuntu@hnas-nfs#test_silver)
    • Volume Name: volume_name (For example: volume-test)
    • Volume Type: choose a type of volume (For example: silver)

For iSCSI:

  1. Under the System > Volumes tab, choose the option Manage Volume.
  2. Fill the fields Identifier, Host, Volume Name, and Volume Type with volume information to be managed:
    • Identifier: filesystem-name/volume-name (For example: filesystem-test/volume-test)
    • Host: host@backend-name#pool_name (For example: ubuntu@hnas-iscsi#test_silver)
    • Volume Name: volume_name (For example: volume-test)
    • Volume Type: choose a type of volume (For example: silver)
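In both the NFS and iSCSI cases, the Host field follows the fixed shape host@backend-name#pool_name. A small sketch of how such a string decomposes (an illustrative helper, not cinder code):

```python
def split_host(host):
    """Split a 'host@backend-name#pool_name' string into its parts."""
    node, rest = host.split('@', 1)
    backend, pool = rest.split('#', 1)
    return node, backend, pool

print(split_host('ubuntu@hnas-iscsi#test_silver'))
# ('ubuntu', 'hnas-iscsi', 'test_silver')
```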

By CLI:

$ cinder manage [--id-type <id-type>][--name <name>][--description <description>]
[--volume-type <volume-type>][--availability-zone <availability-zone>]
[--metadata [<key=value> [<key=value> ...]]][--bootable] <host> <identifier>

Example:

For NFS:

$ cinder manage --name volume-test --volume-type silver
ubuntu@hnas-nfs#test_silver 172.24.44.34:/silver/volume-test

For iSCSI:

$ cinder manage --name volume-test --volume-type silver
ubuntu@hnas-iscsi#test_silver filesystem-test/volume-test
Managing snapshots

The manage snapshots feature works very similarly to the manage volumes feature currently supported by the HNAS cinder drivers. If you have a volume already managed by cinder that has snapshots not managed by cinder, you can use manage snapshots to import these snapshots and link them with their original volume.

Note

For the HNAS NFS cinder driver, the snapshots of volumes are clones of volumes that were created using file-clone-create, not the HNAS snapshot-* feature. Check the HNAS user documentation for details about these two features.

Currently, the manage snapshots function does not support importing snapshots (generally created by the storage's file-clone operation) without parent volumes, or when the parent volume is in-use. In these cases, the manage volumes feature should be used to import the snapshot as a normal cinder volume.

Also, this is an admin-only feature; you must be logged in as a user with admin rights to use it.

Note

Although there is a verification to prevent importing snapshots using non-related volumes as parents, it is possible to manage a snapshot using any related cloned volume. So, when managing a snapshot, it is extremely important to make sure that you are using the correct parent volume.

For NFS:

$ cinder snapshot-manage <volume> <identifier>
  • Identifier: evs_ip:/export_name/snapshot_name (For example: 172.24.44.34:/export1/snapshot-test)
  • Volume: Parent volume ID (For example: 061028c0-60cf-499f-99e2-2cd6afea081f)

Example:

$ cinder snapshot-manage 061028c0-60cf-499f-99e2-2cd6afea081f 172.24.44.34:/export1/snapshot-test

Note

This feature is currently available only for HNAS NFS Driver.

Configuration example

Below are configuration examples for both NFS and iSCSI backends:

  1. HNAS NFS Driver

    1. For HNAS NFS driver, create this section in your cinder.conf file:

      [hnas-nfs]
      volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
      nfs_shares_config = /home/cinder/nfs_shares
      volume_backend_name = hnas_nfs_backend
      hnas_username = supervisor
      hnas_password = supervisor
      hnas_mgmt_ip0 = 172.24.44.15
      
      hnas_svc0_volume_type = nfs_gold
      hnas_svc0_hdp = 172.24.49.21:/gold_export
      
      hnas_svc1_volume_type = nfs_platinum
      hnas_svc1_hdp = 172.24.49.21:/silver_platinum
      
      hnas_svc2_volume_type = nfs_silver
      hnas_svc2_hdp = 172.24.49.22:/silver_export
      
      hnas_svc3_volume_type = nfs_bronze
      hnas_svc3_hdp = 172.24.49.23:/bronze_export
      
    2. Add it to the enabled_backends list, under the DEFAULT section of your cinder.conf file:

      [DEFAULT]
      enabled_backends = hnas-nfs
      
    3. Add the configured exports to the nfs_shares file:

      172.24.49.21:/gold_export
      172.24.49.21:/silver_platinum
      172.24.49.22:/silver_export
      172.24.49.23:/bronze_export
      
    4. Register a volume type with cinder and associate it with this backend:

      $ cinder type-create hnas_nfs_gold
      $ cinder type-key hnas_nfs_gold set volume_backend_name=hnas_nfs_backend service_label=nfs_gold
      $ cinder type-create hnas_nfs_platinum
      $ cinder type-key hnas_nfs_platinum set volume_backend_name=hnas_nfs_backend service_label=nfs_platinum
      $ cinder type-create hnas_nfs_silver
      $ cinder type-key hnas_nfs_silver set volume_backend_name=hnas_nfs_backend service_label=nfs_silver
      $ cinder type-create hnas_nfs_bronze
      $ cinder type-key hnas_nfs_bronze set volume_backend_name=hnas_nfs_backend service_label=nfs_bronze
      
  2. HNAS iSCSI Driver

    1. For HNAS iSCSI driver, create this section in your cinder.conf file:

      [hnas-iscsi]
      volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver
      volume_backend_name = hnas_iscsi_backend
      hnas_username = supervisor
      hnas_password = supervisor
      hnas_mgmt_ip0 = 172.24.44.15
      hnas_chap_enabled = True
      
      hnas_svc0_volume_type = iscsi_gold
      hnas_svc0_hdp = FS-gold
      hnas_svc0_iscsi_ip = 172.24.49.21
      
      hnas_svc1_volume_type = iscsi_platinum
      hnas_svc1_hdp = FS-platinum
      hnas_svc1_iscsi_ip = 172.24.49.21
      
      hnas_svc2_volume_type = iscsi_silver
      hnas_svc2_hdp = FS-silver
      hnas_svc2_iscsi_ip = 172.24.49.22
      
      hnas_svc3_volume_type = iscsi_bronze
      hnas_svc3_hdp = FS-bronze
      hnas_svc3_iscsi_ip = 172.24.49.23
      
    2. Add it to the enabled_backends list, under the DEFAULT section of your cinder.conf file:

      [DEFAULT]
      enabled_backends = hnas-nfs, hnas-iscsi
      
    3. Register a volume type with cinder and associate it with this backend:

      $ cinder type-create hnas_iscsi_gold
      $ cinder type-key hnas_iscsi_gold set volume_backend_name=hnas_iscsi_backend service_label=iscsi_gold
      $ cinder type-create hnas_iscsi_platinum
      $ cinder type-key hnas_iscsi_platinum set volume_backend_name=hnas_iscsi_backend service_label=iscsi_platinum
      $ cinder type-create hnas_iscsi_silver
      $ cinder type-key hnas_iscsi_silver set volume_backend_name=hnas_iscsi_backend service_label=iscsi_silver
      $ cinder type-create hnas_iscsi_bronze
      $ cinder type-key hnas_iscsi_bronze set volume_backend_name=hnas_iscsi_backend service_label=iscsi_bronze
      
Additional notes and limitations
  • The get_volume_stats() function always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.

  • After changing the configuration on the storage node, the Block Storage driver must be restarted.

  • On Red Hat, if the system is configured to use SELinux, you need to set virt_use_nfs = on for the NFS driver to work properly:

    # setsebool -P virt_use_nfs on
    
  • It is not possible to manage a volume if there is a slash (/) or a colon (:) in the volume name.

  • File system auto-expansion: Although supported, we do not recommend using file systems with auto-expansion setting enabled because the scheduler uses the file system capacity reported by the driver to determine if new volumes can be created. For instance, in a setup with a file system that can expand to 200GB but is at 100GB capacity, with 10GB free, the scheduler will not allow a 15GB volume to be created. In this case, manual expansion would have to be triggered by an administrator. We recommend always creating the file system at the maximum capacity or periodically expanding the file system manually.

  • iSCSI driver limitations: The iSCSI driver has a limit of 1024 volumes attached to instances.

  • The hnas_svcX_volume_type option must be unique for a given back end.

  • SSC simultaneous connections limit: In very busy environments, if two or more volume hosts are configured to use the same storage, some requests (create, delete, and so on) may fail and be retried (5 attempts by default) due to an HNAS connection limit (a maximum of 5 simultaneous connections).
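The capacity behavior described in the file system auto-expansion note above can be sketched as a toy check (numbers taken from that example; not cinder's actual scheduler code):

```python
# Toy version of the scheduler's capacity check: only the reported free
# space counts, not the size the file system could auto-expand to.
def can_create(free_gb, requested_gb):
    return requested_gb <= free_gb

print(can_create(10, 15))  # False: a 15GB volume is denied with 10GB free
print(can_create(10, 8))   # True
```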

Hitachi storage volume driver

The Hitachi storage volume driver provides iSCSI and Fibre Channel support for Hitachi storage systems.

System requirements

Supported storages:

  • Hitachi Virtual Storage Platform G1000 (VSP G1000)
  • Hitachi Virtual Storage Platform (VSP)
  • Hitachi Unified Storage VM (HUS VM)
  • Hitachi Unified Storage 100 Family (HUS 100 Family)

Required software:

  • RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM

  • Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family

    Note

    HSNM2 needs to be installed under /usr/stonavm.

Required licenses:

  • Hitachi In-System Replication Software for VSP G1000/VSP/HUS VM
  • (Mandatory) ShadowImage in-system replication for HUS 100 Family
  • (Optional) Copy-on-Write Snapshot for HUS 100 Family

Additionally, the pexpect package is required.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Manage and unmanage volume snapshots.
  • Create a volume from a snapshot.
  • Copy a volume to an image.
  • Copy an image to a volume.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
Configuration
Set up Hitachi storage

You need to specify settings as described below. For details about each step, see the user's guide of the storage device. Use storage administration software such as Storage Navigator to set up the storage device so that LDEVs and host groups can be created and deleted, and LDEVs can be connected to the server and asynchronously copied.

  1. Create a Dynamic Provisioning pool.
  2. Connect the ports at the storage to the controller node and compute nodes.
  3. For VSP G1000/VSP/HUS VM, set port security to enable for the ports at the storage.
  4. For HUS 100 Family, set Host Group security or iSCSI target security to ON for the ports at the storage.
  5. For the ports at the storage, create host groups (iSCSI targets) whose names begin with HBSD- for the controller node and each compute node. Then register a WWN (initiator IQN) for the controller node and each compute node.
  6. For VSP G1000/VSP/HUS VM, perform the following:
    • Create a storage device account belonging to the Administrator User Group. (To use multiple storage devices, create the same account name for all the target storage devices, and specify the same resource group and permissions.)
    • Create a command device (In-Band), and set user authentication to ON.
    • Register the created command device to the host group for the controller node.
    • To use the Thin Image function, create a pool for Thin Image.
  7. For HUS 100 Family, perform the following:
    • Use the auunitaddauto command to register the unit name and controller of the storage device to HSNM2.
    • When connecting via iSCSI, if you are using CHAP certification, specify the same user and password as that used for the storage port.
Set up Hitachi Gigabit Fibre Channel adaptor

Change a parameter of the hfcldd driver and update the initramfs file if a Hitachi Gigabit Fibre Channel adaptor is used:

# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
# reboot
Set up Hitachi storage volume driver
  1. Create a directory:

    # mkdir /var/lock/hbsd
    # chown cinder:cinder /var/lock/hbsd
    
  2. Create volume type and volume key.

    This example shows that HUS100_SAMPLE is created as volume type and hus100_backend is registered as volume key:

    $ cinder type-create HUS100_SAMPLE
    $ cinder type-key HUS100_SAMPLE set volume_backend_name=hus100_backend
    
  3. You can specify any volume type name and volume key, as long as you use the same values consistently.

    To confirm the created volume type, please execute the following command:

    $ cinder extra-specs-list
    
  4. Edit the /etc/cinder/cinder.conf file as follows.

    If you use Fibre Channel:

    volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
    

    If you use iSCSI:

    volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
    

    Also, set volume_backend_name created by cinder type-key command:

    volume_backend_name = hus100_backend
    

    This table shows configuration options for Hitachi storage volume driver.

    Description of Hitachi storage volume driver configuration options
    Configuration option = Default value Description
    [DEFAULT]  
    hitachi_add_chap_user = False (Boolean) Add CHAP user
    hitachi_async_copy_check_interval = 10 (Integer) Interval to check copy asynchronously
    hitachi_auth_method = None (String) iSCSI authentication method
    hitachi_auth_password = HBSD-CHAP-password (String) iSCSI authentication password
    hitachi_auth_user = HBSD-CHAP-user (String) iSCSI authentication username
    hitachi_copy_check_interval = 3 (Integer) Interval to check copy
    hitachi_copy_speed = 3 (Integer) Copy speed of storage system
    hitachi_default_copy_method = FULL (String) Default copy method of storage system
    hitachi_group_range = None (String) Range of group number
    hitachi_group_request = False (Boolean) Request for creating HostGroup or iSCSI Target
    hitachi_horcm_add_conf = True (Boolean) Add to HORCM configuration
    hitachi_horcm_numbers = 200,201 (String) Instance numbers for HORCM
    hitachi_horcm_password = None (String) Password of storage system for HORCM
    hitachi_horcm_resource_lock_timeout = 600 (Integer) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200.
    hitachi_horcm_user = None (String) Username of storage system for HORCM
    hitachi_ldev_range = None (String) Range of logical device of storage system
    hitachi_pool_id = None (Integer) Pool ID of storage system
    hitachi_serial_number = None (String) Serial number of storage system
    hitachi_target_ports = None (String) Control port names for HostGroup or iSCSI Target
    hitachi_thin_pool_id = None (Integer) Thin pool ID of storage system
    hitachi_unit_name = None (String) Name of an array unit
    hitachi_zoning_request = False (Boolean) Request for FC Zone creating HostGroup
    hnas_chap_enabled = True (Boolean) Whether the chap authentication is enabled in the iSCSI target or not.
    hnas_cluster_admin_ip0 = None (String) The IP of the HNAS cluster admin. Required only for HNAS multi-cluster setups.
    hnas_mgmt_ip0 = None (IP) Management IP address of HNAS. This can be any IP in the admin address on HNAS or the SMU IP.
    hnas_password = None (String) HNAS password.
    hnas_ssc_cmd = ssc (String) Command to communicate to HNAS.
    hnas_ssh_port = 22 (Port number) Port to be used for SSH authentication.
    hnas_ssh_private_key = None (String) Path to the SSH private key used to authenticate in HNAS SMU.
    hnas_svc0_hdp = None (String) Service 0 HDP
    hnas_svc0_iscsi_ip = None (IP) Service 0 iSCSI IP
    hnas_svc0_volume_type = None (String) Service 0 volume type
    hnas_svc1_hdp = None (String) Service 1 HDP
    hnas_svc1_iscsi_ip = None (IP) Service 1 iSCSI IP
    hnas_svc1_volume_type = None (String) Service 1 volume type
    hnas_svc2_hdp = None (String) Service 2 HDP
    hnas_svc2_iscsi_ip = None (IP) Service 2 iSCSI IP
    hnas_svc2_volume_type = None (String) Service 2 volume type
    hnas_svc3_hdp = None (String) Service 3 HDP
    hnas_svc3_iscsi_ip = None (IP) Service 3 iSCSI IP
    hnas_svc3_volume_type = None (String) Service 3 volume type
    hnas_username = None (String) HNAS username.
  5. Restart the Block Storage service.

    When the startup is done, the message “MSGID0003-I: The storage backend can be used.” is output to /var/log/cinder/volume.log as follows:

    2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.
    hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None]
    MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)
    
HPE 3PAR Fibre Channel and iSCSI drivers

The HPE3PARFCDriver and HPE3PARISCSIDriver drivers, which are based on the Block Storage service (Cinder) plug-in architecture, run volume operations by communicating with the HPE 3PAR storage system over HTTP, HTTPS, and SSH connections. The HTTP and HTTPS communications use python-3parclient, which is available from the Python Package Index (PyPI).

For information about how to manage HPE 3PAR storage systems, see the HPE 3PAR user documentation.

System requirements

To use the HPE 3PAR drivers, install the following software and components on the HPE 3PAR storage system:

  • HPE 3PAR Operating System software version 3.1.3 MU1 or higher.
    • Deduplication provisioning requires SSD disks and HPE 3PAR Operating System software version 3.2.1 MU1 or higher.
    • Enabling Flash Cache Policy requires the following:
      • Array must contain SSD disks.
      • HPE 3PAR Operating System software version 3.2.1 MU2 or higher.
      • python-3parclient version 4.2.0 or newer.
      • Array must have the Adaptive Flash Cache license installed.
      • Flash Cache must be enabled on the array with the CLI command createflashcache SIZE, where size must be in 16 GB increments. For example, createflashcache 128g will create 128 GB of Flash Cache for each node pair in the array.
    • The Dynamic Optimization license is required to support any feature that results in a volume changing provisioning type or CPG. This may apply to the volume migrate, retype and manage commands.
    • The Virtual Copy License is required to support any feature that involves volume snapshots. This applies to the volume snapshot-* commands.
  • The HPE 3PAR drivers check the licenses installed on the array and disable driver capabilities based on the available licenses. This applies to thin provisioning, QoS support, and volume replication.
  • HPE 3PAR Web Services API Server must be enabled and running.
  • One Common Provisioning Group (CPG).
  • Additionally, you must install python-3parclient version 4.2.0 or newer from the Python Package Index (PyPI) on the system where the Block Storage service volume drivers are enabled.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Replicate host volumes.
  • Fail-over host volumes.
  • Fail-back host volumes.
  • Create, delete, update, snapshot, and clone consistency groups.
  • Create and delete consistency group snapshots.
  • Create a consistency group from a consistency group snapshot or another group.

Volume type support for both HPE 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.types_extra_specs volume type extra specs extension module:

  • hpe3par:snap_cpg
  • hpe3par:provisioning
  • hpe3par:persona
  • hpe3par:vvs
  • hpe3par:flash_cache

To work with the default filter scheduler, the key values are case sensitive and scoped with hpe3par:. For information about how to set the key-value pairs and associate them with a volume type, run the following command:

$ cinder help type-key

Note

Volumes that are cloned only support the extra specs keys cpg, snap_cpg, provisioning and vvs. The others are ignored. In addition the comments section of the cloned volume in the HPE 3PAR StoreServ storage array is not populated.

If volume types are not used or a particular key is not set for a volume type, the following defaults are used:

  • hpe3par:cpg - Defaults to the hpe3par_cpg setting in the cinder.conf file.
  • hpe3par:snap_cpg - Defaults to the hpe3par_cpg_snap setting in the cinder.conf file. If hpe3par_cpg_snap is not set, it defaults to the hpe3par_cpg setting.
  • hpe3par:provisioning - Defaults to thin provisioning; the valid values are thin, full, and dedup.
  • hpe3par:persona - Defaults to the 2 - Generic-ALUA persona. The valid values are:
    • 1 - Generic
    • 2 - Generic-ALUA
    • 3 - Generic-legacy
    • 4 - HPUX-legacy
    • 5 - AIX-legacy
    • 6 - EGENERA
    • 7 - ONTAP-legacy
    • 8 - VMware
    • 9 - OpenVMS
    • 10 - HPUX
    • 11 - WindowsServer
  • hpe3par:flash_cache - Defaults to false; the valid values are true and false.
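The fallback order described above can be sketched as a small lookup. This is an illustrative sketch only (option names are taken from the text; the configuration values and function name are hypothetical), not the driver's actual code:

```python
# Hypothetical cinder.conf values used for illustration.
CONF = {"hpe3par_cpg": "OpenStackCPG", "hpe3par_cpg_snap": ""}

def effective_settings(extra_specs):
    """Resolve HPE 3PAR settings from volume-type extra specs, applying
    the documented defaults when a key is not set."""
    cpg = extra_specs.get("hpe3par:cpg", CONF["hpe3par_cpg"])
    # snap_cpg falls back to hpe3par_cpg_snap, then to the CPG itself.
    snap_cpg = extra_specs.get("hpe3par:snap_cpg",
                               CONF["hpe3par_cpg_snap"] or cpg)
    return {
        "cpg": cpg,
        "snap_cpg": snap_cpg,
        "provisioning": extra_specs.get("hpe3par:provisioning", "thin"),
        "persona": extra_specs.get("hpe3par:persona", "2 - Generic-ALUA"),
        "flash_cache": extra_specs.get("hpe3par:flash_cache", "false"),
    }
```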

QoS support for both HPE 3PAR drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage qos specs extension module:

  • minBWS
  • maxBWS
  • minIOPS
  • maxIOPS
  • latency
  • priority

The QoS keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate

The following keys require that the HPE 3PAR StoreServ storage array has a Priority Optimization license installed.

hpe3par:vvs
The virtual volume set name that has been predefined by the Administrator with quality of service (QoS) rules associated to it. If you specify extra_specs hpe3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.
minBWS
The QoS I/O issue bandwidth minimum goal in MB per second. If not set, the I/O issue bandwidth rate has no minimum goal.
maxBWS
The QoS I/O issue bandwidth rate limit in MB per second. If not set, the I/O issue bandwidth rate has no limit.
minIOPS
The QoS I/O issue count minimum goal. If not set, the I/O issue count has no minimum goal.
maxIOPS
The QoS I/O issue count rate limit. If not set, the I/O issue count rate has no limit.
latency
The latency goal in milliseconds.
priority
The priority of the QoS rule over other rules. If not set, the priority is normal, valid values are low, normal and high.

Note

Since the Icehouse release, minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used together. If only one is set, the other is set to the same value.
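The pairing rule in this note can be sketched as follows. This is a minimal illustration of the described behavior, not the driver's implementation:

```python
def pair_qos_limits(specs):
    """If only one of a min/max QoS pair is set, mirror its value onto
    the other, per the documented behavior since Icehouse."""
    out = dict(specs)
    for lo, hi in (("minIOPS", "maxIOPS"), ("minBWS", "maxBWS")):
        if lo in out and hi not in out:
            out[hi] = out[lo]
        elif hi in out and lo not in out:
            out[lo] = out[hi]
    return out
```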

The following key requires that the HPE 3PAR StoreServ storage array has an Adaptive Flash Cache license installed.

  • hpe3par:flash_cache - The flash-cache policy, which can be turned on and off by setting the value to true or false.

LDAP authentication is supported if the 3PAR is configured to do so.

Enable the HPE 3PAR Fibre Channel and iSCSI drivers

The HPE3PARFCDriver and HPE3PARISCSIDriver are installed with the OpenStack software.

  1. Install the python-3parclient Python package on the OpenStack Block Storage system.

    $ pip install 'python-3parclient>=4.0,<5.0'
    
  2. Verify that the HPE 3PAR Web Services API server is enabled and running on the HPE 3PAR storage system.

    1. Log in to the HPE 3PAR storage system with administrator access.

      $ ssh 3paradm@<HP 3PAR IP Address>
      
    2. View the current state of the Web Services API Server.

      $ showwsapi
      -Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
      Enabled   Active Enabled       8008        Enabled       8080       1.1
      
    3. If the Web Services API Server is disabled, start it.

      $ startwsapi
      
  3. If the HTTP or HTTPS state is disabled, enable one of them.

    $ setwsapi -http enable
    

    or

    $ setwsapi -https enable
    

    Note

    To stop the Web Services API Server, use the stopwsapi command. For other options, run the setwsapi -h command.

  4. If you are not using an existing CPG, create a CPG on the HPE 3PAR storage system to be used as the default location for creating volumes.

  5. Make the following changes in the /etc/cinder/cinder.conf file.

    # 3PAR WS API Server URL
    hpe3par_api_url=https://10.10.0.141:8080/api/v1
    
    # 3PAR username with the 'edit' role
    hpe3par_username=edit3par
    
    # 3PAR password for the user specified in hpe3par_username
    hpe3par_password=3parpass
    
    # 3PAR CPG to use for volume creation
    hpe3par_cpg=OpenStackCPG_RAID5_NL
    
    # IP address of SAN controller for SSH access to the array
    san_ip=10.10.22.241
    
    # Username for SAN controller for SSH access to the array
    san_login=3paradm
    
    # Password for SAN controller for SSH access to the array
    san_password=3parpass
    
    # FIBRE CHANNEL (uncomment the next line to enable the FC driver)
    # volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
    
    # iSCSI (uncomment the next line to enable the iSCSI driver and
    # hpe3par_iscsi_ips or iscsi_ip_address)
    #volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
    
    # iSCSI multiple port configuration
    # hpe3par_iscsi_ips=10.10.220.253:3261,10.10.222.234
    
    # Still available for single port iSCSI configuration
    #iscsi_ip_address=10.10.220.253
    
    
    # Enable HTTP debugging to 3PAR
    hpe3par_debug=False
    
    # Enable CHAP authentication for iSCSI connections.
    hpe3par_iscsi_chap_enabled=false
    
    # The CPG to use for snapshots of volumes. If empty, hpe3par_cpg will be
    # used.
    hpe3par_cpg_snap=OpenStackSNAP_CPG
    
    # Time in hours to retain a snapshot. You can't delete it before this
    # expires.
    hpe3par_snapshot_retention=48
    
    # Time in hours when a snapshot expires and is deleted. This must be
    # larger than retention.
    hpe3par_snapshot_expiration=72
    
    # The ratio of oversubscription when thin provisioned volumes are
    # involved. Default ratio is 20.0, this means that a provisioned
    # capacity can be 20 times of the total physical capacity.
    max_over_subscription_ratio=20.0
    
    # This flag represents the percentage of reserved back-end capacity.
    reserved_percentage=15
    

    Note

    You can enable only one driver on each cinder instance unless you enable multiple back-end support. See the Cinder multiple back-end support instructions to enable this feature.

    Note

    You can configure one or more iSCSI addresses by using the hpe3par_iscsi_ips option. Separate multiple IP addresses with a comma (,). When you configure multiple addresses, the driver selects the iSCSI port with the fewest active volumes at attach time. The 3PAR array does not allow the default port 3260 to be changed, so IP ports need not be specified.

  6. Save the changes to the cinder.conf file and restart the cinder-volume service.

The HPE 3PAR Fibre Channel and iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
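The hpe3par_iscsi_ips format shown in the sample configuration (comma-separated addresses, with an optional :port suffix) and the attach-time rule of selecting the port with the fewest active volumes can be sketched as follows. This is a simplified illustration, not the driver's code; the bookkeeping of active volumes is hypothetical:

```python
def parse_iscsi_ips(value, default_port=3260):
    """Parse a comma-separated hpe3par_iscsi_ips value into (ip, port)
    pairs, defaulting to the standard iSCSI port 3260."""
    targets = []
    for entry in value.split(","):
        ip, _, port = entry.strip().partition(":")
        targets.append((ip, int(port) if port else default_port))
    return targets

def pick_target(targets, active_volumes):
    """Choose the target with the fewest active volumes, per the note
    about multiple-address configurations."""
    return min(targets, key=lambda t: active_volumes.get(t[0], 0))
```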

The following table contains all the configuration options supported by the HPE 3PAR Fibre Channel and iSCSI drivers.

Description of HPE 3PAR Fibre Channel and iSCSI drivers configuration options
Configuration option = Default value Description
[DEFAULT]  
hpe3par_api_url = (String) 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1
hpe3par_cpg = OpenStack (List) List of the CPG(s) to use for volume creation
hpe3par_cpg_snap = (String) The CPG to use for Snapshots for volumes. If empty the userCPG will be used.
hpe3par_debug = False (Boolean) Enable HTTP debugging to 3PAR
hpe3par_iscsi_chap_enabled = False (Boolean) Enable CHAP authentication for iSCSI connections.
hpe3par_iscsi_ips = (List) List of target iSCSI addresses to use.
hpe3par_password = (String) 3PAR password for the user specified in hpe3par_username
hpe3par_snapshot_expiration = (String) The time in hours when a snapshot expires and is deleted. This must be larger than retention
hpe3par_snapshot_retention = (String) The time in hours to retain a snapshot. You can’t delete it before this expires.
hpe3par_username = (String) 3PAR username with the ‘edit’ role
HPE LeftHand/StoreVirtual driver

The HPELeftHandISCSIDriver is based on the Block Storage service plug-in architecture. Volume operations are run by communicating with the HPE LeftHand/StoreVirtual system over HTTPS or SSH connections. HTTPS communications use python-lefthandclient, which is available from the Python Package Index (PyPI).

The HPELeftHandISCSIDriver can be configured to run using a REST client to communicate with the array. For performance improvements and new functionality, python-lefthandclient must be downloaded, and HPE LeftHand/StoreVirtual Operating System software version 11.5 or higher is required on the array. To configure the driver in standard mode, see HPE LeftHand/StoreVirtual REST driver.

For information about how to manage HPE LeftHand/StoreVirtual storage systems, see the HPE LeftHand/StoreVirtual user documentation.

HPE LeftHand/StoreVirtual REST driver

This section describes how to configure the HPE LeftHand/StoreVirtual Block Storage driver.

System requirements

To use the HPE LeftHand/StoreVirtual driver, do the following:

  • Install LeftHand/StoreVirtual Operating System software version 11.5 or higher on the HPE LeftHand/StoreVirtual storage system.
  • Create a cluster group.
  • Install the python-lefthandclient version 2.1.0 from the Python Package Index on the system with the enabled Block Storage service volume drivers.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Replicate host volumes.
  • Fail-over host volumes.
  • Fail-back host volumes.
  • Create, delete, update, and snapshot consistency groups.

When you use back-end assisted volume migration, both source and destination clusters must be in the same HPE LeftHand/StoreVirtual management group. The HPE LeftHand/StoreVirtual array uses native LeftHand APIs to migrate the volume. A volume cannot be migrated while it is attached or has snapshots.

Volume type support for the driver includes the ability to set the following capabilities in the Block Storage API cinder.api.contrib.types_extra_specs volume type extra specs extension module.

  • hpelh:provisioning
  • hpelh:ao
  • hpelh:data_pl

To work with the default filter scheduler, the key-value pairs are case-sensitive and scoped with hpelh:. For information about how to set the key-value pairs and associate them with a volume type, run the following command:

$ cinder help type-key
  • The following keys require specific configuration on the HPE LeftHand/StoreVirtual storage array:

    hpelh:ao

    The HPE LeftHand/StoreVirtual storage array must be configured for Adaptive Optimization.

    hpelh:data_pl

    The HPE LeftHand/StoreVirtual storage array must be able to support the Data Protection level specified by the extra spec.

  • If volume types are not used or a particular key is not set for a volume type, the following defaults are used:

    hpelh:provisioning

    Defaults to thin provisioning; the valid values are thin and full.

    hpelh:ao

    Defaults to true; the valid values are true and false.

    hpelh:data_pl

    Defaults to r-0, Network RAID-0 (None); the valid values are:

    • r-0, Network RAID-0 (None)
    • r-5, Network RAID-5 (Single Parity)
    • r-10-2, Network RAID-10 (2-Way Mirror)
    • r-10-3, Network RAID-10 (3-Way Mirror)
    • r-10-4, Network RAID-10 (4-Way Mirror)
    • r-6, Network RAID-6 (Dual Parity)
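The defaults and valid data-protection values above can be summarized in a small lookup. This is an illustrative sketch only (the dictionary and function are hypothetical, not part of the driver):

```python
# Valid hpelh:data_pl values and their meanings, from the list above.
DATA_PL = {
    "r-0": "Network RAID-0 (None)",
    "r-5": "Network RAID-5 (Single Parity)",
    "r-10-2": "Network RAID-10 (2-Way Mirror)",
    "r-10-3": "Network RAID-10 (3-Way Mirror)",
    "r-10-4": "Network RAID-10 (4-Way Mirror)",
    "r-6": "Network RAID-6 (Dual Parity)",
}

def resolve_data_pl(extra_specs):
    """Return the hpelh:data_pl value, defaulting to r-0 when unset."""
    return extra_specs.get("hpelh:data_pl", "r-0")
```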
Enable the HPE LeftHand/StoreVirtual iSCSI driver

The HPELeftHandISCSIDriver is installed with the OpenStack software.

  1. Install the python-lefthandclient Python package on the OpenStack Block Storage system.

    $ pip install 'python-lefthandclient>=2.1,<3.0'
    
  2. If you are not using an existing cluster, create a cluster on the HPE LeftHand storage system to be used as the cluster for creating volumes.

  3. Make the following changes in the /etc/cinder/cinder.conf file:

    # LeftHand WS API Server URL
    hpelefthand_api_url=https://10.10.0.141:8081/lhos
    
    # LeftHand Super user username
    hpelefthand_username=lhuser
    
    # LeftHand Super user password
    hpelefthand_password=lhpass
    
    # LeftHand cluster to use for volume creation
    hpelefthand_clustername=ClusterLefthand
    
    # LeftHand iSCSI driver
    volume_driver=cinder.volume.drivers.hpe.hpe_lefthand_iscsi.HPELeftHandISCSIDriver
    
    # Should CHAP authentication be used (default=false)
    hpelefthand_iscsi_chap_enabled=false
    
    # Enable HTTP debugging to LeftHand (default=false)
    hpelefthand_debug=false
    
    # The ratio of oversubscription when thin provisioned volumes are
    # involved. Default ratio is 20.0, this means that a provisioned capacity
    # can be 20 times of the total physical capacity.
    max_over_subscription_ratio=20.0
    
    # This flag represents the percentage of reserved back-end capacity.
    reserved_percentage=15
    

    You can enable only one driver on each cinder instance unless you enable multiple back end support. See the Cinder multiple back end support instructions to enable this feature.

    If hpelefthand_iscsi_chap_enabled is set to true, the driver associates randomly generated CHAP secrets with all hosts on the HPE LeftHand/StoreVirtual system. OpenStack Compute nodes use these secrets when creating iSCSI connections.

    Important

    CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.

    Note

    CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.

  4. Save the changes to the cinder.conf file and restart the cinder-volume service.

The HPE LeftHand/StoreVirtual driver is now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
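For illustration, the randomly generated CHAP secrets described in step 3 could be produced along these lines. The driver's actual secret format and length are not documented here; this sketch only shows the idea of a random per-host secret:

```python
import secrets
import string

def generate_chap_secret(length=16):
    """Build a random alphanumeric secret, as a stand-in for the
    driver's randomly generated per-host CHAP secrets."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```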

Note

Previous versions implemented an HPE LeftHand/StoreVirtual CLIQ driver that enabled Block Storage service driver configuration in legacy mode. This driver was removed in the Mitaka release.

HP MSA Fibre Channel and iSCSI drivers

The HPMSAFCDriver and HPMSAISCSIDriver Cinder drivers allow HP MSA 2040 or 1040 arrays to be used for Block Storage in OpenStack deployments.

System requirements

To use the HP MSA drivers, the following are required:

  • HP MSA 2040 or 1040 array with:
    • iSCSI or FC host interfaces
    • G22x firmware or later
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP must be enabled on the array
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
Configuring the array
  1. Verify that the array can be managed via an HTTPS connection. HTTP can also be used if hpmsa_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

    If you plan to use vdisks instead of virtual pools, create or identify one or more vdisks to be used for OpenStack storage; typically this will mean creating or setting aside one disk group for each of the A and B controllers.

  2. Edit the cinder.conf file to define a storage back end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in a key=value format.

    • The hpmsa_backend_name value specifies the name of the storage pool or vdisk on the array.
    • The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The rest of the options will be repeated for each storage pool in a given array: the appropriate Cinder driver name; IP address or host name of the array management interface; the username and password of an array user account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.

    iSCSI example back-end entries

    [pool-a]
    hpmsa_backend_name = A
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    hpmsa_backend_name = B
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Fibre Channel example back-end entries

    [pool-a]
    hpmsa_backend_name = A
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
    [pool-b]
    hpmsa_backend_name = B
    volume_backend_name = hpmsa-array
    volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
  3. If any hpmsa_backend_name value refers to a vdisk rather than a virtual pool, add the additional statement hpmsa_backend_type = linear to that back-end entry.

  4. If HTTPS is not enabled in the array, include hpmsa_api_protocol = http in each of the back-end definitions.

  5. If HTTPS is enabled, you can enable certificate verification with the option hpmsa_verify_certificate=True. You may also use the hpmsa_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.

  6. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example of [DEFAULT] section changes

    [DEFAULT]
    enabled_backends = pool-a,pool-b
    default_volume_type = hpmsa
    
  7. Create a new volume type for each distinct volume_backend_name value that you added in the cinder.conf file. The example below assumes that the same volume_backend_name=hpmsa-array option was specified in all of the entries, and specifies that the volume type hpmsa can be used to allocate volumes from any of them.

    Example of creating a volume type

    $ cinder type-create hpmsa
    $ cinder type-key hpmsa set volume_backend_name=hpmsa-array
    
  8. After modifying the cinder.conf file, restart the cinder-volume service.

Driver-specific options

The following table contains the configuration options that are specific to the HP MSA drivers.

Description of HP MSA volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hpmsa_api_protocol = https (String) HPMSA API interface protocol.
hpmsa_backend_name = A (String) Pool or Vdisk name to use for volume creation.
hpmsa_backend_type = virtual (String) linear (for Vdisk) or virtual (for Pool).
hpmsa_iscsi_ips = (List) List of comma-separated target iSCSI IP addresses.
hpmsa_verify_certificate = False (Boolean) Whether to verify HPMSA array SSL certificate.
hpmsa_verify_certificate_path = None (String) HPMSA array SSL certificate path.
Huawei volume driver

The Huawei volume driver provides logical volume and snapshot functions for virtual machines (VMs) through the OpenStack Block Storage service, and supports both iSCSI and Fibre Channel protocols.

Version mappings

The following table describes the version mappings among the Block Storage driver, Huawei storage system and OpenStack:

Version mappings among the Block Storage driver and Huawei storage system
  • Create, delete, expand, attach, detach, manage, and unmanage volumes; create, delete, manage, unmanage, and back up a snapshot; create, delete, and update a consistency group; create and delete a cgsnapshot; copy an image to a volume; copy a volume to an image; create a volume from a snapshot; clone a volume; QoS:

    • OceanStor T series V2R2 C00/C20/C30
    • OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00
    • OceanStor 2200V3 V300R005C00
    • OceanStor 2600V3 V300R005C00
    • OceanStor 18500/18800 V1R1C00/C20/C30, V3R3C00

  • Volume migration, auto zoning, SmartTier, SmartCache, Smart Thin/Thick, and Replication V2.1:

    • OceanStor T series V2R2 C00/C20/C30
    • OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00
    • OceanStor 2200V3 V300R005C00
    • OceanStor 2600V3 V300R005C00
    • OceanStor 18500/18800 V1R1C00/C20/C30

  • SmartPartition:

    • OceanStor T series V2R2 C00/C20/C30
    • OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00
    • OceanStor 2600V3 V300R005C00
    • OceanStor 18500/18800 V1R1C00/C20/C30

Block Storage driver installation and deployment
  1. Before installation, delete all the installation files of the Huawei OpenStack driver. The default path may be: /usr/lib/python2.7/dist-packages/cinder/volume/drivers/huawei.

    Note

    In this example, the version of Python is 2.7. If another version is used, make corresponding changes to the driver path.

  2. Copy the Block Storage driver to the Block Storage driver installation directory. Refer to step 1 to find the default directory.

  3. Refer to chapter Volume driver configuration to complete the configuration.

  4. After configuration, restart the cinder-volume service.

  5. Check the status of the services using the cinder service-list command. If the State of cinder-volume is up, the service is running correctly.

    # cinder service-list
    +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
    | Binary           | Host            | Zone | Status  | State | Updated_at                 | Disabled Reason |
    +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
    | cinder-scheduler | controller      | nova | enabled | up    | 2016-02-01T16:26:00.000000 | -               |
    | cinder-volume    | controller@v3r3 | nova | enabled | up    | 2016-02-01T16:25:53.000000 | -               |
    +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
    
Volume driver configuration

This section describes how to configure the Huawei volume driver for either iSCSI storage or Fibre Channel storage.

Pre-requisites

When creating a volume from image, install the multipath tool and add the following configuration keys in the [DEFAULT] configuration group of the /etc/cinder/cinder.conf file:

use_multipath_for_image_xfer = True
enforce_multipath_for_image_xfer = True

To configure the volume driver, follow the steps below:

  1. In /etc/cinder, create a Huawei-customized driver configuration file. The file format is XML.

  2. Change the name of the driver configuration file based on the site requirements, for example, cinder_huawei_conf.xml.

  3. Configure parameters in the driver configuration file.

    Each product has its own value for the Product parameter under the Storage XML block. A full XML file with the appropriate Product parameter is shown below:

      <?xml version="1.0" encoding="UTF-8"?>
         <config>
            <Storage>
               <Product>PRODUCT</Product>
               <Protocol>iSCSI</Protocol>
               <ControllerIP1>x.x.x.x</ControllerIP1>
               <UserName>xxxxxxxx</UserName>
               <UserPassword>xxxxxxxx</UserPassword>
            </Storage>
            <LUN>
               <LUNType>xxx</LUNType>
               <StripUnitSize>xxx</StripUnitSize>
               <WriteType>xxx</WriteType>
               <MirrorSwitch>xxx</MirrorSwitch>
               <Prefetch Type="xxx" Value="xxx" />
               <StoragePool Name="xxx" />
               <StoragePool Name="xxx" />
            </LUN>
            <iSCSI>
               <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
               <Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
            </iSCSI>
            <Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
         </config>
    
    The corresponding Product values for each product are as below:
    
    • For T series V2

      <Product>TV2</Product>
      
    • For V3

      <Product>V3</Product>
      
    • For OceanStor 18000 series

      <Product>18000</Product>
      

    The Protocol value to be used is iSCSI for iSCSI and FC for Fibre Channel as shown below:

    # For iSCSI
    <Protocol>iSCSI</Protocol>
    
    # For Fibre channel
    <Protocol>FC</Protocol>
    

    Note

    For details about the parameters in the configuration file, see the Configuration file parameters section.

  4. Configure the cinder.conf file.

    In the [DEFAULT] block of /etc/cinder/cinder.conf, add the following contents:

    • volume_driver indicates the loaded driver.
    • cinder_huawei_conf_file indicates the specified Huawei-customized configuration file.
    • hypermetro_devices indicates the list of remote storage devices for which Hypermetro is to be used.

    The added content in the [DEFAULT] block of /etc/cinder/cinder.conf, with the appropriate volume_driver value and the list of remote storage devices for each product, is as below:

    volume_driver = VOLUME_DRIVER
    cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
    hypermetro_devices = {STORAGE_DEVICE1, STORAGE_DEVICE2....}
    

    Note

    By default, the value for hypermetro_devices is None.

    The volume_driver value for each product is as below:

    # For iSCSI
    volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
    
    # For FC
    volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
    
  5. Run the service cinder-volume restart command to restart the Block Storage service.
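The Huawei driver configuration file created in the steps above is plain XML, so its values can be inspected with standard tooling. A minimal sketch with hypothetical content (not the driver's own parsing code):

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed-down driver configuration file content.
XML = """<?xml version="1.0" encoding="UTF-8"?>
<config>
  <Storage>
    <Product>V3</Product>
    <Protocol>iSCSI</Protocol>
    <UserName>admin</UserName>
    <UserPassword>secret</UserPassword>
  </Storage>
  <LUN>
    <StoragePool Name="pool0"/>
    <StoragePool Name="pool1"/>
  </LUN>
</config>"""

root = ET.fromstring(XML)
# Read the product type and the configured storage pool names.
product = root.findtext("Storage/Product")
pools = [p.get("Name") for p in root.findall("LUN/StoragePool")]
```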

Configuring iSCSI Multipathing

To configure iSCSI Multipathing, follow the steps below:

  1. Create a port group on the storage device using the DeviceManager and add service links that require multipathing into the port group.

  2. Log in to the storage device using CLI commands and enable the multiport discovery switch in the multipathing.

    developer:/>change iscsi discover_multiport switch=on
    
  3. Add the port group settings in the Huawei-customized driver configuration file and configure the port group name needed by an initiator.

    <iSCSI>
       <DefaultTargetIP>x.x.x.x</DefaultTargetIP>
       <Initiator Name="xxxxxx" TargetPortGroup="xxxx" />
    </iSCSI>
    
  4. Enable the multipathing switch of the Compute service module.

    Add iscsi_use_multipath = True in [libvirt] of /etc/nova/nova.conf.

  5. Run the service nova-compute restart command to restart the nova-compute service.

Configuring CHAP and ALUA

On a public network, any application server whose IP address resides on the same network segment as that of the storage system's iSCSI host port can access the storage system and perform read and write operations on it. This poses risks to the data security of the storage system. To ensure the storage system's access security, you can configure CHAP authentication to control application servers' access to the storage system.

Adjust the driver configuration file as follows:

<Initiator ALUA="xxx" CHAPinfo="xxx" Name="xxx" TargetIP="x.x.x.x"/>

ALUA indicates a multipathing mode. 0 indicates that ALUA is disabled. 1 indicates that ALUA is enabled. CHAPinfo indicates the user name and password authenticated by CHAP, in the format mm-user;mm-user@storage, where the user name and password are separated by a semicolon (;).
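
For example, an initiator entry with ALUA enabled and CHAP configured might look as follows (the initiator name, CHAP credentials, and target IP shown here are placeholders):

<Initiator ALUA="1" CHAPinfo="mm-user;mm-user@storage" Name="iqn.1994-05.com.example:node1" TargetIP="x.x.x.x"/>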

Configuring multiple storage

Multiple storage systems configuration example:

enabled_backends = v3_fc, 18000_fc
[v3_fc]
volume_driver = cinder.volume.drivers.huawei.huawei_t.HuaweiFCDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_v3_fc.xml
volume_backend_name = HuaweiTFCDriver
[18000_fc]
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_fc.xml
volume_backend_name = HuaweiFCDriver

Configuration file parameters

This section describes mandatory and optional configuration file parameters of the Huawei volume driver.

Mandatory parameters
Parameter Default value Description Applicable to
Product - Type of a storage product. Possible values are TV2, 18000 and V3. All
Protocol - Type of a connection protocol. The possible value is either 'iSCSI' or 'FC'. All
RestURL - Access address of the REST interface, https://x.x.x.x/devicemanager/rest/. The value x.x.x.x indicates the management IP address. OceanStor 18000 uses the preceding setting, while V2 and V3 require you to add the port number 8088, for example, https://x.x.x.x:8088/deviceManager/rest/. If you need to configure multiple RestURLs, separate them with semicolons (;). T series V2, V3, 18000

UserName - User name of a storage administrator. All
UserPassword - Password of a storage administrator. All
StoragePool - Name of a storage pool to be used. If you need to configure multiple storage pools, separate them by semicolons (;). All

Note

The value of StoragePool cannot contain Chinese characters.

Optional parameters
Parameter Default value Description Applicable to
LUNType Thin Type of the LUNs to be created. The value can be Thick or Thin. All
WriteType 1 Cache write type, possible values are: 1 (write back), 2 (write through), and 3 (mandatory write back). All
MirrorSwitch 1 Cache mirroring or not, possible values are: 0 (without mirroring) or 1 (with mirroring). All
LUNcopyWaitInterval 5 After LUN copy is enabled, the plug-in frequently queries the copy progress. You can set a value to specify the query interval. T series V2, V3, 18000

Timeout 432000 Timeout interval, in seconds, for waiting for a LUN copy on a storage device to complete. T series V2, V3, 18000

Initiator Name - Name of a compute node initiator. All
Initiator TargetIP - IP address of the iSCSI port provided for compute nodes. All
Initiator TargetPortGroup - Name of the iSCSI target port group that is provided for compute nodes. T series V2, V3, 18000

DefaultTargetIP - Default IP address of the iSCSI target port that is provided for compute nodes. All
OSType Linux Operating system of the Nova compute node’s host. All
HostIP - IP address of the Nova compute node’s host. All

Important

The Initiator Name, Initiator TargetIP, and Initiator TargetPortGroup are iSCSI parameters and therefore not applicable to FC.

IBM GPFS volume driver

IBM General Parallel File System (GPFS) is a cluster file system that provides concurrent access to file systems from multiple nodes. The storage provided by these nodes can be direct attached, network attached, SAN attached, or a combination of these methods. GPFS provides many features beyond common data access, including data replication, policy based storage management, and space efficient file snapshot and clone operations.

How the GPFS driver works

The GPFS driver enables the use of GPFS in a fashion similar to that of the NFS driver. With the GPFS driver, instances do not actually access a storage device at the block level. Instead, volume backing files are created in a GPFS file system and mapped to instances, which emulate a block device.

Note

GPFS software must be installed and running on nodes where Block Storage and Compute services run in the OpenStack environment. A GPFS file system must also be created and mounted on these nodes before starting the cinder-volume service. The details of these GPFS specific steps are covered in GPFS: Concepts, Planning, and Installation Guide and GPFS: Administration and Programming Reference.

Optionally, the Image service can be configured to store images on a GPFS file system. When a Block Storage volume is created from an image, if both image data and volume data reside in the same GPFS file system, the data from the image file is moved efficiently to the volume file using a copy-on-write optimization strategy.

Enable the GPFS driver

To use the Block Storage service with the GPFS driver, first set the volume_driver in the cinder.conf file:

volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver

The following table contains the configuration options supported by the GPFS driver.

Note

The gpfs_images_share_mode flag is only valid if the Image Service is configured to use GPFS with the gpfs_images_dir flag. When the value of this flag is copy_on_write, the paths specified by the gpfs_mount_point_base and gpfs_images_dir flags must both reside in the same GPFS file system and in the same GPFS file set.
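
A back-end section of cinder.conf that enables the copy_on_write image transfer mode might therefore look as follows; the GPFS paths are illustrative and must reside in the same GPFS file system and file set:

volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
gpfs_mount_point_base = /gpfs0/cinder/volumes
gpfs_images_dir = /gpfs0/glance/images
gpfs_images_share_mode = copy_on_write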

Volume creation options

It is possible to specify additional volume configuration options on a per-volume basis by specifying volume metadata. The volume is created using the specified options. Changing the metadata after the volume is created has no effect. The following table lists the volume creation options supported by the GPFS volume driver.

Description of GPFS storage configuration options
Configuration option = Default value Description
[DEFAULT]  
gpfs_images_dir = None (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS.
gpfs_images_share_mode = None (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: “copy” specifies that a full copy of the image is made; “copy_on_write” specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently.
gpfs_max_clone_depth = 0 (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth.
gpfs_mount_point_base = None (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored.
gpfs_sparse_volumes = True (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time.
gpfs_storage_pool = system (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used.
nas_host = (String) IP address or Hostname of NAS system.
nas_login = admin (String) User name to connect to NAS system.
nas_password = (String) Password to connect to NAS system.
nas_private_key = (String) Filename of private key to use for SSH authentication.
nas_ssh_port = 22 (Port number) SSH port to use to connect to NAS system.

This example shows the creation of a 50GB volume with an ext4 file system labeled newfs and direct IO enabled:

$ cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50

Operational notes for GPFS driver

Volume snapshots are implemented using the GPFS file clone feature. Whenever a new snapshot is created, the snapshot file is efficiently created as a read-only clone parent of the volume, and the volume file uses copy-on-write optimization strategy to minimize data movement.

Similarly when a new volume is created from a snapshot or from an existing volume, the same approach is taken. The same approach is also used when a new volume is created from an Image service image, if the source image is in raw format, and gpfs_images_share_mode is set to copy_on_write.

The GPFS driver supports encrypted volume back end feature. To encrypt a volume at rest, specify the extra specification gpfs_encryption_rest = True.
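
Assuming a volume type named gpfs_encrypted has been created, the extra specification can be set as follows (the type name is illustrative):

$ cinder type-create gpfs_encrypted
$ cinder type-key gpfs_encrypted set gpfs_encryption_rest=True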

IBM Storwize family and SVC volume driver

The volume management driver for Storwize family and SAN Volume Controller (SVC) provides OpenStack Compute instances with access to IBM Storwize family or SVC storage systems.

Configure the Storwize family and SVC system
Network configuration

The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both.

If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address associated with the volume’s preferred node (if available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address directly from the storage system. You do not need to provide these iSCSI IP addresses directly to the driver.

Note

If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize family or SVC system.

If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. The driver uses all available WWPNs to attach the volume to the instance. The driver obtains the WWPNs directly from the storage system. You do not need to provide these WWPNs directly to the driver.

Note

If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC system.

iSCSI CHAP authentication

If using iSCSI for data access and the storwize_svc_iscsi_chap_enabled is set to True, the driver will associate randomly-generated CHAP secrets with all hosts on the Storwize family system. The compute nodes use these secrets when creating iSCSI connections.

Warning

CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.

Note

Not all OpenStack Compute drivers support CHAP authentication. Please check compatibility before using.

Note

CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.

Configure storage pools

The IBM Storwize/SVC driver can allocate volumes in multiple pools. The pools should be created in advance and be provided to the driver using the storwize_svc_volpool_name configuration flag in the form of a comma-separated list. For the complete list of configuration flags, see Storwize family and SVC driver options in cinder.conf.
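
For example, to let the driver allocate volumes from two pre-created pools (the pool names are illustrative):

storwize_svc_volpool_name = openstack_pool1,openstack_pool2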

Configure user authentication for the driver

The driver requires access to the Storwize family or SVC system management interface. The driver communicates with the management using SSH. The driver should be provided with the Storwize family or SVC management IP using the san_ip flag, and the management port should be provided by the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH). Also, you can set the secondary management IP using the storwize_san_secondary_ip flag.

Note

Make sure the compute node running the cinder-volume management driver has SSH network access to the storage system.

To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is suggested to create a new user for the management driver. Please consult with your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.

Note

When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role.

If using password authentication, assign a password to the user on the Storwize or SVC system. The driver configuration flags for the user and password are san_login and san_password, respectively.

If you are using the SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading the public key: select the choose file option in the Storwize family or SVC management GUI under SSH public key. Alternatively, you may associate the SSH public key using the command-line interface; details can be found in the Storwize and SVC documentation. The private key should be provided to the driver using the san_private_key configuration flag.

Create a SSH key pair with OpenSSH

You can create an SSH key pair using OpenSSH, by running:

$ ssh-keygen -t rsa

The command prompts for a file to save the key pair. For example, if you select key as the filename, two files are created: key and key.pub. The key file holds the private SSH key and key.pub holds the public SSH key.

The command also prompts for a pass phrase, which should be empty.

The private key file should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using the storage management GUI or command-line interface.

Note

Ensure that Cinder has read permissions on the private key file.
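
With SSH key pair authentication, the relevant cinder.conf entries might look as follows (the management IP, user name, and key path are illustrative):

san_ip = 10.0.0.10
san_ssh_port = 22
san_login = openstack
san_private_key = /etc/cinder/storwize_rsa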

Configure the Storwize family and SVC driver
Enable the Storwize family and SVC driver

Set the volume driver to the Storwize family and SVC driver by setting the volume_driver option in the cinder.conf file as follows:

iSCSI:

volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver

FC:

volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver

Storwize family and SVC driver options in cinder.conf

The following options specify default values for all volumes. Some can be overridden using volume types, which are described below.

List of configuration flags for Storwize storage and SVC driver
Flag name Type Default Description
san_ip Required   Management IP or host name
san_ssh_port Optional 22 Management port
san_login Required   Management login username
san_password Required [1]   Management login password
san_private_key Required   Management login SSH private key
storwize_svc_volpool_name Required   Default pool name for volumes
storwize_svc_vol_rsize Optional 2 Initial physical allocation (percentage) [2]
storwize_svc_vol_warning Optional 0 (disabled) Space allocation warning threshold (percentage)
storwize_svc_vol_autoexpand Optional True Enable or disable volume auto expand [3]
storwize_svc_vol_grainsize Optional 256 Volume grain size in KB
storwize_svc_vol_compression Optional False Enable or disable Real-time Compression [4]
storwize_svc_vol_easytier Optional True Enable or disable Easy Tier [5]
storwize_svc_vol_iogrp Optional 0 The I/O group in which to allocate vdisks
storwize_svc_flashcopy_timeout Optional 120 FlashCopy timeout threshold [6] (seconds)
storwize_svc_iscsi_chap_enabled Optional True Configure CHAP authentication for iSCSI connections
storwize_svc_multihost_enabled Optional True Enable mapping vdisks to multiple hosts [7]
storwize_svc_vol_nofmtdisk Optional False Enable or disable fast format [8]
[1]The authentication requires either a password (san_password) or SSH private key (san_private_key). One must be specified. If both are specified, the driver uses only the SSH private key.
[2]The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or if set to -1, the driver creates full allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
[3]Defines whether thin-provisioned volumes can be auto expanded by the storage system, a value of True means that auto expansion is enabled, a value of False disables auto expansion. Details about this option can be found in the –autoexpand flag of the Storwize family and SVC command line interface mkvdisk command.
[4]Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
[5]Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
[6]The timeout threshold, in seconds, that the driver uses when creating an OpenStack snapshot. This is the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
[7]This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[8]Defines whether or not the fast formatting of thick-provisioned volumes is disabled at creation. The default value is False and a value of True means that fast format is disabled. Details about this option can be found in the –nofmtdisk flag of the Storwize family and SVC command-line interface mkvdisk command.
Description of IBM Storwize driver configuration options
Configuration option = Default value Description
[DEFAULT]  
storwize_san_secondary_ip = None (String) Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible.
storwize_svc_allow_tenant_qos = False (Boolean) Allow tenants to specify QOS on create
storwize_svc_flashcopy_rate = 50 (Integer) Specifies the Storwize FlashCopy copy rate to be used when creating a full volume copy. The default rate is 50, and the valid rates are 1-100.
storwize_svc_flashcopy_timeout = 120 (Integer) Maximum number of seconds to wait for FlashCopy to be prepared.
storwize_svc_iscsi_chap_enabled = True (Boolean) Configure CHAP authentication for iSCSI connections (Default: Enabled)
storwize_svc_multihostmap_enabled = True (Boolean) DEPRECATED: This option no longer has any effect. It is deprecated and will be removed in the next release.
storwize_svc_multipath_enabled = False (Boolean) Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
storwize_svc_stretched_cluster_partner = None (String) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: “pool2”
storwize_svc_vol_autoexpand = True (Boolean) Storage system autoexpand parameter for volumes (True/False)
storwize_svc_vol_compression = False (Boolean) Storage system compression option for volumes
storwize_svc_vol_easytier = True (Boolean) Enable Easy Tier for volumes
storwize_svc_vol_grainsize = 256 (Integer) Storage system grain size parameter for volumes (32/64/128/256)
storwize_svc_vol_iogrp = 0 (Integer) The I/O group in which to allocate volumes
storwize_svc_vol_nofmtdisk = False (Boolean) Specifies that the volume not be formatted during creation.
storwize_svc_vol_rsize = 2 (Integer) Storage system space-efficiency parameter for volumes (percentage)
storwize_svc_vol_warning = 0 (Integer) Storage system threshold for volume capacity warnings (percentage)
storwize_svc_volpool_name = volpool (List) Comma separated list of storage system storage pools for volumes.

Placement with volume types

The IBM Storwize/SVC driver exposes capabilities that can be added to the extra specs of volume types, and used by the filter scheduler to determine placement of new volumes. Make sure to prefix these keys with capabilities: to indicate that the scheduler should use them. The following extra specs are supported:

  • capabilities:volume_back-end_name - Specify a specific back-end where the volume should be created. The back-end name is a concatenation of the name of the IBM Storwize/SVC storage system as shown in lssystem, an underscore, and the name of the pool (mdisk group). For example:

    capabilities:volume_back-end_name=myV7000_openstackpool
    
  • capabilities:compression_support - Specify a back-end according to compression support. A value of True should be used to request a back-end that supports compression, and a value of False will request a back-end that does not support compression. If you do not have constraints on compression support, do not set this key. Note that specifying True does not enable compression; it only requests that the volume be placed on a back-end that supports compression. Example syntax:

    capabilities:compression_support='<is> True'
    
  • capabilities:easytier_support - Similar semantics as the compression_support key, but for specifying according to support of the Easy Tier feature. Example syntax:

    capabilities:easytier_support='<is> True'
    
  • capabilities:storage_protocol - Specifies the connection protocol used to attach volumes of this type to instances. Legal values are iSCSI and FC. This extra specs value is used for both placement and setting the protocol used for this volume. In the example syntax, note <in> is used as opposed to <is> which is used in the previous examples.

    capabilities:storage_protocol='<in> FC'
    
Configure per-volume creation options

Volume types can also be used to pass options to the IBM Storwize/SVC driver, which override the default values set in the configuration file. Contrary to the previous examples, where the capabilities scope was used to pass parameters to the Cinder scheduler, options can be passed to the IBM Storwize/SVC driver with the drivers scope.

The following extra specs keys are supported by the IBM Storwize/SVC driver:

  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • multipath
  • iogrp

These keys have the same semantics as their counterparts in the configuration file. They are set similarly; for example, rsize=2 or compression=False.

Example: Volume types

In the following example, we create a volume type to specify a controller that supports iSCSI and compression, to use iSCSI when attaching the volume, and to enable compression:

$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True

We can then create a 50GB volume using this type:

$ cinder create --display-name "compressed volume" --volume-type compressed 50

Volume types can be used, for example, to provide users with different

  • performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
  • resiliency levels (such as allocating volumes in pools with different RAID levels)
  • features (such as enabling or disabling Real-time Compression)

QOS

The Storwize driver provides QOS support for storage volumes by controlling the I/O amount. QOS is enabled by editing the /etc/cinder/cinder.conf file and setting storwize_svc_allow_tenant_qos to True.

There are three ways to set the Storwize IOThrottling parameter for storage volumes:

  • Add the qos:IOThrottling key into a QOS specification and associate it with a volume type.
  • Add the qos:IOThrottling key into an extra specification with a volume type.
  • Add the qos:IOThrottling key to the storage volume metadata.
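
For example, the first method can be carried out with the QOS specification commands of the cinder client (the specification name and throttling value are illustrative):

$ cinder qos-create storwize-iothrottling qos:IOThrottling=100
$ cinder qos-associate QOS_SPEC_ID VOLUME_TYPE_ID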

Note

If you are changing a volume type with QOS to a new volume type without QOS, the QOS configuration settings will be removed.

Operational notes for the Storwize family and SVC driver
Migrate volumes

In the context of OpenStack Block Storage’s volume migration feature, the IBM Storwize/SVC driver enables the storage’s virtualization technology. When migrating a volume from one pool to another, the volume will appear in the destination pool almost immediately, while the storage moves the data in the background.

Note

To enable this feature, both pools involved in a given volume migration must have the same values for extent_size. If the pools have different values for extent_size, the data will still be moved directly between the pools (not host-side copy), but the operation will be synchronous.

Extend volumes

The IBM Storwize/SVC driver allows for extending a volume’s size, but only for volumes without snapshots.

Snapshots and clones

Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume clones (volumes created from existing volumes) are implemented with FlashCopy, but with background copy enabled. This means that volume clones are independent, full copies. While this background copy is taking place, attempting to delete or extend the source volume will result in that operation waiting for the copy to complete.

Volume retype

The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:

  • rsize
  • warning
  • autoexpand
  • grainsize
  • compression
  • easytier
  • iogrp
  • nofmtdisk

Note

When you change the rsize, grainsize or compression properties, volume copies are asynchronously synchronized on the array.

Note

To change the iogrp property, IBM Storwize/SVC firmware version 6.4.0 or later is required.

IBM Storage volume driver

The IBM Storage Driver for OpenStack is a Block Storage driver that supports IBM XIV, IBM Spectrum Accelerate, IBM FlashSystem A9000, IBM FlashSystem A9000R, and IBM DS8000 storage systems over Fibre Channel and iSCSI.

Set the following in your cinder.conf file, and use the following options to configure it.

volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver

Description of IBM Storage driver configuration options
Configuration option = Default value Description
[DEFAULT]  
proxy = storage.proxy.IBMStorageProxy (String) Proxy driver that connects to the IBM Storage Array
san_clustername = (String) Cluster name to use for creating volumes
san_ip = (String) IP address of SAN controller
san_login = admin (String) Username for SAN controller
san_password = (String) Password for SAN controller

Note

To use the IBM Storage Driver for OpenStack you must download and install the package. For more information, see IBM Support Portal - Select Fixes.

For full documentation, see IBM Knowledge Center.

IBM FlashSystem volume driver

The volume driver for FlashSystem provides OpenStack Block Storage hosts with access to IBM FlashSystems.

Configure FlashSystem
Configure storage array

The volume driver requires a pre-defined array. You must create an array on the FlashSystem before using the volume driver. An existing array can also be used and existing data will not be deleted.

Note

FlashSystem can only create one array, so no configuration option is needed for the IBM FlashSystem driver to assign it.

Configure user authentication for the driver

The driver requires access to the FlashSystem management interface using SSH. It should be provided with the FlashSystem management IP using the san_ip flag, and the management port should be provided by the san_ssh_port flag. By default, the port value is configured to be port 22 (SSH).

Note

Make sure the compute node running the cinder-volume driver has SSH network access to the storage system.

Using password authentication, assign a password to the user on the FlashSystem. For more detail, see the driver configuration flags for the user and password here: Enable IBM FlashSystem FC driver or Enable IBM FlashSystem iSCSI driver.

IBM FlashSystem FC driver
Data Path configuration

Using Fibre Channel (FC), each FlashSystem node should have at least one WWPN port configured. If the flashsystem_multipath_enabled flag is set to True in the Block Storage service configuration file, the driver uses all available WWPNs to attach the volume to the instance. If the flag is not set, the driver uses the WWPN associated with the volume’s preferred node (if available). Otherwise, it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system. You do not need to provide these WWPNs to the driver.

Note

Using FC, ensure that the block storage hosts have FC connectivity to the FlashSystem.

Enable IBM FlashSystem FC driver

Set the volume driver to the FlashSystem driver by setting the volume_driver option in the cinder.conf configuration file, as follows:

volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver

To enable the IBM FlashSystem FC driver, configure the following options in the cinder.conf configuration file:

List of configuration flags for IBM FlashSystem FC driver
Flag name Type Default Description
san_ip Required   Management IP or host name
san_ssh_port Optional 22 Management port
san_login Required   Management login user name
san_password Required   Management login password
flashsystem_connection_protocol Required   Connection protocol should be set to FC
flashsystem_multipath_enabled Required   Enable multipath for FC connections
flashsystem_multihost_enabled Optional True Enable mapping vdisks to multiple hosts [1]
[1]This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
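
Combining these flags, a minimal back-end configuration for the FC driver in cinder.conf might look as follows (the management IP and credentials are illustrative):

volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver
san_ip = 10.0.0.20
san_login = superuser
san_password = PASSWORD
flashsystem_connection_protocol = FC
flashsystem_multipath_enabled = False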

IBM FlashSystem iSCSI driver
Network configuration

Using iSCSI, each FlashSystem node should have at least one iSCSI port configured. iSCSI IP addresses of IBM FlashSystem can be obtained by FlashSystem GUI or CLI. For more information, see the appropriate IBM Redbook for the FlashSystem.

Note

Using iSCSI, ensure that the compute nodes have iSCSI network access to the IBM FlashSystem.

Enable IBM FlashSystem iSCSI driver

Set the volume driver to the FlashSystem driver by setting the volume_driver option in the cinder.conf configuration file, as follows:

volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver

To enable IBM FlashSystem iSCSI driver, configure the following options in the cinder.conf configuration file:

List of configuration flags for IBM FlashSystem iSCSI driver
Flag name Type Default Description
san_ip Required   Management IP or host name
san_ssh_port Optional 22 Management port
san_login Required   Management login user name
san_password Required   Management login password
flashsystem_connection_protocol Required   Connection protocol should be set to iSCSI
flashsystem_multihost_enabled Optional True Enable mapping vdisks to multiple hosts [2]
iscsi_ip_address Required   Set to one of the iSCSI IP addresses obtained by FlashSystem GUI or CLI [3]
flashsystem_iscsi_portid Required   Set to the id of the iscsi_ip_address obtained by FlashSystem GUI or CLI [4]
[2]This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
[3]On the FlashSystem cluster, the iscsi_ip_address value corresponds to the seventh column, IP_address, of the output of lsportip.
[4]On the FlashSystem cluster, the port ID value corresponds to the first column, id, of the output of lsportip, not the sixth column, port_id.
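Putting the flags above together, a minimal back-end stanza in the cinder.conf file might look like the following. The addresses, credentials, and port ID shown are placeholders; replace them with the values obtained from your FlashSystem GUI or CLI:

```ini
[flashsystem-iscsi]
volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver
# Management IP or host name of the FlashSystem (placeholder)
san_ip = 192.168.0.100
san_login = superuser
san_password = passw0rd
flashsystem_connection_protocol = iSCSI
# One of the iSCSI IP addresses reported by lsportip (placeholder)
iscsi_ip_address = 192.168.1.10
# The id column of that address in the lsportip output (placeholder)
flashsystem_iscsi_portid = 1
```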
Limitations and known issues

The IBM FlashSystem driver only works when:

open_access_enabled=off
Supported operations

These operations are supported:

  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Get volume statistics.
  • Manage and unmanage a volume.
ITRI DISCO volume driver
Supported operations

The DISCO driver supports the following features:

  • Volume create and delete
  • Volume attach and detach
  • Snapshot create and delete
  • Create volume from snapshot
  • Get volume stats
  • Copy image to volume
  • Copy volume to image
  • Clone volume
  • Extend volume
Configuration options
Description of Disco volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
clone_check_timeout = 3600 (Integer) How long we check whether a clone is finished before we give up
clone_volume_timeout = 680 (Integer) Create clone volume timeout.
disco_client = 127.0.0.1 (IP) The IP of DMS client socket server
disco_client_port = 9898 (Port number) The port to connect DMS client socket server
disco_wsdl_path = /etc/cinder/DISCOService.wsdl (String) Path to the wsdl file to communicate with DISCO request manager
restore_check_timeout = 3600 (Integer) How long we check whether a restore is finished before we give up
retry_interval = 1 (Integer) How long we wait before retrying to get an item detail
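For example, to point the driver at a DMS client socket server on another host, override the defaults in the [DEFAULT] section of the cinder.conf file. The IP address below is a placeholder for your DISCO deployment:

```ini
[DEFAULT]
# IP and port of the DMS client socket server (placeholder address)
disco_client = 10.0.0.5
disco_client_port = 9898
# Path to the wsdl file used to reach the DISCO request manager
disco_wsdl_path = /etc/cinder/DISCOService.wsdl
# Timeouts, in seconds, for clone and restore completion checks
clone_volume_timeout = 680
restore_check_timeout = 3600
```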
Kaminario K2 all-flash array iSCSI and FC OpenStack Block Storage drivers

Kaminario’s K2 all-flash array uses a software-defined architecture that delivers predictable performance, scalability, and cost-efficiency.

Kaminario’s K2 all-flash iSCSI and FC arrays can provide block storage in OpenStack Block Storage through the KaminarioISCSIDriver and KaminarioFCDriver classes, respectively.

Driver requirements
  • Kaminario’s K2 all-flash iSCSI and/or FC array
  • K2 REST API version >= 2.2.0
  • The krest Python library must be installed on the Block Storage node: sudo pip install krest
  • The Block Storage Node should also have a data path to the K2 array for the following operations:
    • Create a volume from snapshot
    • Clone a volume
    • Copy volume to image
    • Copy image to volume
    • Retype ‘dedup without replication’<->’nodedup without replication’
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Manage and unmanage a volume.
  • Replicate volume with failover and failback support to K2 array.
Configure Kaminario iSCSI/FC back end
  1. Edit the /etc/cinder/cinder.conf file and define a configuration group for iSCSI/FC back end.

    [DEFAULT]
    enabled_backends = kaminario
    
    # Use DriverFilter in combination of other filters to use 'filter_function'
    # scheduler_default_filters = DriverFilter,CapabilitiesFilter
    
    [kaminario]
    # Management IP of Kaminario K2 All-Flash iSCSI/FC array
    san_ip = 10.0.0.10
    # Management username of Kaminario K2 All-Flash iSCSI/FC array
    san_login = username
    # Management password of Kaminario K2 All-Flash iSCSI/FC array
    san_password = password
    # Enable Kaminario K2 iSCSI/FC driver
    volume_driver = cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
    # volume_driver = cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver
    
    # Backend name
    volume_backend_name = kaminario
    
    # K2 driver calculates max_oversubscription_ratio on setting below
    # option as True. Default value is False
    # auto_calc_max_oversubscription_ratio = False
    
    # Set a limit on total number of volumes to be created on K2 array, for example:
    # filter_function = "capabilities.total_volumes < 250"
    
    # For replication, replication_device must be set and the replication peer must be configured
    # on the primary and the secondary K2 arrays
    # Syntax:
    #     replication_device = backend_id:<s-array-ip>,login:<s-username>,password:<s-password>,rpo:<value>
    # where:
    #     s-array-ip is the secondary K2 array IP
    #     rpo must be either 60(1 min) or multiple of 300(5 min)
    # Example:
    # replication_device = backend_id:10.0.0.50,login:kaminario,password:kaminario,rpo:300
    
    # Suppress requests library SSL certificate warnings on setting this option as True
    # Default value is 'False'
    # suppress_requests_ssl_warnings = False
    
  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

Driver options

The following table contains the configuration options that are specific to the Kaminario K2 FC and iSCSI Block Storage drivers.

Description of Kaminario volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
kaminario_nodedup_substring = K2-nodedup (String) DEPRECATED: If the volume-type name contains this substring, a nodedup volume will be created; otherwise a dedup volume will be created. This option is deprecated in favour of ‘kaminario:thin_prov_type’ in extra-specs and will be removed in the next release.
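Because the substring match is deprecated, provisioning type is selected through the kaminario:thin_prov_type extra spec on a volume type instead. A sketch of such a definition follows; the type name k2-nodedup is arbitrary, and the value nodedup is inferred from the option description above, so verify the accepted values against your driver release:

```
$ cinder type-create k2-nodedup
$ cinder type-key k2-nodedup set kaminario:thin_prov_type=nodedup
```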
Lenovo Fibre Channel and iSCSI drivers

The LenovoFCDriver and LenovoISCSIDriver Cinder drivers allow Lenovo S3200 or S2200 arrays to be used for block storage in OpenStack deployments.

System requirements

To use the Lenovo drivers, the following are required:

  • Lenovo S3200 or S2200 array with:
    • iSCSI or FC host interfaces
    • G22x firmware or later
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP must be enabled on the array
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
Configuring the array
  1. Verify that the array can be managed using an HTTPS connection. HTTP can also be used if lenovo_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

  2. Edit the cinder.conf file to define a storage back-end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in key=value format.

    • The lenovo_backend_name value specifies the name of the storage pool on the array.
    • The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that’s shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The rest of the options will be repeated for each storage pool in a given array: the appropriate Cinder driver name; IP address or host name of the array management interface; the username and password of an array user account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.

    Example: iSCSI back-end entries

    [pool-a]
    lenovo_backend_name = A
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    lenovo_backend_name = B
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Example: Fibre Channel back-end entries

    [pool-a]
    lenovo_backend_name = A
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
    [pool-b]
    lenovo_backend_name = B
    volume_backend_name = lenovo-array
    volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
  3. If HTTPS is not enabled in the array, include lenovo_api_protocol = http in each of the back-end definitions.

  4. If HTTPS is enabled, you can enable certificate verification with the option lenovo_verify_certificate=True. You may also use the lenovo_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.

  5. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example: [DEFAULT] section changes

    [DEFAULT]
    ...
    enabled_backends = pool-a,pool-b
    default_volume_type = lenovo
    ...
    
  6. Create a new volume type for each distinct volume_backend_name value that you added to the cinder.conf file. The example below assumes that the same volume_backend_name=lenovo-array option was specified in all of the entries, and specifies that the volume type lenovo can be used to allocate volumes from any of them.

    Example: Creating a volume type

    $ cinder type-create lenovo
    $ cinder type-key lenovo set volume_backend_name=lenovo-array
    
  7. After modifying the cinder.conf file, restart the cinder-volume service.

Driver-specific options

The following table contains the configuration options that are specific to the Lenovo drivers.

Description of Lenovo volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
lenovo_api_protocol = https (String) Lenovo API interface protocol.
lenovo_backend_name = A (String) Pool or Vdisk name to use for volume creation.
lenovo_backend_type = virtual (String) linear (for VDisk) or virtual (for Pool).
lenovo_iscsi_ips = (List) List of comma-separated target iSCSI IP addresses.
lenovo_verify_certificate = False (Boolean) Whether to verify Lenovo array SSL certificate.
lenovo_verify_certificate_path = None (String) Lenovo array SSL certificate path.
NetApp unified driver

The NetApp unified driver is a Block Storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies, such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol is the protocol used to initiate data storage and access operations on those storage systems, such as iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. The driver also supports oversubscription (overprovisioning) when thin-provisioned Block Storage volumes are in use on an E-Series back end. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.

Note

With the Juno release of OpenStack, Block Storage has introduced the concept of storage pools, in which a single Block Storage back end may present one or more logical storage resource pools from which Block Storage will select a storage location when provisioning volumes.

In releases prior to Juno, the NetApp unified driver contained some scheduling logic that determined into which NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) a new Block Storage volume would be placed.

With the introduction of pools, all scheduling logic is performed completely within the Block Storage scheduler, as each NetApp storage container is directly exposed to the Block Storage scheduler as a storage pool. Previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the Block Storage volume would be provisioned into.

NetApp clustered Data ONTAP storage family

The NetApp clustered Data ONTAP storage family represents a configuration group which provides Compute instances access to clustered Data ONTAP storage systems. At present it can be configured in Block Storage to work with iSCSI and NFS storage protocols.

NetApp iSCSI configuration for clustered Data ONTAP

The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN that can be accessed using the iSCSI protocol.

The iSCSI configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.

Configuration options

Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password

Note

To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Description of NetApp cDOT iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]  
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_lun_space_reservation = enabled (String) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of “reserved_percentage” in the Mitaka release.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.

Note

If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.

Note

The driver supports iSCSI CHAP uni-directional authentication. To enable it, set the use_chap_auth option to True.

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp NFS configuration for clustered Data ONTAP

The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.

The NFS configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.

Configuration options

Configure the volume driver, storage family, and storage protocol to NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
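The file referenced by nfs_shares_config lists the NFS exports the driver may provision volumes into, one export per line. A minimal sketch, in which the host address and export paths are placeholders for exports on your clustered Data ONTAP Vserver:

```
192.168.0.10:/vol_openstack1
192.168.0.10:/vol_openstack2
```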
Description of NetApp cDOT NFS driver configuration options
Configuration option = Default value Description
[DEFAULT]  
expiry_thres_minutes = 720 (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None (String) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file.
netapp_host_type = None (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_lun_ostype = None (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vserver = None (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur.
thres_avl_size_perc_start = 20 (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.

Note

Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Description of NFS storage configuration options.

Note

If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.

NetApp NFS Copy Offload client

A feature was added in the Icehouse release of the NetApp unified driver that enables Image service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image services are configured to use the NetApp NFS Copy Offload client, a controller-side copy is attempted before falling back to downloading the image from the Image service. Because the copy operation is performed completely within the storage cluster, this improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the hosts running the Image and Block Storage services.

The NetApp NFS Copy Offload client can be used in either of the following scenarios:

  • The Image service is configured to store images in an NFS share that is exported from a NetApp FlexVol volume and the destination for the new Block Storage volume will be on an NFS share exported from a different FlexVol volume than the one used by the Image service. Both FlexVols must be located within the same cluster.
  • The source image from the Image service has already been cached in an NFS image cache within a Block Storage back end. The cached image resides on a different FlexVol volume than the destination for the new Block Storage volume. Both FlexVols must be located within the same cluster.

To use this feature, you must configure the Image service, as follows:

  • Set the default_store configuration option to file.
  • Set the filesystem_store_datadir configuration option to the path to the Image service NFS export.
  • Set the show_image_direct_url configuration option to True.
  • Set the show_multiple_locations configuration option to True.
  • Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image service.
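Taken together, the Image service settings above correspond to a glance-api.conf fragment along the following lines. The datadir and metadata file paths are placeholders, and the section in which the store options live varies by release (in older releases they may sit under [DEFAULT] rather than [glance_store]):

```ini
[DEFAULT]
show_image_direct_url = True
show_multiple_locations = True

[glance_store]
default_store = file
# Path to the Image service NFS export (placeholder)
filesystem_store_datadir = /var/lib/glance/images
# JSON metadata describing the NFS export (placeholder path)
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json
```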

To use this feature, you must configure the Block Storage service, as follows:

  • Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp Copy Offload binary.

  • Set the glance_api_version configuration option to 2.

    Important

    This feature requires that:

    • The storage system must have Data ONTAP v8.2 or greater installed.
    • The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client.
    • To configure the copy offload workflow, NFS v4.0 or greater must be enabled and exported from the SVM.
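Correspondingly, the Block Storage back-end stanza gains two options. The tool path below is a placeholder for wherever you place the binary downloaded from the NetApp Support portal:

```ini
# Path to the NetApp copy offload binary (placeholder)
netapp_copyoffload_tool_path = /etc/cinder/na_copyoffload_64
glance_api_version = 2
```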

Tip

To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, please visit the Utility Toolchest page at the NetApp Support portal (login is required).

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp-supported extra specs for clustered Data ONTAP

Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when the scheduler determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties. An example of this is when you configure QoS, mirroring, or compression for a storage back end.

Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements. An example of this is the back ends that have the available space or extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the cinder type-key command.

Description of extra specs options for NetApp Unified Driver with Clustered Data ONTAP
Extra spec Type Description
netapp_raid_type String Limit the candidate volume list based on one of the following raid types: raid4, raid_dp.
netapp_disk_type String Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group [1] String Specify the name of a QoS policy group, which defines measurable Service Level Objectives, to apply to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object is defined within Data ONTAP before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp_mirrored Boolean Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored [2] Boolean Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup Boolean Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup Boolean Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression Boolean Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression Boolean Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned Boolean Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned Boolean Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
[1]Please note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
[2]In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
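As an illustration of associating these extra specs with a volume type via cinder type-key, consider the following. The volume type name gold is arbitrary, and the combination of specs shown is only an example:

```
$ cinder type-create gold
$ cinder type-key gold set netapp_mirrored="true" netapp_thick_provisioned="true"
```

Volumes requested with the gold type would then only be scheduled onto NetApp back-end pools that are mirrored and support thick provisioning.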
NetApp Data ONTAP operating in 7-Mode storage family

The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group which provides Compute instances access to 7-Mode storage systems. At present it can be configured in Block Storage to work with iSCSI and NFS storage protocols.

NetApp iSCSI configuration for Data ONTAP operating in 7-Mode

The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity, that is, a LUN which can be accessed using the iSCSI protocol.

The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack to a Data ONTAP operating in 7-Mode storage system, and it does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.

Configuration options

Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password

Note

To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Description of NetApp 7-Mode iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]  
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_size_multiplier = 1.2 (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of “reserved_percentage” in the Mitaka release.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.

Note

The driver supports iSCSI CHAP uni-directional authentication. To enable it, set the use_chap_auth option to True.
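A minimal sketch of enabling CHAP in the back end's cinder.conf stanza follows; the credential values are placeholders, and whether explicit chap_username and chap_password values are honored, rather than generated by the driver, depends on the driver version:

```ini
use_chap_auth = True
# Placeholder CHAP credentials; may be auto-generated if omitted,
# depending on the driver version.
chap_username = chapuser
chap_password = chapsecret
```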

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp NFS configuration for Data ONTAP operating in 7-Mode

The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to a Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by that storage system, which can then be accessed using the NFS protocol.

The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from Block Storage to the Data ONTAP operating in 7-Mode instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.

Configuration options

Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
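The file named by nfs_shares_config lists one NFS export per line. A minimal sketch, with a hypothetical controller address and export paths:

```ini
# /etc/cinder/nfs_shares (example values)
192.168.1.10:/vol/vol_cinder1
192.168.1.10:/vol/vol_cinder2
```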
Description of NetApp 7-Mode NFS driver configuration options
Configuration option = Default value Description
[DEFAULT]  
expiry_thres_minutes = 720 (Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None (String) The storage protocol to be used on the data path with the storage system.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_vfiler = None (String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system.
thres_avl_size_perc_start = 20 (Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned.
thres_avl_size_perc_stop = 60 (Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option.
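As an illustration, the three image-cache options in the table above interact as follows; the values shown simply restate the defaults and are not tuning recommendations:

```ini
# Remove cached images not accessed within the last 720 minutes (12 hours)
expiry_thres_minutes = 720
# Begin cleaning the cache when free space on the share falls below 20%
thres_avl_size_perc_start = 20
# Stop cleaning once free space on the share reaches 60%
thres_avl_size_perc_stop = 60
```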

Note

Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see Description of NFS storage configuration options.

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp E-Series storage family

The NetApp E-Series storage family represents a configuration group which provides OpenStack compute instances access to E-Series storage systems. At present it can be configured in Block Storage to work with the iSCSI storage protocol.

NetApp iSCSI configuration for E-Series

The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN that can be accessed using the iSCSI protocol.

The iSCSI configuration for E-Series is an interface from Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.

The use of multipath and DM-MP is required when using the Block Storage driver for E-Series. In order for Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be correctly configured:

  • The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
  • The iscsi_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza.
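The two options above might be set as follows; the [myDriver] stanza name is a placeholder for your actual back-end section:

```ini
# /etc/cinder/cinder.conf
[myDriver]
use_multipath_for_image_xfer = True

# /etc/nova/nova.conf
[libvirt]
iscsi_use_multipath = True
```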

Configuration options

Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family and netapp_storage_protocol options in the cinder.conf file as follows:

volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True

Note

To use the E-Series driver, you must override the default value of netapp_storage_family with eseries.

To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.

Description of NetApp E-Series driver configuration options
Configuration option = Default value Description
[DEFAULT]  
netapp_controller_ips = None (String) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.
netapp_enable_multiattach = False (Boolean) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host.
netapp_host_type = None (String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts.
netapp_login = None (String) Administrative user account name used to access the storage system or proxy server.
netapp_partner_backend_name = None (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_pool_name_search_pattern = (.+) (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC.
netapp_replication_aggregate_map = None (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
netapp_sa_password = None (String) Password for the NetApp E-Series storage array.
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = None (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server.
netapp_webservice_path = /devmgr/v2 (String) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application.

Tip

For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.

NetApp-supported extra specs for E-Series

Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when the scheduler determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with an E-Series storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties. An example of this is when you configure thin provisioning for a storage back end.

Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example, back ends that have sufficient available space or matching extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the cinder type-key command.

Description of extra specs options for NetApp Unified Driver with E-Series
Extra spec Type Description
netapp_thin_provisioned Boolean Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
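For example, a volume type that restricts provisioning to thin-provisioned E-Series pools might be defined as follows; eseries-thin is a hypothetical type name:

```console
$ cinder type-create eseries-thin
$ cinder type-key eseries-thin set netapp_thin_provisioned=true
```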
Upgrading prior NetApp drivers to the NetApp unified driver

NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols. This requires defining an upgrade path for NetApp drivers which existed in releases prior to Havana. This section covers the upgrade configuration for NetApp drivers to the new unified configuration and a list of deprecated NetApp drivers.

Upgraded NetApp drivers

This section describes how to update Block Storage configuration from a pre-Havana release to the unified driver format.

  • NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
    
  • NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    
  • NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = iscsi
    
  • NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
    

    NetApp unified driver configuration:

    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_7mode
    netapp_storage_protocol = nfs
    
Deprecated NetApp drivers

This section lists the NetApp drivers in earlier releases that are deprecated in Havana.

  • NetApp iSCSI driver for clustered Data ONTAP:

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
    
  • NetApp NFS driver for clustered Data ONTAP:

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
    
  • NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller:

    volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
    
  • NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller:

    volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
    

Note

For support information on deprecated NetApp drivers in the Havana release, visit the NetApp OpenStack Deployment and Operations Guide.

Nimble Storage volume driver

Nimble Storage fully integrates with the OpenStack platform through the Nimble Cinder driver, allowing a host to configure and manage Nimble Storage array features through Block Storage interfaces.

Support for the Liberty release is available from Nimble OS 2.3.8 or later.

Supported operations
  • Create, delete, clone, attach, and detach volumes
  • Create and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Extend a volume
  • Get volume statistics
  • Manage and unmanage a volume
  • Enable encryption and a default performance policy for a volume type by using extra specs
  • Force backup of an in-use volume

Note

The Nimble Storage implementation uses iSCSI only. Fibre Channel is not supported.

Nimble Storage driver configuration

Update the /etc/cinder/cinder.conf file with the following configuration.

For a basic (single back-end) configuration, add the parameters within the [DEFAULT] section as follows.

[DEFAULT]
san_ip = NIMBLE_MGMT_IP
san_login = NIMBLE_USER
san_password = NIMBLE_PASSWORD
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver

For a multiple back-end configuration, for example, one that supports multiple Nimble Storage arrays or a Nimble Storage array alongside arrays from other vendors, use the following parameters.

[DEFAULT]
enabled_backends = Nimble-Cinder

[Nimble-Cinder]
san_ip = NIMBLE_MGMT_IP
san_login = NIMBLE_USER
san_password = NIMBLE_PASSWORD
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
volume_backend_name = NIMBLE_BACKEND_NAME

For a multiple back-end configuration, create a Nimble Storage volume type and associate it with a back-end name as follows.

Note

Single back-end configuration users do not need to create the volume type.

$ cinder type-create NIMBLE_VOLUME_TYPE
$ cinder type-key NIMBLE_VOLUME_TYPE set volume_backend_name=NIMBLE_BACKEND_NAME

This section explains the variables used above:

NIMBLE_MGMT_IP
Management IP address of Nimble Storage array/group.
NIMBLE_USER
Nimble Storage account login with minimum power user (admin) privilege if RBAC is used.
NIMBLE_PASSWORD
Password of the admin account for the Nimble array.
NIMBLE_BACKEND_NAME
A volume back-end name which is specified in the cinder.conf file. This is also used while assigning a back-end name to the Nimble volume type.
NIMBLE_VOLUME_TYPE
The Nimble volume type, which is created from the CLI and associated with NIMBLE_BACKEND_NAME.

Note

Restart the cinder-api, cinder-scheduler, and cinder-volume services after updating the cinder.conf file.

Nimble driver extra spec options

The Nimble volume driver also supports the following extra spec options:

'nimble:encryption'='yes'
Used to enable encryption for a volume type.
'nimble:perfpol-name'=PERF_POL_NAME
PERF_POL_NAME is the name of a performance policy that exists on the Nimble array and should be enabled for every volume in a volume type.
'nimble:multi-initiator'='true'
Used to enable multi-initiator access for a volume type.

These extra-specs can be enabled by using the following command:

$ cinder type-key VOLUME_TYPE set KEY=VALUE

VOLUME_TYPE is the Nimble volume type and KEY and VALUE are the options mentioned above.
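For instance, assuming the NIMBLE_VOLUME_TYPE created earlier, encryption could be enabled on that type with:

```console
$ cinder type-key NIMBLE_VOLUME_TYPE set nimble:encryption=yes
```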

Configuration options

The Nimble storage driver supports these configuration options:

Description of Nimble driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nimble_pool_name = default (String) Nimble Controller pool name
nimble_subnet_label = * (String) Nimble Subnet Label
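These options go in the back-end stanza shown earlier; the pool name and subnet label below are placeholders for values defined on your Nimble array:

```ini
[Nimble-Cinder]
nimble_pool_name = default
nimble_subnet_label = mgmt-data
```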
NexentaStor 4.x NFS and iSCSI drivers

NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) platform delivering unified file (NFS and SMB) and block (FC and iSCSI) storage services. NexentaStor runs on industry standard hardware, scales from tens of terabytes to petabyte configurations, and includes all data management functionality by default.

For NexentaStor 4.x user documentation, visit https://nexenta.com/products/downloads/nexentastor.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Change volume type.
Nexenta iSCSI driver

The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta namespace. The Nexenta iSCSI volume driver should work with all versions of NexentaStor.

The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A volume and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release-specific NexentaStor documentation.

The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers.

You must configure these items for each NexentaStor appliance that the iSCSI volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta iSCSI driver
    volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
    
    # IP address of NexentaStor host (string value)
    nexenta_host=HOST-IP
    
    # Username for NexentaStor REST (string value)
    nexenta_user=USERNAME
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8457
    
    # Password for NexentaStor REST (string value)
    nexenta_password=PASSWORD
    
    # Volume on NexentaStor appliance (string value)
    nexenta_volume=volume_name
    

Note

nexenta_volume represents a zpool, which is called a volume on the NexentaStor appliance. It must be created before enabling the driver.

  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
Nexenta NFS driver

The Nexenta NFS driver allows you to use NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory.

While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that is already deployed on NexentaStor appliances.

The Nexenta NFS volume driver should work with all versions of NexentaStor. The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single-parent file system must be created for all virtual disk directories supported for OpenStack. This directory must be created and exported on each NexentaStor appliance. This should be done as specified in the release-specific NexentaStor documentation.

You must configure these items for each NexentaStor appliance that the NFS volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta NFS driver
    volume_driver=cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
    
    # Path to shares config file
    nexenta_shares_config=/home/ubuntu/shares.cfg
    

    Note

    Add your list of Nexenta NFS servers to the file you specified with the nexenta_shares_config option. For example, this is how this file should look:

    192.168.1.200:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.200:8457
    192.168.1.201:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.201:8457
    192.168.1.202:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.202:8457
    

Each line in this file represents an NFS share. The first part of the line is the NFS share URL, and the second part is the connection URL to the NexentaStor appliance.

Driver options

The Nexenta driver supports these options:

Description of Nexenta driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nexenta_blocksize = 4096 (Integer) Block size for datasets
nexenta_chunksize = 32768 (Integer) NexentaEdge iSCSI LUN object chunk size
nexenta_client_address = (String) NexentaEdge iSCSI Gateway client address for non-VIP service
nexenta_dataset_compression = on (String) Compression value for new ZFS folders.
nexenta_dataset_dedup = off (String) Deduplication value for new ZFS folders.
nexenta_dataset_description = (String) Human-readable description for the folder.
nexenta_host = (String) IP address of Nexenta SA
nexenta_iscsi_target_portal_port = 3260 (Integer) Nexenta target portal port
nexenta_mount_point_base = $state_path/mnt (String) Base directory that contains NFS share mount points
nexenta_nbd_symlinks_dir = /dev/disk/by-path (String) NexentaEdge logical path of directory to store symbolic links to NBDs
nexenta_nms_cache_volroot = True (Boolean) If set to True, cache the NexentaStor appliance volroot option value.
nexenta_password = nexenta (String) Password to connect to Nexenta SA
nexenta_rest_port = 8080 (Integer) HTTP port to connect to Nexenta REST API server
nexenta_rest_protocol = auto (String) Use http or https for REST connection (default auto)
nexenta_rrmgr_compression = 0 (Integer) Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression.
nexenta_rrmgr_connections = 2 (Integer) Number of TCP connections.
nexenta_rrmgr_tcp_buf_size = 4096 (Integer) TCP buffer size in kilobytes.
nexenta_shares_config = /etc/cinder/nfs_shares (String) File with the list of available nfs shares
nexenta_sparse = False (Boolean) Enables or disables the creation of sparse datasets
nexenta_sparsed_volumes = True (Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time.
nexenta_target_group_prefix = cinder/ (String) Prefix for iSCSI target groups on SA
nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder- (String) IQN prefix for iSCSI targets
nexenta_user = admin (String) User name to connect to Nexenta SA
nexenta_volume = cinder (String) SA Pool that holds all volumes
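When multiple back ends are enabled, the NexentaStor 4.x iSCSI options from the procedure above can instead live in a named stanza; the stanza name and values here are placeholders:

```ini
[DEFAULT]
enabled_backends = nexenta-iscsi

[nexenta-iscsi]
volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
volume_backend_name = nexenta-iscsi
nexenta_host = HOST-IP
nexenta_user = USERNAME
nexenta_password = PASSWORD
nexenta_volume = volume_name
```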
NexentaStor 5.x NFS and iSCSI drivers

NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) platform delivering unified file (NFS and SMB) and block (FC and iSCSI) storage services. NexentaStor runs on industry standard hardware, scales from tens of terabytes to petabyte configurations, and includes all data management functionality by default.

For NexentaStor user documentation, visit: http://docs.nexenta.com/.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume.
  • Change volume type.
iSCSI driver

The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A pool and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release-specific NexentaStor documentation.

The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers.

You must configure these items for each NexentaStor appliance that the iSCSI volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta iSCSI driver
    volume_driver=cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
    
    # IP address of NexentaStor host (string value)
    nexenta_host=HOST-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8080
    
    # Username for NexentaStor Rest (string value)
    nexenta_user=USERNAME
    
    # Password for NexentaStor Rest (string value)
    nexenta_password=PASSWORD
    
    # Pool on NexentaStor appliance (string value)
    nexenta_volume=volume_name
    
    # Name of a parent Volume group where cinder created zvols will reside (string value)
    nexenta_volume_group = iscsi
    

    Note

    nexenta_volume represents a zpool, which is called a pool on the NS 5.x appliance. It must be pre-created before enabling the driver.

    The volume group does not need to be pre-created; the driver creates it if it does not exist.

  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
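The per-option edits above can also be expressed as a named back-end section. The following is only a sketch assuming a single NexentaStor back end; the section name is illustrative and the placeholder values must be replaced with your appliance details:

```ini
[DEFAULT]
enabled_backends = nstor5-iscsi

[nstor5-iscsi]
volume_backend_name = nstor5-iscsi
volume_driver = cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
nexenta_host = HOST-IP
nexenta_rest_port = 8080
nexenta_user = USERNAME
nexenta_password = PASSWORD
# Pre-created pool on the appliance
nexenta_volume = cinder
# Volume group; created by the driver if missing
nexenta_volume_group = iscsi
```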

NFS driver

The Nexenta NFS driver allows you to use NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory.

While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that already is deployed on NexentaStor appliances.

The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single-parent file system must be created for all virtual disk directories supported for OpenStack. Create and export the directory on each NexentaStor appliance.

You must configure these items for each NexentaStor appliance that the NFS volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta NFS driver
    volume_driver=cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver
    
    # IP address or Hostname of NexentaStor host (string value)
    nas_host=HOST-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8080
    
    # Path to parent filesystem (string value)
    nas_share_path=POOL/FILESYSTEM
    
    # Specify NFS version
    nas_mount_options=vers=4
    
  2. Create filesystem on appliance and share via NFS. For example:

    "securityContexts": [
       {"readWriteList": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
        "root": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
        "securityModes": ["sys"]}]
    
  3. Create ACL for the filesystem. For example:

    {"type": "allow",
    "principal": "everyone@",
    "permissions": ["list_directory","read_data","add_file","write_data",
    "add_subdirectory","append_data","read_xattr","write_xattr","execute",
    "delete_child","read_attributes","write_attributes","delete","read_acl",
    "write_acl","write_owner","synchronize"],
    "flags": ["file_inherit","dir_inherit"]}
    
Driver options

Nexenta Driver supports these options:

Description of NexentaStor 5 driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nexenta_dataset_compression = on (String) Compression value for new ZFS folders.
nexenta_dataset_dedup = off (String) Deduplication value for new ZFS folders.
nexenta_dataset_description = (String) Human-readable description for the folder.
nexenta_host = (String) IP address of Nexenta SA
nexenta_iscsi_target_portal_port = 3260 (Integer) Nexenta target portal port
nexenta_mount_point_base = $state_path/mnt (String) Base directory that contains NFS share mount points
nexenta_ns5_blocksize = 32 (Integer) Block size for datasets
nexenta_rest_port = 8080 (Integer) HTTP port to connect to Nexenta REST API server
nexenta_rest_protocol = auto (String) Use http or https for REST connection (default auto)
nexenta_sparse = False (Boolean) Enables or disables the creation of sparse datasets
nexenta_sparsed_volumes = True (Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time.
nexenta_user = admin (String) User name to connect to Nexenta SA
nexenta_volume = cinder (String) SA Pool that holds all volumes
nexenta_volume_group = iscsi (String) Volume group for ns5
NexentaEdge NBD & iSCSI drivers

NexentaEdge is designed from the ground up to deliver high-performance block and object storage services and limitless scalability to next-generation OpenStack clouds, petabyte-scale active archives, and Big Data applications. NexentaEdge runs on shared-nothing clusters of industry-standard Linux servers, and builds on Nexenta IP and patent-pending Cloud Copy On Write (CCOW) technology to break new ground in terms of reliability, functionality, and cost efficiency.

For NexentaEdge user documentation, visit http://docs.nexenta.com.

iSCSI driver

The NexentaEdge cluster must be installed and configured according to the relevant Nexenta documentation. A cluster, a tenant, and a bucket must be pre-created, as well as an iSCSI service on the NexentaEdge gateway node.

The NexentaEdge iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers.

You must configure these items for each NexentaEdge cluster that the iSCSI volume driver controls:

  1. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # Enable Nexenta iSCSI driver
    volume_driver = cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
    
    # Specify the ip address for Rest API (string value)
    nexenta_rest_address = MANAGEMENT-NODE-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port=8080
    
    # Protocol used for Rest calls (string value, default=http)
    nexenta_rest_protocol = http
    
    # Username for NexentaEdge Rest (string value)
    nexenta_user=USERNAME
    
    # Password for NexentaEdge Rest (string value)
    nexenta_password=PASSWORD
    
    # Path to bucket containing iSCSI LUNs (string value)
    nexenta_lun_container = CLUSTER/TENANT/BUCKET
    
    # Name of pre-created iSCSI service (string value)
    nexenta_iscsi_service = SERVICE-NAME
    
    # IP address of the gateway node attached to iSCSI service above or
    # virtual IP address if an iSCSI Storage Service Group is configured in
    # HA mode (string value)
    nexenta_client_address = GATEWAY-NODE-IP
    
  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
NBD driver

As an alternative to using iSCSI, Amazon S3, or OpenStack Swift protocols, NexentaEdge can provide access to cluster storage via a Network Block Device (NBD) interface.

The NexentaEdge cluster must be installed and configured according to the relevant Nexenta documentation. A cluster, a tenant, and a bucket must be pre-created. The driver requires the NexentaEdge service to run on the hypervisor (Nova) node. The node must sit on the Replicast Network and run only the NexentaEdge service; it does not require physical disks.

You must configure these items for each NexentaEdge cluster that the NBD volume driver controls:

  1. Make the following changes on data node /etc/cinder/cinder.conf file.

    # Enable Nexenta NBD driver
    volume_driver = cinder.volume.drivers.nexenta.nexentaedge.nbd.NexentaEdgeNBDDriver
    
    # Specify the ip address for Rest API (string value)
    nexenta_rest_address = MANAGEMENT-NODE-IP
    
    # Port for Rest API (integer value)
    nexenta_rest_port = 8080
    
    # Protocol used for Rest calls (string value, default=http)
    nexenta_rest_protocol = http
    
    # Username for NexentaEdge Rest (string value)
    nexenta_rest_user = USERNAME
    
    # Password for NexentaEdge Rest (string value)
    nexenta_rest_password = PASSWORD
    
    # Path to bucket containing iSCSI LUNs (string value)
    nexenta_lun_container = CLUSTER/TENANT/BUCKET
    
    # Path to directory to store symbolic links to block devices
    # (string value, default=/dev/disk/by-path)
    nexenta_nbd_symlinks_dir = /PATH/TO/SYMBOLIC/LINKS
    
  2. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
Driver options

Nexenta Driver supports these options:

Description of NexentaEdge driver configuration options
Configuration option = Default value Description
[DEFAULT]  
nexenta_blocksize = 4096 (Integer) Block size for datasets
nexenta_chunksize = 32768 (Integer) NexentaEdge iSCSI LUN object chunk size
nexenta_client_address = (String) NexentaEdge iSCSI Gateway client address for non-VIP service
nexenta_iscsi_service = (String) NexentaEdge iSCSI service name
nexenta_iscsi_target_portal_port = 3260 (Integer) Nexenta target portal port
nexenta_lun_container = (String) NexentaEdge logical path of bucket for LUNs
nexenta_rest_address = (String) IP address of NexentaEdge management REST API endpoint
nexenta_rest_password = nexenta (String) Password to connect to NexentaEdge
nexenta_rest_port = 8080 (Integer) HTTP port to connect to Nexenta REST API server
nexenta_rest_protocol = auto (String) Use http or https for REST connection (default auto)
nexenta_rest_user = admin (String) User name to connect to NexentaEdge
ProphetStor Fibre Channel and iSCSI drivers

ProphetStor Fibre Channel and iSCSI drivers add support for ProphetStor Flexvisor through the Block Storage service. ProphetStor Flexvisor enables commodity x86 hardware as software-defined storage, leveraging well-proven ZFS for disk management to provide enterprise-grade storage services such as snapshots, data protection with different RAID levels, replication, and deduplication.

The DPLFCDriver and DPLISCSIDriver drivers run volume operations by communicating with the ProphetStor storage system over HTTPS.

Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
Enable the Fibre Channel or iSCSI drivers

The DPLFCDriver and DPLISCSIDriver are installed with the OpenStack software.

  1. Query storage pool id to configure dpl_pool of the cinder.conf file.

    1. Log on to the storage system with administrator access.

      $ ssh root@STORAGE_IP_ADDRESS
      
    2. View the current usable pool id.

      $ flvcli show pool list
      - d5bd40b58ea84e9da09dcf25a01fdc07 : default_pool_dc07
      
    3. Use d5bd40b58ea84e9da09dcf25a01fdc07 to configure the dpl_pool of /etc/cinder/cinder.conf file.

      Note

      Other management commands can be referenced with the help command flvcli -h.

  2. Make the following changes on the volume node /etc/cinder/cinder.conf file.

    # IP address of SAN controller (string value)
    san_ip=STORAGE IP ADDRESS
    
    # Username for SAN controller (string value)
    san_login=USERNAME
    
    # Password for SAN controller (string value)
    san_password=PASSWORD
    
    # Use thin provisioning for SAN volumes? (boolean value)
    san_thin_provision=true
    
    # The port that the iSCSI daemon is listening on. (integer value)
    iscsi_port=3260
    
    # DPL pool uuid in which DPL volumes are stored. (string value)
    dpl_pool=d5bd40b58ea84e9da09dcf25a01fdc07
    
    # DPL port number. (integer value)
    dpl_port=8357
    
    # Uncomment one of the next two options to enable the Fibre Channel or iSCSI driver.
    # FIBRE CHANNEL (uncomment the next line to enable the FC driver)
    #volume_driver=cinder.volume.drivers.prophetstor.dpl_fc.DPLFCDriver
    # iSCSI (uncomment the next line to enable the iSCSI driver)
    #volume_driver=cinder.volume.drivers.prophetstor.dpl_iscsi.DPLISCSIDriver
    
  3. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

The ProphetStor Fibre Channel or iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
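Taken together, the edits above amount to a named back-end section such as the following sketch. The iSCSI variant is shown, the section name is illustrative, and the pool id is the example value queried in step 1:

```ini
[DEFAULT]
enabled_backends = prophetstor-1

[prophetstor-1]
volume_backend_name = prophetstor-1
volume_driver = cinder.volume.drivers.prophetstor.dpl_iscsi.DPLISCSIDriver
san_ip = STORAGE_IP_ADDRESS
san_login = USERNAME
san_password = PASSWORD
san_thin_provision = true
iscsi_port = 3260
# Pool uuid obtained with 'flvcli show pool list'
dpl_pool = d5bd40b58ea84e9da09dcf25a01fdc07
dpl_port = 8357
```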

The following table contains the options supported by the ProphetStor storage driver.

Description of ProphetStor Fibre Channel and iSCSI drivers configuration options
Configuration option = Default value Description
[DEFAULT]  
dpl_pool = (String) DPL pool uuid in which DPL volumes are stored.
dpl_port = 8357 (Port number) DPL port number.
iscsi_port = 3260 (Port number) The port that the iSCSI daemon is listening on
san_ip = (String) IP address of SAN controller
san_login = admin (String) Username for SAN controller
san_password = (String) Password for SAN controller
san_thin_provision = True (Boolean) Use thin provisioning for SAN volumes?
Pure Storage iSCSI and Fibre Channel volume drivers

The Pure Storage FlashArray volume drivers for OpenStack Block Storage interact with configured Pure Storage arrays and support various operations.

Support for iSCSI storage protocol is available with the PureISCSIDriver Volume Driver class, and Fibre Channel with PureFCDriver.

All drivers are compatible with Purity FlashArrays that support the REST API version 1.2, 1.3, or 1.4 (Purity 4.0.0 and newer).

Limitations and known issues

If you do not set up the nodes hosting instances to use multipathing, all network connectivity will use a single physical port on the array. In addition to significantly limiting the available bandwidth, this means you do not have the high-availability and non-disruptive upgrade benefits provided by FlashArray. Multipathing must be used to take advantage of these benefits.

Supported operations
  • Create, delete, attach, detach, retype, clone, and extend volumes.
  • Create a volume from snapshot.
  • Create, list, and delete volume snapshots.
  • Create, list, update, and delete consistency groups.
  • Create, list, and delete consistency group snapshots.
  • Manage and unmanage a volume.
  • Manage and unmanage a snapshot.
  • Get volume statistics.
  • Create a thin provisioned volume.
  • Replicate volumes to remote Pure Storage array(s).
Configure OpenStack and Purity

You need to configure both your Purity array and your OpenStack cluster.

Note

These instructions assume that the cinder-api and cinder-scheduler services are installed and configured in your OpenStack cluster.

Configure the OpenStack Block Storage service

In these steps, you will edit the cinder.conf file to configure the OpenStack Block Storage service to enable multipathing and to use the Pure Storage FlashArray as back-end storage.

  1. Install Pure Storage PyPI module. A requirement for the Pure Storage driver is the installation of the Pure Storage Python SDK version 1.4.0 or later from PyPI.

    $ pip install purestorage
    
  2. Retrieve an API token from Purity. The OpenStack Block Storage service configuration requires an API token from Purity. Actions performed by the volume driver use this token for authorization. Also, Purity logs the volume driver’s actions as being performed by the user who owns this API token.

    If you created a Purity user account that is dedicated to managing your OpenStack Block Storage volumes, copy the API token from that user account.

    Use the appropriate create or list command below to display and copy the Purity API token:

    • To create a new API token:

      $ pureadmin create --api-token USER
      

      The following is an example output:

      $ pureadmin create --api-token pureuser
      Name      API Token                             Created
      pureuser  902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9  2014-08-04 14:50:30
      
    • To list an existing API token:

      $ pureadmin list --api-token --expose USER
      

      The following is an example output:

      $ pureadmin list --api-token --expose pureuser
      Name      API Token                             Created
      pureuser  902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9  2014-08-04 14:50:30
      
  3. Copy the API token retrieved (902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 from the examples above) to use in the next step.
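If you script this step, the token can be pulled out of the tabular output shown above. The following is only a sketch that parses the sample output format (header row, then name, token, and creation date columns); the here-string stands in for the real command output:

```shell
# Extract the API token column from sample 'pureadmin' output.
output='Name      API Token                             Created
pureuser  902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9  2014-08-04 14:50:30'

# Skip the header row (NR == 2) and print the second column.
token=$(printf '%s\n' "$output" | awk 'NR == 2 {print $2}')
echo "$token"
```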

  4. Edit the OpenStack Block Storage service configuration file. The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Pure Storage array:

    [DEFAULT]
    enabled_backends = puredriver-1
    default_volume_type = puredriver-1
    
    [puredriver-1]
    volume_backend_name = puredriver-1
    volume_driver = PURE_VOLUME_DRIVER
    san_ip = IP_PURE_MGMT
    pure_api_token = PURE_API_TOKEN
    use_multipath_for_image_xfer = True
    

    Replace the following variables accordingly:

    PURE_VOLUME_DRIVER

    Use either cinder.volume.drivers.pure.PureISCSIDriver for iSCSI or cinder.volume.drivers.pure.PureFCDriver for Fibre Channel connectivity.

    IP_PURE_MGMT

    The IP address of the Pure Storage array’s management interface or a domain name that resolves to that IP address.

    PURE_API_TOKEN

    The Purity Authorization token that the volume driver uses to perform volume management on the Pure Storage array.

Note

The volume driver automatically creates Purity host objects for initiators as needed. If CHAP authentication is enabled via the use_chap_auth setting, you must ensure there are no manually created host objects with IQNs that will be used by the OpenStack Block Storage service. The driver only modifies credentials on hosts that it manages.

Note

If you use the PureFCDriver, we recommend using the OpenStack Block Storage Fibre Channel Zone Manager.

Volume auto-eradication

To enable auto-eradication of deleted volumes, snapshots, and consistency groups on deletion, modify the following option in the cinder.conf file:

pure_eradicate_on_delete = true

By default, auto-eradication is disabled and all deleted volumes, snapshots, and consistency groups are retained on the Pure Storage array in a recoverable state for 24 hours from time of deletion.

SSL certification

To enable SSL certificate validation, modify the following option in the cinder.conf file:

driver_ssl_cert_verify = true

By default, SSL certificate validation is disabled.

To specify a non-default path to CA_Bundle file or directory with certificates of trusted CAs:

driver_ssl_cert_path = Certificate path

Note

This requires the use of Pure Storage Python SDK > 1.4.0.

Replication configuration

Add the following to the back-end specification to specify another Flash Array to replicate to:

[puredriver-1]
replication_device = backend_id:PURE2_NAME,san_ip:IP_PURE2_MGMT,api_token:PURE2_API_TOKEN

Where PURE2_NAME is the name of the remote Pure Storage system, IP_PURE2_MGMT is the management IP address of the remote array, and PURE2_API_TOKEN is the Purity Authorization token of the remote array.

Note that more than one replication_device line can be added to allow for multi-target device replication.

A volume is only replicated if the volume is of a volume-type that has the extra spec replication_enabled set to <is> True.

To create a volume type that specifies replication to remote back ends:

$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'

The following table contains the optional configuration parameters available for replication configuration with the Pure Storage array.

Option Description Default
pure_replica_interval_default Snapshot replication interval in seconds. 900
pure_replica_retention_short_term_default Retain all snapshots on target for this time (in seconds). 14400
pure_replica_retention_long_term_per_day_default Retain how many snapshots for each day. 3
pure_replica_retention_long_term_default Retain snapshots per day on target for this time (in days). 7

Note

replication-failover is only supported from the primary array to any of the multiple secondary arrays, but subsequent replication-failover is only supported back to the original primary array.

Automatic thin-provisioning/oversubscription ratio

To enable this feature where we calculate the array oversubscription ratio as (total provisioned/actual used), add the following option in the cinder.conf file:

[puredriver-1]
pure_automatic_max_oversubscription_ratio = True

By default, this is disabled and we honor the hard-coded configuration option max_over_subscription_ratio.
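As an illustration of the formula only, with 4 TiB provisioned against 512 GiB actually consumed (figures invented for the example), the computed ratio would be 8.0:

```shell
# Illustrative arithmetic: the automatic oversubscription ratio is
# total provisioned capacity divided by actual used capacity.
provisioned_gb=4096   # example: 4 TiB provisioned to volumes
used_gb=512           # example: 512 GiB actually consumed on the array
ratio=$(awk -v p="$provisioned_gb" -v u="$used_gb" 'BEGIN {printf "%.1f", p / u}')
echo "$ratio"
```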

Note

Arrays with very good data reduction rates (compression/data deduplication/thin provisioning) can get very large oversubscription rates applied.

Scheduling metrics

The volume driver reports a large number of metrics that can be useful for implementing finer control over volume placement in multi-back-end environments with the driver filter and weigher methods.

Metrics reported include, but are not limited to:

total_capacity_gb
free_capacity_gb
provisioned_capacity
total_volumes
total_snapshots
total_hosts
total_pgroups
writes_per_sec
reads_per_sec
input_per_sec
output_per_sec
usec_per_read_op
usec_per_write_op
queue_depth

Note

All total metrics include non-OpenStack managed objects on the array.

In conjunction with QOS extra-specs, you can create very complex algorithms to manage volume placement. More detailed documentation on this is available in other external documentation.
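As one hedged sketch of how such metrics can drive placement, the Block Storage scheduler's driver filter and goodness weigher accept per-back-end filter_function and goodness_function expressions over reported capabilities; the threshold and weighting values below are purely illustrative:

```ini
[DEFAULT]
scheduler_default_filters = DriverFilter

[puredriver-1]
# Stop placing new volumes here once 500 volumes exist (illustrative limit).
filter_function = "capabilities.total_volumes < 500"
# Prefer the back end with the largest share of free capacity.
goodness_function = "100 * capabilities.free_capacity_gb / capabilities.total_capacity_gb"
```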

Quobyte driver

The Quobyte volume driver enables storing Block Storage service volumes on a Quobyte storage back end. Block Storage service back ends are mapped to Quobyte volumes, and individual Block Storage service volumes are stored as files on a Quobyte volume. The back-end configuration explicitly specifies which Quobyte volume is used.

Note

Note the dual use of the term volume in the context of Block Storage service volumes and in the context of Quobyte volumes.

For more information see the Quobyte support webpage.

Supported operations

The Quobyte volume driver supports the following volume operations:

  • Create, delete, attach, and detach volumes
  • Secure NAS operation (starting with the Mitaka release, secure NAS operation is optional but remains the default)
  • Create and delete a snapshot
  • Create a volume from a snapshot
  • Extend a volume
  • Clone a volume
  • Copy a volume to image
  • Generic volume migration (no back end optimization)

Note

When running VM instances off Quobyte volumes, ensure that the Quobyte Compute service driver has been configured in your OpenStack cloud.

Configuration

To activate the Quobyte volume driver, configure the corresponding volume_driver parameter:

volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
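A minimal back-end section might therefore look like the following sketch; the section name, host, and volume name are placeholders, and the quobyte_volume_url format follows the option table in this section:

```ini
[DEFAULT]
enabled_backends = quobyte-1

[quobyte-1]
volume_backend_name = quobyte-1
volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
# Format: quobyte://<DIR host>/<volume name>
quobyte_volume_url = quobyte://quobyte.example.com/openstack-volumes
```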

The following table contains the configuration options supported by the Quobyte driver:

Description of Quobyte USP volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
quobyte_client_cfg = None (String) Path to a Quobyte Client configuration file.
quobyte_mount_point_base = $state_path/mnt (String) Base dir containing the mount point for the Quobyte volume.
quobyte_qcow2_volumes = True (Boolean) Create volumes as QCOW2 files rather than raw files.
quobyte_sparsed_volumes = True (Boolean) Create volumes as sparse files which take no space. If set to False, the volume is created as a regular file. In that case, volume creation takes a lot of time.
quobyte_volume_url = None (String) URL to the Quobyte volume e.g., quobyte://<DIR host>/<volume name>
Scality SOFS driver

The Scality SOFS volume driver interacts with configured sfused mounts.

The Scality SOFS driver manages volumes as sparse files stored on a Scality Ring through sfused. Ring connection settings and sfused options are defined in the cinder.conf file and the configuration file pointed to by the scality_sofs_config option, typically /etc/sfused.conf.

Supported operations

The Scality SOFS volume driver provides the following Block Storage volume operations:

  • Create, delete, attach (map), and detach (unmap) volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Backup a volume.
  • Restore backup to new or existing volume.
Configuration

Use the following instructions to update the cinder.conf configuration file:

[DEFAULT]
enabled_backends = scality-1

[scality-1]
volume_driver = cinder.volume.drivers.scality.ScalityDriver
volume_backend_name = scality-1

scality_sofs_config = /etc/sfused.conf
scality_sofs_mount_point = /cinder
scality_sofs_volume_dir = cinder/volumes
Compute configuration

Use the following instructions to update the nova.conf configuration file:

[libvirt]
scality_sofs_mount_point = /cinder
scality_sofs_config = /etc/sfused.conf
Description of Scality SOFS volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
scality_sofs_config = None (String) Path or URL to Scality SOFS configuration file
scality_sofs_mount_point = $state_path/scality (String) Base dir where Scality SOFS shall be mounted
scality_sofs_volume_dir = cinder/volumes (String) Path from Scality SOFS root to volume dir
SolidFire

The SolidFire Cluster is a high-performance, all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify QoS levels on a volume-by-volume basis during operation. The SolidFire cluster offers this along with deduplication, compression, and an architecture that takes full advantage of SSDs.

To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf file as follows:

volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182         # the address of your MVIP
san_login = sfadmin           # your cluster admin login
san_password = sfpassword     # your cluster admin password
sf_account_prefix = ''        # prefix for tenant account creation on solidfire cluster

Warning

Older versions of the SolidFire driver (prior to Icehouse) created a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant. Unfortunately, this account formation resulted in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. The current default implementation does not experience this issue as no prefix is used. For installations created on a prior release, the OLD default behavior can be configured by using the keyword hostname in sf_account_prefix.

Note

The SolidFire driver creates names for volumes on the back end using the format UUID-<cinder-id>. This works well, but there is a possibility of a UUID collision for customers running multiple clouds against the same cluster. In Mitaka the ability was added to eliminate the possibility of collisions by introducing the sf_volume_prefix configuration variable. On the SolidFire cluster each volume will be labeled with the prefix, providing the ability to configure unique volume names for each cloud. The default prefix is ‘UUID-‘.

Changing the setting on an existing deployment will make the existing volumes inaccessible. To introduce this change to an existing deployment, it is recommended to add the cluster as if it were a second back end and disable new provisioning to the current back end.
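For example, a deployment that wants a per-cloud label instead of the default could configure the back end as follows; the prefix value is illustrative:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182
san_login = sfadmin
san_password = sfpassword
# Label back-end volumes for this cloud; 'cloud1-' is illustrative.
# Volumes become <sf_volume_prefix><cinder-volume-id> on the cluster.
sf_volume_prefix = cloud1-
```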

Description of SolidFire driver configuration options
Configuration option = Default value Description
[DEFAULT]  
sf_account_prefix = None (String) Create SolidFire accounts with this prefix. Any string can be used here, but the string “hostname” is special and will create a prefix using the cinder node hostname (previous default behavior). The default is NO prefix.
sf_allow_template_caching = True (Boolean) Create an internal cache of copies of images when a bootable volume is created, to eliminate the fetch from glance and the qemu conversion on subsequent calls.
sf_allow_tenant_qos = False (Boolean) Allow tenants to specify QOS on create
sf_api_port = 443 (Port number) SolidFire API port. Useful if the device api is behind a proxy on a different port.
sf_emulate_512 = True (Boolean) Set 512 byte emulation on volume creation;
sf_enable_vag = False (Boolean) Utilize volume access groups on a per-tenant basis.
sf_enable_volume_mapping = True (Boolean) Create an internal mapping of volume IDs and account. Optimizes lookups and performance at the expense of memory, very large deployments may want to consider setting to False.
sf_svip = None (String) Overrides the default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud.
sf_template_account_name = openstack-vtemplate (String) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if does not exist).
sf_volume_prefix = UUID- (String) Create SolidFire volumes with this prefix. Volume names are of the form <sf_volume_prefix><cinder-volume-id>. The default is to use a prefix of ‘UUID-‘.
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Retype a volume.
  • Manage and unmanage a volume.

QoS support for the SolidFire drivers includes the ability to set the following capabilities in the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage qos specs extension module:

  • minIOPS - The minimum number of IOPS guaranteed for this volume. Default = 100.
  • maxIOPS - The maximum number of IOPS allowed for this volume. Default = 15,000.
  • burstIOPS - The maximum number of IOPS allowed over a short period of time. Default = 15,000.

The QoS keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:

$ cinder help qos-create

$ cinder help qos-key

$ cinder help qos-associate
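Putting those commands together, a sketch of creating a QoS spec with the SolidFire keys and attaching it to a volume type might look like the following; the names and IOPS values are illustrative, and QOS_SPEC_ID and VOLUME_TYPE_ID stand for the IDs returned by the first two commands:

```console
$ cinder qos-create sf-silver minIOPS=200 maxIOPS=5000 burstIOPS=8000
$ cinder type-create sf-silver
$ cinder qos-associate QOS_SPEC_ID VOLUME_TYPE_ID
```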
Synology DSM volume driver

The SynoISCSIDriver volume driver allows Synology NAS to be used for Block Storage (cinder) in OpenStack deployments. Information on OpenStack Block Storage volumes is available in the DSM Storage Manager.

System requirements

The Synology driver has the following requirements:

  • DSM version 6.0.2 or later.
  • Your Synology NAS model must support advanced file LUN, iSCSI Target, and snapshot features. Refer to the Support List for applied models.

Note

The DSM driver is available in the OpenStack Newton release.

Supported operations
  • Create, delete, clone, attach, and detach volumes.
  • Create and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Extend a volume.
  • Get volume statistics.
Driver configuration

Edit the /etc/cinder/cinder.conf file on your volume driver host.

The Synology driver uses a volume in Synology NAS as the back end of Block Storage. Every time you create a new Block Storage volume, the system creates an advanced file LUN in your Synology volume to be used for this new Block Storage volume.

The following example shows how to use different Synology NAS servers as the back end. If you want to use all volumes on your Synology NAS, add another section with the volume number to differentiate between volumes within the same Synology NAS.

[default]
enabled_backends = ds1515pV1, ds1515pV2, rs3017xsV1, others

[ds1515pV1]
# configuration for volume 1 in DS1515+

[ds1515pV2]
# configuration for volume 2 in DS1515+

[rs3017xsV1]
# configuration for volume 1 in RS3017xs

Each section indicates the volume number and the way in which the connection is established. Below is an example of a basic configuration:

[Your_Section_Name]

# Required settings
volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
iscsi_protocol = iscsi
iscsi_ip_address = DS_IP
synology_admin_port = DS_PORT
synology_username = DS_USER
synology_password = DS_PW
synology_pool_name = DS_VOLUME

# Optional settings
volume_backend_name = VOLUME_BACKEND_NAME
iscsi_secondary_ip_addresses = IP_ADDRESSES
driver_use_ssl = True
use_chap_auth = True
chap_username = CHAP_USER_NAME
chap_password = CHAP_PASSWORD
DS_PORT
This is the port for DSM management. The default value for DSM is 5000 (HTTP) and 5001 (HTTPS). To use HTTPS connections, you must set driver_use_ssl = True.
DS_IP
This is the IP address of your Synology NAS.
DS_USER
This is the account of any DSM administrator.
DS_PW
This is the password for DS_USER.
DS_VOLUME
This is the volume you want to use as the storage pool for the Block Storage service. The format is volume[0-9]+, and the number is the same as the volume number in DSM.

Note

If you set driver_use_ssl to True, synology_admin_port must be an HTTPS port.
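Putting the note above together with the settings described earlier, an HTTPS-based back-end section might look like this (a minimal sketch; the section name and DS_* placeholders follow the example above, and 5001 is the default DSM HTTPS port):

```ini
[ds1515pV1]
volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
iscsi_protocol = iscsi
iscsi_ip_address = DS_IP
synology_admin_port = 5001
synology_username = DS_USER
synology_password = DS_PW
synology_pool_name = volume1
driver_use_ssl = True
```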

Configuration options

The Synology DSM driver supports the following configuration options:

Description of Synology volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
pool_type = default (String) Pool type, like sata-2copy.
synology_admin_port = 5000 (Port number) Management port for Synology storage.
synology_device_id = None (String) Device ID used to skip the one-time password check when logging in to Synology storage, if OTP is enabled.
synology_one_time_pass = None (String) One time password of administrator for logging in Synology storage if OTP is enabled.
synology_password = (String) Password of administrator for logging in Synology storage.
synology_pool_name = (String) Volume on Synology storage to be used for creating lun.
synology_ssl_verify = True (Boolean) Whether to do certificate validation if $driver_use_ssl is True.
synology_username = admin (String) Administrator of Synology storage.
Tintri

Tintri VMstore is a smart storage that sees, learns, and adapts for cloud and virtualization. The Tintri Block Storage driver interacts with configured VMstore running Tintri OS 4.0 and above. It supports various operations using Tintri REST APIs and NFS protocol.

To configure the use of a Tintri VMstore with Block Storage, perform the following actions:

  1. Edit the /etc/cinder/cinder.conf file and set the cinder.volume.drivers.tintri options:

    volume_driver=cinder.volume.drivers.tintri.TintriDriver
    # Mount options passed to the nfs client. See section of the
    # nfs man page for details. (string value)
    nfs_mount_options = vers=3,lookupcache=pos
    
    #
    # Options defined in cinder.volume.drivers.tintri
    #
    
    # The hostname (or IP address) for the storage system (string
    # value)
    tintri_server_hostname = {Tintri VMstore Management IP}
    
    # User name for the storage system (string value)
    tintri_server_username = {username}
    
    # Password for the storage system (string value)
    tintri_server_password = {password}
    
    # API version for the storage system (string value)
    # tintri_api_version = v310
    
    # Following options needed for NFS configuration
    # File with the list of available nfs shares (string value)
    # nfs_shares_config = /etc/cinder/nfs_shares
    
    # Tintri driver will clean up unused image snapshots. With the following
    # option, users can configure how long unused image snapshots are
    # retained. Default retention policy is 30 days
    # tintri_image_cache_expiry_days = 30
    
    # Path to NFS shares file storing images.
    # Users can store Glance images in the NFS share of the same VMstore
    # mentioned in the following file. These images need to have additional
    # metadata ``provider_location`` configured in Glance, which should point
    # to the NFS share path of the image.
    # This option will enable Tintri driver to directly clone from Glance
    # image stored on same VMstore (rather than downloading image
    # from Glance)
    # tintri_image_shares_config = <Path to image NFS share>
    #
    # For example:
    # Glance image metadata
    # provider_location =>
    # nfs://<data_ip>/tintri/glance/84829294-c48b-4e16-a878-8b2581efd505
    
  2. Edit the /etc/nova/nova.conf file and set the nfs_mount_options:

    nfs_mount_options = vers=3
    
  3. Edit the /etc/cinder/nfs_shares file and add the Tintri VMstore mount points associated with the configured VMstore management IP in the cinder.conf file:

    {vmstore_data_ip}:/tintri/{submount1}
    {vmstore_data_ip}:/tintri/{submount2}
    
Description of Tintri volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
tintri_api_version = v310 (String) API version for the storage system
tintri_image_cache_expiry_days = 30 (Integer) Delete unused image snapshots older than mentioned days
tintri_image_shares_config = None (String) Path to image nfs shares file
tintri_server_hostname = None (String) The hostname (or IP address) for the storage system
tintri_server_password = None (String) Password for the storage system
tintri_server_username = None (String) User name for the storage system
Violin Memory 7000 Series FSP volume driver

The OpenStack V7000 driver package from Violin Memory adds Block Storage service support for Violin 7300 Flash Storage Platforms (FSPs) and 7700 FSP controllers.

The driver package release can be used with any OpenStack Liberty deployment for all 7300 FSPs and 7700 FSP controllers running Concerto 7.5.3 and later using Fibre Channel HBAs.

System requirements

To use the Violin driver, the following are required:

  • Violin 7300/7700 series FSP with:

    • Concerto OS version 7.5.3 or later
    • Fibre channel host interfaces
  • The Violin block storage driver: This driver implements the block storage API calls. The driver is included with the OpenStack Liberty release.

  • The vmemclient library: This is the Violin Array Communications library to the Flash Storage Platform through a REST-like interface. The client can be installed using the Python pip installer tool. Further information on vmemclient can be found on PyPI.

    pip install vmemclient
    
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.

Note

Listed operations are supported for thick, thin, and dedup luns, with the exception of cloning. Cloning operations are supported only on thick luns.

Driver configuration

Once the array is configured as per the installation guide, it is simply a matter of editing the cinder configuration file to add or modify the parameters. The driver currently only supports fibre channel configuration.

Fibre channel configuration

Set the following in your cinder.conf configuration file, replacing the variables using the guide in the following section:

volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
volume_backend_name = vmem_violinfsp
extra_capabilities = VMEM_CAPABILITIES
san_ip = VMEM_MGMT_IP
san_login = VMEM_USER_NAME
san_password = VMEM_PASSWORD
use_multipath_for_image_xfer = true
Configuration parameters

Description of configuration value placeholders:

VMEM_CAPABILITIES
User-defined capabilities: a JSON-formatted string specifying key-value pairs (string value). The capabilities specifically supported are dedup and thin. Listing these capabilities in the cinder.conf file indicates that this back end should be selected for creating LUNs whose associated volume type has dedup or thin specified in its extra_specs. For example, if the FSP is configured to support dedup LUNs, set the associated driver capabilities to: {"dedup":"True","thin":"True"}.
VMEM_MGMT_IP
External IP address or host name of the Violin 7300 Memory Gateway.
VMEM_USER_NAME
Log-in user name for the Violin 7300 Memory Gateway or 7700 FSP controller. This user must have administrative rights on the array or controller.
VMEM_PASSWORD
Log-in user’s password.
Virtuozzo Storage driver

Virtuozzo Storage is a fault-tolerant distributed storage system that is optimized for virtualization workloads. To use it, set the following option in your cinder.conf file, and use the options below to configure the driver.

volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
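A minimal back-end configuration might look like the following sketch (the back-end name is illustrative; the vzstorage_* options are listed in the table below):

```ini
[DEFAULT]
enabled_backends = vzstorage

[vzstorage]
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
vzstorage_shares_config = /etc/cinder/vzstorage_shares
vzstorage_default_volume_format = raw
```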
Description of Virtuozzo Storage volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
vzstorage_default_volume_format = raw (String) Default format that will be used when creating volumes if no volume format is specified.
vzstorage_mount_options = None (List) Mount options passed to the vzstorage client. See section of the pstorage-mount man page for details.
vzstorage_mount_point_base = $state_path/mnt (String) Base dir containing mount points for vzstorage shares.
vzstorage_shares_config = /etc/cinder/vzstorage_shares (String) File with the list of available vzstorage shares.
vzstorage_sparsed_volumes = True (Boolean) Create volumes as sparse files, which take no space, rather than regular files, when using raw format; creating regular raw files takes a lot of time.
vzstorage_used_ratio = 0.95 (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
VMware VMDK driver

Use the VMware VMDK driver to enable management of the OpenStack Block Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK files on data stores that use any VMware-compatible storage technology such as NFS, iSCSI, Fibre Channel, and vSAN.

Note

The VMware VMDK driver requires vCenter version 5.1 at minimum.

Functional context

The VMware VMDK driver connects to vCenter, through which it can dynamically access all the data stores visible from the ESX hosts in the managed cluster.

When you create a volume, the VMDK driver creates a VMDK file on demand. The VMDK file creation completes only when the volume is subsequently attached to an instance, because the data stores visible to the instance determine where to place the volume. In other words, the volume must be attached to the target instance before the service creates the VMDK file.

The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra disk. Once attached, you can log in to the running vSphere VM to rescan and discover this extra disk.

With the update to ESX version 6.0, the VMDK driver now supports NFS version 4.1.

Configuration

The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to the same server.

In the nova.conf file, use this option to define the Compute driver:

compute_driver = vmwareapi.VMwareVCDriver

In the cinder.conf file, use this option to define the volume driver:

volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver

The following table lists various options that the drivers support for the OpenStack Block Storage configuration (cinder.conf):

Description of VMware configuration options
Configuration option = Default value Description
[DEFAULT]  
vmware_api_retry_count = 10 (Integer) Number of times VMware vCenter server API must be retried upon connection related issues.
vmware_ca_file = None (String) CA bundle file to use in verifying the vCenter server certificate.
vmware_cluster_name = None (Multi-valued) Name of a vCenter compute cluster where volumes should be created.
vmware_host_ip = None (String) IP address for connecting to VMware vCenter server.
vmware_host_password = None (String) Password for authenticating with VMware vCenter server.
vmware_host_port = 443 (Port number) Port number for connecting to VMware vCenter server.
vmware_host_username = None (String) Username for authenticating with VMware vCenter server.
vmware_host_version = None (String) Optional string specifying the VMware vCenter server version. The driver attempts to retrieve the version from VMware vCenter server. Set this configuration only if you want to override the vCenter server version.
vmware_image_transfer_timeout_secs = 7200 (Integer) Timeout in seconds for VMDK volume transfer between Cinder and Glance.
vmware_insecure = False (Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if “vmware_ca_file” is set.
vmware_max_objects_retrieval = 100 (Integer) Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value.
vmware_task_poll_interval = 2.0 (Floating point) The interval (in seconds) for polling remote tasks invoked on VMware vCenter server.
vmware_tmp_dir = /tmp (String) Directory where virtual disks are stored during volume backup and restore.
vmware_volume_folder = Volumes (String) Name of the vCenter inventory folder that will contain Cinder volumes. This folder will be created under “OpenStack/<project_folder>”, where project_folder is of format “Project (<volume_project_id>)”.
vmware_wsdl_location = None (String) Optional VIM service WSDL Location e.g http://<server>/vimService.wsdl. Optional over-ride to default location for bug work-arounds.
VMDK disk type

The VMware VMDK drivers support the creation of VMDK disk file types thin, lazyZeroedThick (sometimes called thick or flat), or eagerZeroedThick.

A thin virtual disk is allocated and zeroed on demand as the space is used. Unused space on a thin disk is available to other users.

A lazy zeroed thick virtual disk will have all space allocated at disk creation. This reserves the entire disk space, so it is not available to other users at any time.

An eager zeroed thick virtual disk is similar to a lazy zeroed thick disk, in that the entire disk is allocated at creation. However, in this type, any previous data will be wiped clean on the disk before the write. This can mean that the disk will take longer to create, but can also prevent issues with stale data on physical media.

Use the vmware:vmdk_type extra spec key with the appropriate value to specify the VMDK disk file type. This table shows the mapping between the extra spec entry and the VMDK disk file type:

Extra spec entry to VMDK disk file type mapping
Disk file type Extra spec key Extra spec value
thin vmware:vmdk_type thin
lazyZeroedThick vmware:vmdk_type thick
eagerZeroedThick vmware:vmdk_type eagerZeroedThick

If you do not specify a vmdk_type extra spec entry, the disk file type will default to thin.

The following example shows how to create a lazyZeroedThick VMDK volume by using the appropriate vmdk_type:

$ cinder type-create thick_volume
$ cinder type-key thick_volume set vmware:vmdk_type=thick
$ cinder create --volume-type thick_volume --display-name volume1 1
Clone type

With the VMware VMDK drivers, you can create a volume from another source volume or a snapshot point. The VMware vCenter VMDK driver supports the full and linked/fast clone types. Use the vmware:clone_type extra spec key to specify the clone type. The following table captures the mapping for clone types:

Extra spec entry to clone type mapping
Clone type Extra spec key Extra spec value
full vmware:clone_type full
linked/fast vmware:clone_type linked

If you do not specify the clone type, the default is full.

The following example shows linked cloning from a source volume, which is created from an image:

$ cinder type-create fast_clone
$ cinder type-key fast_clone set vmware:clone_type=linked
$ cinder create --image-id 9cb87f4f-a046-47f5-9b7c-d9487b3c7cd4 \
  --volume-type fast_clone --display-name source-vol 1
$ cinder create --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 \
  --display-name dest-vol 1
Use vCenter storage policies to specify back-end data stores

This section describes how to configure back-end data stores using storage policies. In vCenter 5.5 and greater, you can create one or more storage policies and expose them as a Block Storage volume-type to a vmdk volume. The storage policies are exposed to the vmdk driver through the extra spec property with the vmware:storage_profile key.

For example, assume a storage policy in vCenter named gold_policy, and a Block Storage volume type named vol1 with the extra spec key vmware:storage_profile set to the value gold_policy. Any Block Storage volume creation that uses the vol1 volume type places the volume only in data stores that match the gold_policy storage policy.

The Block Storage back-end configuration for vSphere data stores is automatically determined based on the vCenter configuration. If you configure a connection to connect to vCenter version 5.5 or later in the cinder.conf file, the use of storage policies to configure back-end data stores is automatically supported.

Note

Any data stores that you configure for the Block Storage service must also be configured for the Compute service.

To configure back-end data stores by using storage policies

  1. In vCenter, tag the data stores to be used for the back end.

    OpenStack also supports policies that are created by using vendor-specific capabilities; for example vSAN-specific storage policies.

    Note

    The tag value serves as the policy. For details, see Storage policy-based configuration in vCenter.

  2. Set the extra spec key vmware:storage_profile in the desired Block Storage volume types to the policy name that you created in the previous step.

  3. Optionally, for the vmware_host_version parameter, enter the version number of your vSphere platform. For example, 5.5.

    This setting overrides the default location for the corresponding WSDL file. Among other scenarios, you can use this setting to prevent WSDL error messages during the development phase or to work with a newer version of vCenter.

  4. Complete the other vCenter configuration parameters as appropriate.
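The extra spec from step 2 can be set with the cinder client; a sketch using the gold_policy and vol1 names from the earlier example:

```shell
$ cinder type-create vol1
$ cinder type-key vol1 set vmware:storage_profile=gold_policy
$ cinder create --volume-type vol1 --display-name policy-vol 1
```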

Note

Any volume created without an associated policy (that is, without an associated volume type that specifies the vmware:storage_profile extra spec) has no policy-based placement.

Supported operations

The VMware vCenter VMDK driver supports these operations:

  • Create, delete, attach, and detach volumes.

    Note

    When a volume is attached to an instance, a reconfigure operation is performed on the instance to add the volume’s VMDK to it. The user must manually rescan and mount the device from within the guest operating system.

  • Create, list, and delete volume snapshots.

    Note

    Allowed only if volume is not attached to an instance.

  • Create a volume from a snapshot.

  • Copy an image to a volume.

    Note

    Only images in vmdk disk format with bare container format are supported. The vmware_disktype property of the image can be preallocated, sparse, streamOptimized or thin.

  • Copy a volume to an image.

    Note

    • Allowed only if the volume is not attached to an instance.
    • This operation creates a streamOptimized disk image.
  • Clone a volume.

    Note

    Supported only if the source volume is not attached to an instance.

  • Backup a volume.

    Note

    This operation creates a backup of the volume in streamOptimized disk format.

  • Restore backup to new or existing volume.

    Note

    Supported only if the existing volume doesn’t contain snapshots.

  • Change the type of a volume.

    Note

    This operation is supported only if the volume state is available.

  • Extend a volume.

Storage policy-based configuration in vCenter

You can configure Storage Policy-Based Management (SPBM) profiles for vCenter data stores supporting the Compute, Image service, and Block Storage components of an OpenStack implementation.

In a vSphere OpenStack deployment, SPBM enables you to delegate several data stores for storage, which reduces the risk of running out of storage space. The policy logic selects the data store based on accessibility and available storage space.

Prerequisites
  • Determine the data stores to be used by the SPBM policy.
  • Determine the tag that identifies the data stores in the OpenStack component configuration.
  • Create separate policies or sets of data stores for separate OpenStack components.
Create storage policies in vCenter
  1. In vCenter, create the tag that identifies the data stores:

    1. From the Home screen, click Tags.
    2. Specify a name for the tag.
    3. Specify a tag category. For example, spbm-cinder.
  2. Apply the tag to the data stores to be used by the SPBM policy.

    Note

    For details about creating tags in vSphere, see the vSphere documentation.

  3. In vCenter, create a tag-based storage policy that uses one or more tags to identify a set of data stores.

    Note

    For details about creating storage policies in vSphere, see the vSphere documentation.

Data store selection

If storage policy is enabled, the driver initially selects all the data stores that match the associated storage policy.

If two or more data stores match the storage policy, the driver chooses a data store that is connected to the maximum number of hosts.

In case of ties, the driver chooses the data store with the lowest space utilization, where space utilization is defined by the (1 - freespace/totalspace) metric.

These actions reduce the number of volume migrations while attaching the volume to instances.

The volume must be migrated if the ESX host for the instance cannot access the data store that contains the volume.

Windows iSCSI volume driver

Windows Server 2012 and Windows Storage Server 2012 offer an integrated iSCSI Target service that can be used with OpenStack Block Storage in your stack. Because it is entirely a software solution, consider it in particular for mid-sized networks where the costs of a SAN might be excessive.

The Windows Block Storage driver works with OpenStack Compute on any hypervisor. It includes snapshotting support and the boot from volume feature.

This driver creates volumes backed by fixed-type VHD images on Windows Server 2012 and dynamic-type VHDX on Windows Server 2012 R2, stored locally on a user-specified path. The system uses those images as iSCSI disks and exports them through iSCSI targets. Each volume has its own iSCSI target.

This driver has been tested with Windows Server 2012 and Windows Server 2012 R2 using the Server and Storage Server distributions.

Install the cinder-volume service as well as the required Python components directly onto the Windows node.

You may install and configure cinder-volume and its dependencies manually using the following guide or you may use the Cinder Volume Installer, presented below.

Installing using the OpenStack cinder volume installer

In case you want to avoid all the manual setup, you can use Cloudbase Solutions' installer. You can find it at https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi. It installs an independent Python environment to avoid conflicts with existing applications, and dynamically generates a cinder.conf file based on the parameters you provide.

cinder-volume will be configured to run as a Windows Service, which can be restarted using:

PS C:\> net stop cinder-volume ; net start cinder-volume

The installer can also be used in unattended mode. More details about how to use the installer and its features can be found at https://www.cloudbase.it.

Windows Server configuration

The wintarget service is required to run cinder-volume on Windows. It in turn requires the iSCSI Target Server Windows feature to be installed. You can install the feature by running the following command:

PS C:\> Add-WindowsFeature FS-iSCSITarget-Server

Note

The Windows Server installation requires at least 16 GB of disk space. Additional space is required for the volumes hosted by this node.

For cinder-volume to work properly, you must configure NTP as explained in Configure NTP.

Next, install the requirements as described in Requirements.

Getting the code

Git can be used to download the necessary source code. The installer to run Git on Windows can be downloaded here:

https://git-for-windows.github.io/

Once installed, run the following to clone the OpenStack Block Storage code:

PS C:\> git.exe clone https://git.openstack.org/openstack/cinder
Configure cinder-volume

The cinder.conf file may be placed in C:\etc\cinder. Below is a configuration sample for using the Windows iSCSI Driver:

[DEFAULT]
auth_strategy = keystone
volume_name_template = volume-%s
volume_driver = cinder.volume.drivers.windows.WindowsDriver
glance_api_servers = IP_ADDRESS:9292
rabbit_host = IP_ADDRESS
rabbit_port = 5672
sql_connection = mysql+pymysql://root:Passw0rd@IP_ADDRESS/cinder
windows_iscsi_lun_path = C:\iSCSIVirtualDisks
rabbit_password = Passw0rd
logdir = C:\OpenStack\Log\
image_conversion_dir = C:\ImageConversionDir
debug = True

The following table contains a reference to the only driver-specific option used by the Block Storage Windows driver:

Description of Windows configuration options
Configuration option = Default value Description
[DEFAULT]  
windows_iscsi_lun_path = C:\iSCSIVirtualDisks (String) Path to store VHD backed volumes
Run cinder-volume

After configuring cinder-volume using the cinder.conf file, you may use the following commands to install and run the service (note that you must replace the variables with the proper paths):

PS C:\> python $CinderClonePath\setup.py install
PS C:\> cmd /c "C:\python27\python.exe c:\python27\Scripts\cinder-volume --config-file $CinderConfPath"
X-IO volume driver

The X-IO volume driver for OpenStack Block Storage enables ISE products to be managed by OpenStack Block Storage nodes. This driver can be configured to work with iSCSI and Fibre Channel storage protocols. The X-IO volume driver allows the cloud operator to take advantage of ISE features like Quality of Service (QOS) and Continuous Adaptive Data Placement (CADP). It also supports creating thin volumes and specifying volume media affinity.

Requirements

ISE FW 2.8.0 or ISE FW 3.1.0 is required for OpenStack Block Storage support. The X-IO volume driver will not work with older ISE FW.

Supported operations
  • Create, delete, attach, detach, retype, clone, and extend volumes.
  • Create a volume from snapshot.
  • Create, list, and delete volume snapshots.
  • Manage and unmanage a volume.
  • Get volume statistics.
  • Create a thin provisioned volume.
  • Create volumes with QoS specifications.
Configure X-IO Volume driver

To configure the use of an ISE product with OpenStack Block Storage, modify your cinder.conf file as follows. Be careful to use the one that matches the storage protocol in use:

Fibre Channel
volume_driver = cinder.volume.drivers.xio.XIOISEFCDriver
san_ip = 1.2.3.4              # the address of your ISE REST management interface
san_login = administrator     # your ISE management admin login
san_password = password       # your ISE management admin password
iSCSI
volume_driver = cinder.volume.drivers.xio.XIOISEISCSIDriver
san_ip = 1.2.3.4              # the address of your ISE REST management interface
san_login = administrator     # your ISE management admin login
san_password = password       # your ISE management admin password
iscsi_ip_address = ionet_ip   # ip address to one ISE port connected to the IONET
Optional configuration parameters
Description of X-IO volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
driver_use_ssl = False (Boolean) Tell driver to use SSL for connection to backend storage if the driver supports it.
ise_completion_retries = 30 (Integer) Number of retries to get completion status after issuing a command to ISE.
ise_connection_retries = 5 (Integer) Number of retries (per port) when establishing connection to ISE management port.
ise_raid = 1 (Integer) Raid level for ISE volumes.
ise_retry_interval = 1 (Integer) Interval (secs) between retries.
ise_storage_pool = 1 (Integer) Default storage pool for volumes.
Multipath

The X-IO ISE supports a multipath configuration, but multipath must be enabled on the compute node (see ISE Storage Blade Best Practices Guide). For more information, see X-IO Document Library.
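When multipath is configured on the compute node, the Block Storage service can also be told to use multipath for image transfers; a sketch for cinder.conf (this is the same use_multipath_for_image_xfer option shown in other driver examples in this guide):

```ini
[DEFAULT]
use_multipath_for_image_xfer = True
```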

Volume types

OpenStack Block Storage uses volume types to help the administrator specify attributes for volumes. These attributes are called extra-specs. The X-IO volume driver supports the following extra-specs.

Extra specs
Extra-specs name Valid values Description
Feature:Raid 1, 5 RAID level for volume.
Feature:Pool 1 - n (n being number of pools on ISE) Pool to create volume in.
Affinity:Type cadp, flash, hdd Volume media affinity type.
Alloc:Type 0 (thick), 1 (thin) Allocation type for volume. Thick or thin.
QoS:minIOPS n (value less than maxIOPS) Minimum IOPS setting for volume.
QoS:maxIOPS n (value bigger than minIOPS) Maximum IOPS setting for volume.
QoS:burstIOPS n (value bigger than minIOPS) Burst IOPS setting for volume.
Examples

Create a volume type called xio1-flash for volumes that should reside on SSD storage:

$ cinder type-create xio1-flash
$ cinder type-key xio1-flash set Affinity:Type=flash

Create a volume type called xio1 and set QoS min and max:

$ cinder type-create xio1
$ cinder type-key xio1 set QoS:minIOPS=20
$ cinder type-key xio1 set QoS:maxIOPS=5000
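Similarly, a thin-provisioned volume type can be sketched using the Alloc:Type extra spec from the table above (the xio1-thin name is illustrative):

```shell
$ cinder type-create xio1-thin
$ cinder type-key xio1-thin set Alloc:Type=1
```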
Zadara Storage VPSA volume driver

The Zadara Storage Virtual Private Storage Array (VPSA) is the first software-defined Enterprise Storage-as-a-Service. It is an elastic, private block and file storage system that provides enterprise-grade data protection and data management storage services.

The ZadaraVPSAISCSIDriver volume driver allows the Zadara Storage VPSA to be used as a volume backend storage in OpenStack deployments.

System requirements

To use the Zadara Storage VPSA volume driver, you require:

  • Zadara Storage VPSA version 15.07 and above
  • iSCSI or iSER host interfaces
Supported operations
  • Create, delete, attach, and detach volumes
  • Create, list, and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Clone a volume
  • Extend a volume
  • Migrate a volume with backend assistance
Configuration
  1. Create one or more VPSA pools, or make sure you have existing pools that will be used for volume services. A VPSA pool is identified by its ID (pool-xxxxxxxx). For further details, see the VPSA's user guide.
  2. Adjust the cinder.conf configuration file to define the volume driver name, along with a storage back-end entry for each VPSA pool that will be managed by the Block Storage service. Each back-end entry requires a unique section name, surrounded by square brackets, followed by options in key=value format.

Note

Restart cinder-volume service after modifying cinder.conf.

Sample minimum backend configuration

[DEFAULT]
enabled_backends = vpsa

[vpsa]
zadara_vpsa_host = 172.31.250.10
zadara_vpsa_port = 80
zadara_user = vpsauser
zadara_password = mysecretpassword
zadara_use_iser = false
zadara_vpsa_poolname = pool-00000001
volume_driver = cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
volume_backend_name = vpsa
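
When more than one VPSA pool is managed, define one backend section per pool and list all sections in enabled_backends. A sketch assuming a hypothetical second pool, pool-00000002:

[DEFAULT]
enabled_backends = vpsa, vpsa2

[vpsa2]
zadara_vpsa_host = 172.31.250.10
zadara_vpsa_port = 80
zadara_user = vpsauser
zadara_password = mysecretpassword
zadara_use_iser = false
zadara_vpsa_poolname = pool-00000002
volume_driver = cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
volume_backend_name = vpsa2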
Driver-specific options

This section contains the configuration options that are specific to the Zadara Storage VPSA driver.

Description of Zadara Storage driver configuration options
Configuration option = Default value Description
[DEFAULT]  
zadara_default_snap_policy = False (Boolean) VPSA - Attach snapshot policy for volumes
zadara_password = None (String) VPSA - Password
zadara_use_iser = True (Boolean) VPSA - Use ISER instead of iSCSI
zadara_user = None (String) VPSA - Username
zadara_vol_encrypt = False (Boolean) VPSA - Default encryption policy for volumes
zadara_vol_name_template = OS_%s (String) VPSA - Default template for VPSA volume names
zadara_vpsa_host = None (String) VPSA - Management Host name or IP address
zadara_vpsa_poolname = None (String) VPSA - Storage Pool assigned for volumes
zadara_vpsa_port = None (Port number) VPSA - Port number
zadara_vpsa_use_ssl = False (Boolean) VPSA - Use SSL connection

Note

By design, all volumes created within the VPSA are thin provisioned.

Oracle ZFS Storage Appliance iSCSI driver

Oracle ZFS Storage Appliances (ZFSSAs) provide advanced software to protect data, speed tuning and troubleshooting, and deliver high performance and high availability. Through the Oracle ZFSSA iSCSI Driver, OpenStack Block Storage can use an Oracle ZFSSA as a block storage resource. The driver enables you to create iSCSI volumes that an OpenStack Block Storage server can allocate to any virtual machine running on a compute host.

Requirements

The Oracle ZFSSA iSCSI Driver, version 1.0.0 and later, supports ZFSSA software release 2013.1.2.0 and later.

Supported operations
  • Create, delete, attach, detach, manage, and unmanage volumes.
  • Create and delete snapshots.
  • Create volume from snapshot.
  • Extend a volume.
  • Attach and detach volumes.
  • Get volume stats.
  • Clone volumes.
  • Migrate a volume.
  • Local cache of a bootable volume.
Configuration
  1. Enable RESTful service on the ZFSSA Storage Appliance.

  2. Create a new user on the appliance with the following authorizations:

    scope=stmf - allow_configure=true
    scope=nas - allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true
    

    You can create a role with authorizations as follows:

    zfssa:> configuration roles
    zfssa:configuration roles> role OpenStackRole
    zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack Cinder Driver"
    zfssa:configuration roles OpenStackRole (uncommitted)> commit
    zfssa:configuration roles> select OpenStackRole
    zfssa:configuration roles OpenStackRole> authorizations create
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=stmf
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
    

    You can create a user with a specific role as follows:

    zfssa:> configuration users
    zfssa:configuration users> user cinder
    zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver"
    zfssa:configuration users cinder (uncommitted)> set initial_password=12345
    zfssa:configuration users cinder (uncommitted)> commit
    zfssa:configuration users> select cinder set roles=OpenStackRole
    

    Note

    You can also run this workflow to automate the above tasks.

  3. Ensure that the ZFSSA iSCSI service is online. If the ZFSSA iSCSI service is not online, enable the service by using the BUI, CLI or REST API in the appliance.

    zfssa:> configuration services iscsi
    zfssa:configuration services iscsi> enable
    zfssa:configuration services iscsi> show
    Properties:
    <status>= online
    ...
    

    Define the following required properties in the cinder.conf file:

    volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
    san_ip = myhost
    san_login = username
    san_password = password
    zfssa_pool = mypool
    zfssa_project = myproject
    zfssa_initiator_group = default
    zfssa_target_portal = w.x.y.z:3260
    zfssa_target_interfaces = e1000g0
    

    Optionally, you can define additional properties.
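
    For example, CHAP credentials and LUN properties can be set per back end; the values shown here are illustrative only:

    zfssa_initiator_user = cinder
    zfssa_initiator_password = verysecret
    zfssa_lun_compression = lzjb
    zfssa_lun_volblocksize = 16k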

    Target interfaces can be seen as follows in the CLI:

    zfssa:> configuration net interfaces
    zfssa:configuration net interfaces> show
    Interfaces:
    INTERFACE STATE CLASS LINKS    ADDRS          LABEL
    e1000g0   up    ip    e1000g0  1.10.20.30/24  Untitled Interface
    ...
    

    Note

    Do not use management interfaces for zfssa_target_interfaces.

ZFSSA assisted volume migration

The ZFSSA iSCSI driver supports storage assisted volume migration starting in the Liberty release. This feature uses the remote replication feature on the ZFSSA. Volumes can be migrated between two back ends configured to the same ZFSSA, or between two entirely separate ZFSSAs.

The following conditions must be met in order to use ZFSSA assisted volume migration:

  • Both the source and target backends are configured to ZFSSAs.
  • Remote replication service on the source and target appliance is enabled.
  • The ZFSSA to which the target backend is configured should be configured as a target in the remote replication service of the ZFSSA configured to the source backend. The remote replication target needs to be configured even when the source and the destination for volume migration are the same ZFSSA. Define zfssa_replication_ip in the cinder.conf file of the source backend as the IP address used to register the target ZFSSA in the remote replication service of the source ZFSSA.
  • The name of the iSCSI target group (zfssa_target_group) on the source and the destination ZFSSA is the same.
  • The volume is not attached and is in the available state.

If any of the above conditions are not met, the driver will proceed with generic volume migration.
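
As a sketch, the source back end's cinder.conf section adds zfssa_replication_ip alongside its usual settings; the address shown here is illustrative:

zfssa_replication_ip = 10.10.10.1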

The ZFSSA user on the source and target appliances will need to have additional role authorizations for assisted volume migration to work. In scope nas, set allow_rrtarget and allow_rrsource to true.

zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=nas
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrtarget=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrsource=true
ZFSSA local cache

The local cache feature significantly improves how the ZFSSA drivers serve bootable volumes. With this feature, the first bootable volume created from an image is cached, so that subsequent volumes can be created directly from the cache instead of having the image data transferred over the network multiple times.

The following conditions must be met in order to use the ZFSSA local cache feature:

  • A storage pool needs to be configured.
  • REST and iSCSI services need to be turned on.
  • On an OpenStack controller, cinder.conf needs to contain necessary properties used to configure and set up the ZFSSA iSCSI driver, including the following new properties:
    • zfssa_enable_local_cache: (True/False) To enable/disable the feature.
    • zfssa_cache_project: The ZFSSA project name where cache volumes are stored.

Every cache volume has two additional properties stored as ZFSSA custom schema. It is important that this schema is not altered outside of Block Storage while the driver is in use:

  • image_id: stores the image id as in Image service.
  • updated_at: stores the most current timestamp when the image is updated in Image service.
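
For example, to enable the cache and point it at a dedicated project, the back-end section might contain the following (os-cinder-cache is the documented default project name):

zfssa_enable_local_cache = True
zfssa_cache_project = os-cinder-cache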
Supported extra specs

Extra specs give the OpenStack storage administrator the flexibility to create volumes with characteristics different from those specified in the cinder.conf file. The administrator specifies the volume properties as keys at volume type creation. When a user requests a volume of this volume type, the volume is created with the properties specified as extra specs.

The following extra specs scoped keys are supported by the driver:

  • zfssa:volblocksize
  • zfssa:sparse
  • zfssa:compression
  • zfssa:logbias

Volume types can be created using the cinder type-create command. Extra spec keys can be added using the cinder type-key command.
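
For example, a volume type requesting compressed, sparse LUNs might be created as follows; the type name and spec values are illustrative:

$ cinder type-create zfssa-compressed
$ cinder type-key zfssa-compressed set zfssa:compression=lzjb
$ cinder type-key zfssa-compressed set zfssa:sparse=True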

Driver options

The Oracle ZFSSA iSCSI Driver supports these options:

Description of ZFS Storage Appliance iSCSI driver configuration options
Configuration option = Default value Description
[DEFAULT]  
zfssa_initiator = (String) iSCSI initiator IQNs. (comma separated)
zfssa_initiator_config = (String) iSCSI initiators configuration.
zfssa_initiator_group = (String) iSCSI initiator group.
zfssa_initiator_password = (String) Secret of the iSCSI initiator CHAP user.
zfssa_initiator_user = (String) iSCSI initiator CHAP user (name).
zfssa_lun_compression = off (String) Data compression.
zfssa_lun_logbias = latency (String) Synchronous write bias.
zfssa_lun_sparse = False (Boolean) Flag to enable sparse (thin-provisioned): True, False.
zfssa_lun_volblocksize = 8k (String) Block size.
zfssa_pool = None (String) Storage pool name.
zfssa_project = None (String) Project name.
zfssa_replication_ip = (String) IP address used for replication data. (maybe the same as data ip)
zfssa_rest_timeout = None (Integer) REST connection timeout. (seconds)
zfssa_target_group = tgt-grp (String) iSCSI target group name.
zfssa_target_interfaces = None (String) Network interfaces of iSCSI targets. (comma separated)
zfssa_target_password = (String) Secret of the iSCSI target CHAP user.
zfssa_target_portal = None (String) iSCSI target portal (Data-IP:Port, w.x.y.z:3260).
zfssa_target_user = (String) iSCSI target CHAP user (name).
Oracle ZFS Storage Appliance NFS driver

The Oracle ZFS Storage Appliance (ZFSSA) NFS driver enables the ZFSSA to be used seamlessly as a block storage resource. The driver enables you to create volumes on a ZFS share that is NFS mounted.

Requirements

Oracle ZFS Storage Appliance Software version 2013.1.2.0 or later.

Supported operations
  • Create, delete, attach, detach, manage, and unmanage volumes.
  • Create and delete snapshots.
  • Create a volume from a snapshot.
  • Extend a volume.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Volume migration.
  • Local cache of a bootable volume
Appliance configuration

Appliance configuration using the command-line interface (CLI) is described below. To access the CLI, ensure SSH remote access is enabled, which is the default. You can also perform configuration using the browser user interface (BUI) or the RESTful API. Please refer to the Oracle ZFS Storage Appliance documentation for details on how to configure the Oracle ZFS Storage Appliance using the BUI, CLI, and RESTful API.

  1. Log in to the Oracle ZFS Storage Appliance CLI and enable the REST service. The REST service must remain online for this driver to function.

    zfssa:> configuration services rest enable
    
  2. Create a new storage pool on the appliance if you do not want to use an existing one. This storage pool is named 'mypool' for the sake of this documentation.

  3. Create a new project and share in the storage pool (mypool) if you do not want to use existing ones. This driver will create a project and share by the names specified in the cinder.conf file if a project and share by those names do not already exist in the storage pool (mypool). The project and share are named NFSProject and nfs_share in the sample cinder.conf entries below.

  4. To perform driver operations, create a role with the following authorizations:

    scope=svc - allow_administer=true, allow_restart=true, allow_configure=true
    scope=nas - pool=pool_name, project=project_name, share=share_name, allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true
    

    The following examples show how to create a role with authorizations.

    zfssa:> configuration roles
    zfssa:configuration roles> role OpenStackRole
    zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack NFS Cinder Driver"
    zfssa:configuration roles OpenStackRole (uncommitted)> commit
    zfssa:configuration roles> select OpenStackRole
    zfssa:configuration roles OpenStackRole> authorizations create
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=svc
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_administer=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_restart=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
    
    zfssa:> configuration roles OpenStackRole authorizations> set scope=nas
    

    The following properties need to be set when the scope of this role needs to be limited to a pool (mypool), a project (NFSProject), and a share (nfs_share) created in the steps above. This prevents the user assigned to this role from modifying other pools, projects, and shares.

    zfssa:configuration roles OpenStackRole auth (uncommitted)> set pool=mypool
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set project=NFSProject
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set share=nfs_share
    
  5. The following properties only need to be set when a share and project have not been created following the steps above and you wish to allow the driver to create them for you.

    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createProject=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createShare=true
    
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_clone=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeSpaceProps=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_destroy=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rollback=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_takeSnap=true
    zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
    
  6. Create a new user or modify an existing one and assign the new role to the user.

    The following example shows how to create a new user and assign the new role to the user.

    zfssa:> configuration users
    zfssa:configuration users> user cinder
    zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver"
    zfssa:configuration users cinder (uncommitted)> set initial_password=12345
    zfssa:configuration users cinder (uncommitted)> commit
    zfssa:configuration users> select cinder set roles=OpenStackRole
    
  7. Ensure that NFS and HTTP services on the appliance are online. Note the HTTPS port number for later entry in the cinder service configuration file (cinder.conf). This driver uses WebDAV over HTTPS to create snapshots and clones of volumes, and therefore needs to have the HTTP service online.

    The following example illustrates enabling the services and showing their properties.

    zfssa:> configuration services nfs
    zfssa:configuration services nfs> enable
    zfssa:configuration services nfs> show
    Properties:
    <status>= online
    ...
    
    zfssa:configuration services http> enable
    zfssa:configuration services http> show
    Properties:
    <status>= online
    require_login = true
    protocols = http/https
    listen_port = 80
    https_port = 443
    
  8. Create a network interface to be used exclusively for data. An existing network interface may also be used. The following example illustrates how to make a network interface for data traffic flow only.

    Note

    For better performance and reliability, it is recommended to configure a separate subnet exclusively for data traffic in your cloud environment.

    zfssa:> configuration net interfaces
    zfssa:configuration net interfaces> select igbx
    zfssa:configuration net interfaces igbx> set admin=false
    zfssa:configuration net interfaces igbx> commit
    
  9. For clustered controller systems, the following verification is required in addition to the above steps. Skip this step if a standalone system is used.

    zfssa:> configuration cluster resources list
    

    Verify that both the newly created pool and the network interface are of type singleton and are not locked to the current controller. This approach ensures that the pool and the interface used for data always belong to the active controller, regardless of the current state of the cluster. Verify that both the network interface used for management and data, and the storage pool belong to the same head.

    Note

    There will be a short service interruption during failback/takeover, but once the process is complete, the driver should be able to access the ZFSSA for data as well as for management.

Cinder service configuration
  1. Define the following required properties in the cinder.conf configuration file:

    volume_driver = cinder.volume.drivers.zfssa.zfssanfs.ZFSSANFSDriver
    san_ip = myhost
    san_login = username
    san_password = password
    zfssa_data_ip = mydata
    zfssa_nfs_pool = mypool
    

    Note

    Management interface san_ip can be used instead of zfssa_data_ip, but it is not recommended.

  2. You can also define the following additional properties in the cinder.conf configuration file:

    zfssa_nfs_project = NFSProject
    zfssa_nfs_share = nfs_share
    zfssa_nfs_mount_options =
    zfssa_nfs_share_compression = off
    zfssa_nfs_share_logbias = latency
    zfssa_https_port = 443
    

    Note

    The driver does not use the file specified in the nfs_shares_config option.

ZFSSA local cache

The local cache feature significantly improves how the ZFSSA drivers serve bootable volumes. With this feature, the first bootable volume created from an image is cached, so that subsequent volumes can be created directly from the cache instead of having the image data transferred over the network multiple times.

The following conditions must be met in order to use the ZFSSA local cache feature:

  • A storage pool needs to be configured.

  • REST and NFS services need to be turned on.

  • On an OpenStack controller, cinder.conf needs to contain necessary properties used to configure and set up the ZFSSA NFS driver, including the following new properties:

    zfssa_enable_local_cache

    (True/False) To enable/disable the feature.

    zfssa_cache_directory

    The directory name inside zfssa_nfs_share where cache volumes are stored.

Every cache volume has two additional properties stored as WebDAV properties. It is important that they are not altered outside of Block Storage when the driver is in use:

image_id
stores the image id as in Image service.
updated_at
stores the most current timestamp when the image is updated in Image service.
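
For example, to enable the cache with its documented default directory name, the back-end section might contain:

zfssa_enable_local_cache = True
zfssa_cache_directory = os-cinder-cache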
Driver options

The Oracle ZFS Storage Appliance NFS driver supports these options:

Description of ZFS Storage Appliance NFS driver configuration options
Configuration option = Default value Description
[DEFAULT]  
zfssa_cache_directory = os-cinder-cache (String) Name of directory inside zfssa_nfs_share where cache volumes are stored.
zfssa_cache_project = os-cinder-cache (String) Name of ZFSSA project where cache volumes are stored.
zfssa_data_ip = None (String) Data path IP address
zfssa_enable_local_cache = True (Boolean) Flag to enable local caching: True, False.
zfssa_https_port = 443 (String) HTTPS port number
zfssa_manage_policy = loose (String) Driver policy for volume manage.
zfssa_nfs_mount_options = (String) Options to be passed while mounting share over nfs
zfssa_nfs_pool = (String) Storage pool name.
zfssa_nfs_project = NFSProject (String) Project name.
zfssa_nfs_share = nfs_share (String) Share name.
zfssa_nfs_share_compression = off (String) Data compression.
zfssa_nfs_share_logbias = latency (String) Synchronous write bias-latency, throughput.
zfssa_rest_timeout = None (Integer) REST connection timeout. (seconds)

This driver shares additional NFS configuration options with the generic NFS driver. For a description of these, see Description of NFS storage configuration options.

ZTE cinder drivers

The ZTE Cinder drivers allow ZTE KS3200 or KU5200 arrays to be used for Block Storage in OpenStack deployments.

System requirements

To use the ZTE drivers, the following prerequisites must be met:

  • ZTE KS3200 or KU5200 array with:
    • iSCSI or FC interfaces
    • 30B2 firmware or later
  • Network connectivity between the OpenStack host and the array management interfaces
  • HTTPS or HTTP must be enabled on the array
Supported operations
  • Create, delete, attach, and detach volumes.
  • Create, list, and delete volume snapshots.
  • Create a volume from a snapshot.
  • Copy an image to a volume.
  • Copy a volume to an image.
  • Clone a volume.
  • Extend a volume.
  • Migrate a volume with back-end assistance.
  • Retype a volume.
  • Manage and unmanage a volume.
Configuring the array
  1. Verify that the array can be managed using an HTTPS connection. HTTP can also be used if zte_api_protocol=http is placed into the appropriate sections of the cinder.conf file.

    Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.

  2. Edit the cinder.conf file to define a storage back-end entry for each storage pool on the array that will be managed by OpenStack. Each entry consists of a unique section name, surrounded by square brackets, followed by options specified in key=value format.

    • The zte_backend_name value specifies the name of the storage pool on the array.
    • The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
    • The rest of the options are repeated for each storage pool in a given array: the appropriate cinder driver name; the IP address or host name of the array management interface; the user name and password of an array user account with manage privileges; and the iSCSI IP addresses for the array, if using the iSCSI transport protocol.

    In the examples below, two back ends are defined, one for pool A and one for pool B, with a common volume_backend_name. This allows a single volume type definition to be used to allocate volumes from both pools.

    Example: iSCSI back-end entries

    [pool-a]
    zte_backend_name = A
    volume_backend_name = zte-array
    volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    zte_iscsi_ips = 10.2.3.4,10.2.3.5
    
    [pool-b]
    zte_backend_name = B
    volume_backend_name = zte-array
    volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    zte_iscsi_ips = 10.2.3.4,10.2.3.5
    

    Example: Fibre Channel back end entries

    [pool-a]
    zte_backend_name = A
    volume_backend_name = zte-array
    volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
    [pool-b]
    zte_backend_name = B
    volume_backend_name = zte-array
    volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
    san_ip = 10.1.2.3
    san_login = manage
    san_password = !manage
    
  3. If HTTPS is not enabled in the array, include zte_api_protocol = http in each of the back-end definitions.

  4. If HTTPS is enabled, you can enable certificate verification with the option zte_verify_certificate=True. You may also use the zte_verify_certificate_path parameter to specify the path to a CA_BUNDLE file containing CAs other than those in the default list.
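
    For example, a back-end entry with certificate verification enabled might include the following; the CA bundle path is illustrative:

    zte_verify_certificate = True
    zte_verify_certificate_path = /etc/ssl/certs/zte-ca-bundle.pem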

  5. Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.

    Example: [DEFAULT] section changes

    [DEFAULT]
    ...
    enabled_backends = pool-a,pool-b
    default_volume_type = zte
    ...
    
  6. Create a new volume type for each distinct volume_backend_name value that you added to the cinder.conf file. The example below assumes that the same volume_backend_name=zte-array option was specified in all of the entries, and specifies that the volume type zte can be used to allocate volumes from any of them.

    Example: Creating a volume type

    $ cinder type-create zte
    $ cinder type-key zte set volume_backend_name=zte-array
    
  7. After modifying the cinder.conf file, restart the cinder-volume service.

Driver-specific options

The following table contains the configuration options that are specific to the ZTE drivers.

Description of Zte volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
zteAheadReadSize = 8 (Integer) Cache readahead size.
zteCachePolicy = 1 (Integer) Cache policy. 0, Write Back; 1, Write Through.
zteChunkSize = 4 (Integer) Virtual block size of pool. Unit: KB. Valid values: 4, 8, 16, 32, 64, 128, 256, 512.
zteControllerIP0 = None (IP) Main controller IP.
zteControllerIP1 = None (IP) Slave controller IP.
zteLocalIP = None (IP) Local IP.
ztePoolVoAllocatedPolicy = 0 (Integer) Pool volume allocated policy. 0, Auto; 1, High Performance Tier First; 2, Performance Tier First; 3, Capacity Tier First.
ztePoolVolAlarmStopAllocatedFlag = 0 (Integer) Pool volume alarm stop allocated flag.
ztePoolVolAlarmThreshold = 0 (Integer) Pool volume alarm threshold. [0, 100]
ztePoolVolInitAllocatedCapacity = 0 (Integer) Pool volume initial allocated capacity. Unit: KB.
ztePoolVolIsThin = False (Integer) Whether it is a thin volume.
ztePoolVolMovePolicy = 0 (Integer) Pool volume move policy. 0, Auto; 1, Highest Available; 2, Lowest Available; 3, No Relocation.
zteSSDCacheSwitch = 1 (Integer) SSD cache switch. 0, OFF; 1, ON.
zteStoragePool = (List) Pool name list.
zteUserName = None (String) User name.
zteUserPassword = None (String) User password.

To use different volume drivers for the cinder-volume service, use the parameters described in these sections.

The volume drivers are included in the Block Storage repository. To set a volume driver, use the volume_driver flag. The default is:

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

Backup drivers

Ceph backup driver

The Ceph backup driver backs up volumes of any type to a Ceph back-end store. The driver can also detect whether the volume to be backed up is a Ceph RBD volume, and if so, it tries to perform incremental and differential backups.

For source Ceph RBD volumes, you can perform backups within the same Ceph pool (not recommended). You can also perform backups between different Ceph pools and between different Ceph clusters.

At the time of writing, differential backup support in Ceph/librbd was quite new. This driver attempts a differential backup in the first instance. If the differential backup fails, the driver falls back to full backup/copy.

If incremental backups are used, multiple backups of the same volume are stored as snapshots so that minimal space is consumed in the backup store. It takes far less time to restore a volume than to take a full copy.

Note

Block Storage enables you to:

  • Restore to a new volume, which is the default and recommended action.
  • Restore to the original volume from which the backup was taken. The restore action takes a full copy because this is the safest action.

To enable the Ceph backup driver, include the following option in the cinder.conf file:

backup_driver = cinder.backup.drivers.ceph

The following configuration options are available for the Ceph backup driver.

Description of Ceph backup driver configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_ceph_chunk_size = 134217728 (Integer) The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store.
backup_ceph_conf = /etc/ceph/ceph.conf (String) Ceph configuration file to use.
backup_ceph_pool = backups (String) The Ceph pool where volume backups are stored.
backup_ceph_stripe_count = 0 (Integer) RBD stripe count to use when creating a backup image.
backup_ceph_stripe_unit = 0 (Integer) RBD stripe unit to use when creating a backup image.
backup_ceph_user = cinder (String) The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None.
restore_discard_excess_bytes = True (Boolean) If True, always discard excess bytes when restoring volumes i.e. pad with zeroes.

This example shows the default options for the Ceph backup driver.

backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
GlusterFS backup driver

The GlusterFS backup driver backs up volumes of any type to GlusterFS.

To enable the GlusterFS backup driver, include the following option in the cinder.conf file:

backup_driver = cinder.backup.drivers.glusterfs

The following configuration options are available for the GlusterFS backup driver.

Description of GlusterFS backup driver configuration options
Configuration option = Default value Description
[DEFAULT]  
glusterfs_backup_mount_point = $state_path/backup_mount (String) Base dir containing mount point for gluster share.
glusterfs_backup_share = None (String) GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol
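
A minimal configuration might look like the following; the share address is illustrative:

backup_driver = cinder.backup.drivers.glusterfs
glusterfs_backup_share = 10.0.0.5:backup_vol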
NFS backup driver

The backup driver for the NFS back end backs up volumes of any type to an NFS exported backup repository.

To enable the NFS backup driver, include the following option in the [DEFAULT] section of the cinder.conf file:

backup_driver = cinder.backup.drivers.nfs

The following configuration options are available for the NFS back-end backup driver.

Description of NFS backup driver configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_container = None (String) Custom directory to use for backups.
backup_enable_progress_timer = True (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer.
backup_file_size = 1999994880 (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
backup_mount_options = None (String) Mount options passed to the NFS client. See NFS man page for details.
backup_mount_point_base = $state_path/backup_mount (String) Base dir containing mount point for NFS share.
backup_sha_block_size_bytes = 32768 (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size must be a multiple of backup_sha_block_size_bytes.
backup_share = None (String) NFS share in hostname:path, ipv4addr:path, or “[ipv6addr]:path” format.
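Taken together, a hypothetical NFS backup configuration might look like this; the server address and export path are placeholders:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.nfs
# NFS share to back up to (placeholder host and export path).
backup_share = 10.0.0.5:/srv/cinder_backups
# backup_file_size must be a multiple of backup_sha_block_size_bytes:
# 1999994880 = 61035 * 32768, so the defaults already satisfy this.
backup_file_size = 1999994880
backup_sha_block_size_bytes = 32768
```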
POSIX file systems backup driver

The POSIX file systems backup driver backs up volumes of any type to POSIX file systems.

To enable the POSIX file systems backup driver, include the following option in the [DEFAULT] section of the cinder.conf file:

backup_driver = cinder.backup.drivers.posix

The following configuration options are available for the POSIX file systems backup driver.

Description of POSIX backup driver configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_container = None (String) Custom directory to use for backups.
backup_enable_progress_timer = True (Boolean) Enable or disable the timer that sends periodic progress notifications to Ceilometer while backing up the volume to the back-end storage. The default value is True (timer enabled).
backup_file_size = 1999994880 (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, it is backed up into multiple files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
backup_posix_path = $state_path/backup (String) Path specifying where to store backups.
backup_sha_block_size_bytes = 32768 (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size must be a multiple of backup_sha_block_size_bytes.
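For example, a minimal POSIX backup configuration could be the following; the path is a placeholder:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.posix
# Directory where backup files are written (placeholder path).
backup_posix_path = /var/lib/cinder/backup
```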
Swift backup driver

The backup driver for the swift back end performs a volume backup to an object storage system.

To enable the swift backup driver, include the following option in the [DEFAULT] section of the cinder.conf file:

backup_driver = cinder.backup.drivers.swift

The following configuration options are available for the Swift back-end backup driver.

Description of Swift backup driver configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_swift_auth = per_user (String) Swift authentication mechanism
backup_swift_auth_version = 1 (String) Swift authentication version. Specify “1” for auth 1.0, “2” for auth 2.0, or “3” for auth 3.0
backup_swift_block_size = 32768 (Integer) The size in bytes that changes are tracked for incremental backups. backup_swift_object_size must be a multiple of backup_swift_block_size.
backup_swift_ca_cert_file = None (String) Location of the CA certificate file to use for swift client requests.
backup_swift_container = volumebackups (String) The default Swift container to use
backup_swift_enable_progress_timer = True (Boolean) Enable or disable the timer that sends periodic progress notifications to Ceilometer while backing up the volume to the Swift back-end storage. The default value is True (timer enabled).
backup_swift_key = None (String) Swift key for authentication
backup_swift_object_size = 52428800 (Integer) The size in bytes of Swift backup objects
backup_swift_project = None (String) Swift project/account name. Required when connecting to an auth 3.0 system
backup_swift_project_domain = None (String) Swift project domain name. Required when connecting to an auth 3.0 system
backup_swift_retry_attempts = 3 (Integer) The number of retries to make for Swift operations
backup_swift_retry_backoff = 2 (Integer) The backoff time in seconds between Swift retries
backup_swift_tenant = None (String) Swift tenant/account name. Required when connecting to an auth 2.0 system
backup_swift_url = None (String) The URL of the Swift endpoint
backup_swift_user = None (String) Swift user name
backup_swift_user_domain = None (String) Swift user domain name. Required when connecting to an auth 3.0 system
keystone_catalog_info = identity:Identity Service:publicURL (String) Info to match when looking for keystone in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_auth_url is unset
swift_catalog_info = object-store:swift:publicURL (String) Info to match when looking for swift in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset

To select the 1.0, 2.0, or 3.0 authentication version, set backup_swift_auth_version to 1, 2, or 3, respectively. For example:

backup_swift_auth_version = 2

In addition, the 2.0 authentication system requires the definition of the backup_swift_tenant setting:

backup_swift_tenant = <None>

This example shows the default options for the Swift back-end backup driver.

backup_swift_url = http://localhost:8080/v1/AUTH_
backup_swift_auth_url = http://localhost:5000/v3
backup_swift_auth = per_user
backup_swift_auth_version = 1
backup_swift_user = <None>
backup_swift_user_domain = <None>
backup_swift_key = <None>
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_project = <None>
backup_swift_project_domain = <None>
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
backup_compression_algorithm = zlib
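For comparison, a hypothetical single-user configuration against a Keystone v3 (auth 3.0) endpoint might look like the following; the URL, names, and key are placeholder values:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.swift
# single_user sends one fixed set of credentials instead of each user's own.
backup_swift_auth = single_user
backup_swift_auth_version = 3
# Placeholder Keystone endpoint and credentials; substitute your own.
backup_swift_auth_url = http://keystone.example.com:5000/v3
backup_swift_user = backup_user
backup_swift_user_domain = default
backup_swift_key = SWIFT_PASS
# Project and project domain are required for auth 3.0.
backup_swift_project = service
backup_swift_project_domain = default
```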
Google Cloud Storage backup driver

The Google Cloud Storage (GCS) backup driver backs up volumes of any type to Google Cloud Storage.

To enable the GCS backup driver, include the following option in the [DEFAULT] section of the cinder.conf file:

backup_driver = cinder.backup.drivers.google

The following configuration options are available for the GCS backup driver.

Description of GCS backup driver configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_gcs_block_size = 32768 (Integer) The size in bytes that changes are tracked for incremental backups. backup_gcs_object_size must be a multiple of backup_gcs_block_size.
backup_gcs_bucket = None (String) The GCS bucket to use.
backup_gcs_bucket_location = US (String) Location of GCS bucket.
backup_gcs_credential_file = None (String) Absolute path of GCS service account credential file.
backup_gcs_enable_progress_timer = True (Boolean) Enable or disable the timer that sends periodic progress notifications to Ceilometer while backing up the volume to the GCS back-end storage. The default value is True (timer enabled).
backup_gcs_num_retries = 3 (Integer) Number of times to retry.
backup_gcs_object_size = 52428800 (Integer) The size in bytes of GCS backup objects.
backup_gcs_project_id = None (String) Owner project id for GCS bucket.
backup_gcs_proxy_url = None (URI) URL for http proxy access.
backup_gcs_reader_chunk_size = 2097152 (Integer) GCS object will be downloaded in chunks of bytes.
backup_gcs_retry_error_codes = 429 (List) List of GCS error codes.
backup_gcs_storage_class = NEARLINE (String) Storage class of GCS bucket.
backup_gcs_user_agent = gcscinder (String) Http user-agent string for gcs api.
backup_gcs_writer_chunk_size = 2097152 (Integer) GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the file is to be uploaded as a single chunk.
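For example, a minimal GCS backup configuration might look like this; the bucket, project ID, and credential path are placeholders:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.google
# Placeholder bucket and project; substitute your own.
backup_gcs_bucket = example-cinder-backups
backup_gcs_project_id = example-project
# Absolute path to the GCS service account credential file (placeholder).
backup_gcs_credential_file = /etc/cinder/gcs-credentials.json
```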
IBM Tivoli Storage Manager backup driver

The IBM Tivoli Storage Manager (TSM) backup driver enables performing volume backups to a TSM server.

The TSM client should be installed and configured on the machine running the cinder-backup service. See the IBM Tivoli Storage Manager Backup-Archive Client Installation and User’s Guide for details on installing the TSM client.

To enable the IBM TSM backup driver, include the following option in the [DEFAULT] section of cinder.conf:

backup_driver = cinder.backup.drivers.tsm

The following configuration options are available for the TSM backup driver.

Description of IBM Tivoli Storage Manager backup driver configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_tsm_compression = True (Boolean) Enable or Disable compression for backups
backup_tsm_password = password (String) TSM password for the running username
backup_tsm_volume_prefix = backup (String) Volume prefix for the backup id when backing up to TSM

This example shows the default options for the TSM backup driver.

backup_tsm_volume_prefix = backup
backup_tsm_password = password
backup_tsm_compression = True

This section describes how to configure the cinder-backup service and its drivers.

The backup drivers are included with the Block Storage repository. To set a backup driver, use the backup_driver flag. By default, no backup driver is enabled.

Block Storage schedulers

The Block Storage service uses the cinder-scheduler service to determine how to dispatch block storage requests.

For more information, see Cinder Scheduler Filters and Cinder Scheduler Weights.

Log files used by Block Storage

Each Block Storage service stores its log file in the /var/log/cinder/ directory of the host on which the service runs.

Log files used by Block Storage services
Log file Service/interface (for CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise) Service/interface (for Ubuntu and Debian)
api.log openstack-cinder-api cinder-api
cinder-manage.log cinder-manage cinder-manage
scheduler.log openstack-cinder-scheduler cinder-scheduler
volume.log openstack-cinder-volume cinder-volume

Fibre Channel Zone Manager

The Fibre Channel Zone Manager allows FC SAN Zone/Access control management in conjunction with Fibre Channel block storage. The configuration of Fibre Channel Zone Manager and various zone drivers are described in this section.

Configure Block Storage to use Fibre Channel Zone Manager

If Block Storage is configured to use a Fibre Channel volume driver that supports Zone Manager, update cinder.conf to add the following configuration options to enable Fibre Channel Zone Manager.

Make the following changes in the /etc/cinder/cinder.conf file.

Description of zoning configuration options
Configuration option = Default value Description
[DEFAULT]  
zoning_mode = None (String) FC Zoning mode configured
[fc-zone-manager]  
enable_unsupported_driver = False (Boolean) Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. Such drivers are also considered deprecated and may be removed in the next release.
fc_fabric_names = None (String) Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService (String) FC SAN Lookup Service
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver (String) FC Zone Driver responsible for zone management
zoning_policy = initiator-target (String) Zoning policy configured by user; valid values include “initiator-target” or “initiator”

To use different Fibre Channel Zone Drivers, use the parameters described in this section.

Note

When a multi-back-end configuration is used, provide the zoning_mode configuration option as part of the volume driver configuration, in the section where the volume_driver option is specified.

Note

The default value of zoning_mode is None; it must be changed to fabric to allow fabric zoning.

Note

zoning_policy can be configured as initiator-target or initiator.

Brocade Fibre Channel Zone Driver

Brocade Fibre Channel Zone Driver performs zoning operations through HTTP, HTTPS, or SSH.

Set the following options in the cinder.conf configuration file.

Description of brocade zoning manager configuration options
Configuration option = Default value Description
[fc-zone-manager]  
brcd_sb_connector = HTTP (String) Southbound connector for zoning operation

Configure SAN fabric parameters in the form of fabric groups as described in the example below:

Description of brocade zoning fabrics configuration options
Configuration option = Default value Description
[BRCD_FABRIC_EXAMPLE]  
fc_fabric_address = (String) Management IP of fabric.
fc_fabric_password = (String) Password for user.
fc_fabric_port = 22 (Port number) Connecting port
fc_fabric_ssh_cert_path = (String) Local SSH certificate Path.
fc_fabric_user = (String) Fabric user ID.
fc_southbound_protocol = HTTP (String) South bound connector for the fabric.
fc_virtual_fabric_id = None (String) Virtual Fabric ID.
principal_switch_wwn = None (String) DEPRECATED: Principal switch WWN of the fabric. This option is not used anymore.
zone_activate = True (Boolean) Overridden zoning activation state.
zone_name_prefix = openstack (String) Overridden zone name prefix.
zoning_policy = initiator-target (String) Overridden zoning policy.

Note

Define a fabric group for each fabric, using the fabric names listed in the fc_fabric_names configuration option as the group names.

Note

To define a fabric group for a switch which has Virtual Fabrics enabled, include the fc_virtual_fabric_id configuration option and fc_southbound_protocol configuration option set to HTTP or HTTPS in the fabric group. Zoning on VF enabled fabric using SSH southbound protocol is not supported.
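Putting the pieces together, a hypothetical Brocade configuration might look like the following; the fabric address, credentials, and fabric name are placeholder values:

```ini
[DEFAULT]
zoning_mode = fabric

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
fc_fabric_names = BRCD_FABRIC_EXAMPLE
brcd_sb_connector = HTTP

[BRCD_FABRIC_EXAMPLE]
# Placeholder management IP and credentials for this fabric.
fc_fabric_address = 10.0.0.10
fc_fabric_user = zoneadmin
fc_fabric_password = password
fc_southbound_protocol = HTTP
zoning_policy = initiator-target
```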

System requirements

Brocade Fibre Channel Zone Driver requires firmware version FOS v6.4 or higher.

As a best practice for zone management, use a user account with zoneadmin role. Users with admin role (including the default admin user account) are limited to a maximum of two concurrent SSH sessions.

For information about how to manage Brocade Fibre Channel switches, see the Brocade Fabric OS user documentation.

Cisco Fibre Channel Zone Driver

The Cisco Fibre Channel Zone Driver automates zoning operations through SSH. Configure the Cisco zone driver, the Cisco southbound connector, the FC SAN lookup service, and the fabric name.

Set the following options in the cinder.conf configuration file.

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver.CiscoFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.cisco.cisco_fc_san_lookup_service.CiscoFCSanLookupService
fc_fabric_names = CISCO_FABRIC_EXAMPLE
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
Description of cisco zoning manager configuration options
Configuration option = Default value Description
[fc-zone-manager]  
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI (String) Southbound connector for zoning operation

Configure SAN fabric parameters in the form of fabric groups as described in the example below:

Description of cisco zoning fabrics configuration options
Configuration option = Default value Description
[CISCO_FABRIC_EXAMPLE]  
cisco_fc_fabric_address = (String) Management IP of fabric
cisco_fc_fabric_password = (String) Password for user
cisco_fc_fabric_port = 22 (Port number) Connecting port
cisco_fc_fabric_user = (String) Fabric user ID
cisco_zone_activate = True (Boolean) overridden zoning activation state
cisco_zone_name_prefix = None (String) overridden zone name prefix
cisco_zoning_policy = initiator-target (String) overridden zoning policy
cisco_zoning_vsan = None (String) VSAN of the Fabric

Note

Define a fabric group for each fabric, using the fabric names listed in the fc_fabric_names configuration option as the group names.

The Cisco Fibre Channel Zone Driver supports basic and enhanced zoning modes. The zoning VSAN must exist with an active zone set whose name is the same as the fc_fabric_names option.

System requirements

Cisco MDS 9000 Family Switches.

Cisco MDS NX-OS Release 6.2(9) or later.

For information about how to manage Cisco Fibre Channel switches, see the Cisco MDS 9000 user documentation.

Nested quotas

Nested quotas change how OpenStack services (such as Block Storage and Compute) handle their quota resources by making them hierarchy-aware. The main reason for this change is to fully support the hierarchical multi-tenancy concept, which was introduced in keystone in the Kilo release.

Once you have a project hierarchy created in keystone, nested quotas let you define how much of a project’s quota you want to give to its subprojects. In that way, hierarchical projects can have hierarchical quotas (also known as nested quotas).

Projects and subprojects have similar behaviors, but they differ from each other when it comes to default quota values. The default quota value for resources in a subproject is 0, so that when a subproject is created it will not consume all of its parent’s quota.

In order to keep track of how much of each quota was allocated to a subproject, an allocated column was added to the quotas table. This column is updated after every quota delete and update operation.

This example shows you how to use nested quotas.

Note

Assume that you have created a project hierarchy in keystone, such as follows:

+-----------+
|           |
|     A     |
|    / \    |
|   B   C   |
|  /        |
| D         |
+-----------+
Getting default quotas
  1. Get the quota for root projects.

    Use the cinder quota-show command and specify:

    • The TENANT_ID of the relevant project. In this case, the id of project A.

      $ cinder quota-show TENANT_ID
      +-----------------------+-------+
      |        Property       | Value |
      +-----------------------+-------+
      |    backup_gigabytes   |  1000 |
      |        backups        |   10  |
      |       gigabytes       |  1000 |
      | gigabytes_lvmdriver-1 |   -1  |
      |  per_volume_gigabytes |   -1  |
      |       snapshots       |   10  |
      | snapshots_lvmdriver-1 |   -1  |
      |        volumes        |   10  |
      |  volumes_lvmdriver-1  |   -1  |
      +-----------------------+-------+
      

      Note

      This command returns the default values for resources. This is because the quotas for this project were not explicitly set.

  2. Get the quota for subprojects.

    In this case, use the same quota-show command and specify:

    • The TENANT_ID of the relevant project. In this case the id of project B, which is a child of A.

      $ cinder quota-show TENANT_ID
      +-----------------------+-------+
      |        Property       | Value |
      +-----------------------+-------+
      |    backup_gigabytes   |   0   |
      |        backups        |   0   |
      |       gigabytes       |   0   |
      | gigabytes_lvmdriver-1 |   0   |
      |  per_volume_gigabytes |   0   |
      |       snapshots       |   0   |
      | snapshots_lvmdriver-1 |   0   |
      |        volumes        |   0   |
      |  volumes_lvmdriver-1  |   0   |
      +-----------------------+-------+
      

      Note

      In this case, 0 was the value returned as the quota for all the resources. This is because project B is a subproject of A; the default quota value for a subproject is 0, so that it does not consume all the quota of its parent project.

Setting the quotas for subprojects

Now that the projects have been created, assume that the admin of project B wants to use it. First, you need to set the quota limit of the project, because as a subproject it does not have quotas allocated by default.

In this example, when all of the parent project's quota is allocated to its subprojects, the user will not be able to create more resources in the parent project.

  1. Update the quota of B.

    Use the quota-update command and specify:

    • The TENANT_ID of the relevant project. In this case the id of project B.

    • The --volumes option, followed by the number to which you wish to increase the volumes quota.

      $ cinder quota-update TENANT_ID --volumes 10
      +-----------------------+-------+
      |        Property       | Value |
      +-----------------------+-------+
      |    backup_gigabytes   |   0   |
      |        backups        |   0   |
      |       gigabytes       |   0   |
      | gigabytes_lvmdriver-1 |   0   |
      |  per_volume_gigabytes |   0   |
      |       snapshots       |   0   |
      | snapshots_lvmdriver-1 |   0   |
      |        volumes        |   10  |
      |  volumes_lvmdriver-1  |   0   |
      +-----------------------+-------+
      

      Note

      The volumes resource quota is updated.

  2. Try to create a volume in project A.

    Use the create command and specify:

    • The SIZE of the volume that will be created;

    • The NAME of the volume.

      $ cinder create --size SIZE NAME
      VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded for quota 'volumes'. (HTTP 413) (Request-ID: req-f6f7cc89-998e-4a82-803d-c73c8ee2016c)
      

      Note

      As the entirety of project A’s volumes quota has been assigned to project B, it is treated as if all of the quota has been used. This is true even when project B has not created any volumes.

See cinder nested quota spec and hierarchical multi-tenancy spec for details.

Volume encryption supported by the key manager

We recommend the Key management service (barbican) for storing encryption keys used by the OpenStack volume encryption feature. It can be enabled by updating cinder.conf and nova.conf.

Initial configuration

Configuration changes need to be made to any node running the cinder-api or nova-compute service.

Steps to update cinder-api servers:

  1. Edit the /etc/cinder/cinder.conf file to use Key management service as follows:

    • Look for the [key_manager] section.

    • Enter a new line directly below [key_manager] with the following:

      api_class = cinder.key_manager.barbican.BarbicanKeyManager
      

      Note

      Use a ‘#’ prefix to comment out the line in this section that begins with ‘fixed_key’.

  2. Restart cinder-api.
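After these edits, the relevant section of /etc/cinder/cinder.conf would look similar to this (the commented-out fixed_key value is left as-is):

```ini
[key_manager]
api_class = cinder.key_manager.barbican.BarbicanKeyManager
# fixed_key = ...   (commented out so barbican is used instead)
```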

Update nova-compute servers:

  1. Install the cryptsetup utility and the python-barbicanclient Python package.

  2. Set up the Key Manager service by editing /etc/nova/nova.conf:

    [key_manager]
    api_class = nova.key_manager.barbican.BarbicanKeyManager
    
  3. Restart nova-compute.

Create an encrypted volume type

Block Storage volume type assignment provides scheduling to a specific back-end, and can be used to specify actionable information for a back-end storage device.

This example creates a volume type called LUKS and provides configuration information for the storage system to encrypt or decrypt the volume.

  1. Source your admin credentials:

    $ . admin-openrc.sh
    
  2. Create the volume type:

    $ cinder type-create LUKS
    +--------------------------------------+-------+
    |                  ID                  |  Name |
    +--------------------------------------+-------+
    | e64b35a4-a849-4c53-9cc7-2345d3c8fbde | LUKS  |
    +--------------------------------------+-------+
    
  3. Mark the volume type as encrypted and provide the necessary details. Use --control_location to specify where encryption is performed: front-end (default) or back-end.

    $ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 \
      --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
    +--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
    |            Volume Type ID            |                  Provider                 |      Cipher     | Key Size | Control Location |
    +--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
    | e64b35a4-a849-4c53-9cc7-2345d3c8fbde | nova.volume.encryptors.luks.LuksEncryptor | aes-xts-plain64 |   512    |    front-end     |
    +--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
    

The OpenStack dashboard (horizon) supports creating the encrypted volume type as of the Kilo release. For instructions, see Create an encrypted volume type.

Create an encrypted volume

Use the OpenStack dashboard (horizon) or the cinder command to create volumes just as you normally would. For an encrypted volume, pass the --volume-type LUKS flag, which denotes that the volume will be of the encrypted type LUKS. If that argument is omitted, the default (unencrypted) volume type is used.

  1. Source your admin credentials:

    $ . admin-openrc.sh
    
  2. Create an unencrypted 1 GB test volume:

    $ cinder create --display-name 'unencrypted volume' 1
    +--------------------------------+--------------------------------------+
    |            Property            |                Value                 |
    +--------------------------------+--------------------------------------+
    |          attachments           |                  []                  |
    |       availability_zone        |                 nova                 |
    |            bootable            |                false                 |
    |           created_at           |      2014-08-10T01:24:03.000000      |
    |          description           |                 None                 |
    |           encrypted            |                False                 |
    |               id               | 081700fd-2357-44ff-860d-2cd78ad9c568 |
    |            metadata            |                  {}                  |
    |              name              |          unencrypted volume          |
    |     os-vol-host-attr:host      |              controller              |
    | os-vol-mig-status-attr:migstat |                 None                 |
    | os-vol-mig-status-attr:name_id |                 None                 |
    |  os-vol-tenant-attr:tenant_id  |   08fdea76c760475f82087a45dbe94918   |
    |              size              |                  1                   |
    |          snapshot_id           |                 None                 |
    |          source_volid          |                 None                 |
    |             status             |               creating               |
    |            user_id             |   7cbc6b58b372439e8f70e2a9103f1332   |
    |          volume_type           |                 None                 |
    +--------------------------------+--------------------------------------+
    
  3. Create an encrypted 1 GB test volume:

    $ cinder create --display-name 'encrypted volume' --volume-type LUKS 1
    +--------------------------------+--------------------------------------+
    |            Property            |                Value                 |
    +--------------------------------+--------------------------------------+
    |          attachments           |                  []                  |
    |       availability_zone        |                 nova                 |
    |            bootable            |                false                 |
    |           created_at           |      2014-08-10T01:24:24.000000      |
    |          description           |                 None                 |
    |           encrypted            |                 True                 |
    |               id               | 86060306-6f43-4c92-9ab8-ddcd83acd973 |
    |            metadata            |                  {}                  |
    |              name              |           encrypted volume           |
    |     os-vol-host-attr:host      |              controller              |
    | os-vol-mig-status-attr:migstat |                 None                 |
    | os-vol-mig-status-attr:name_id |                 None                 |
    |  os-vol-tenant-attr:tenant_id  |   08fdea76c760475f82087a45dbe94918   |
    |              size              |                  1                   |
    |          snapshot_id           |                 None                 |
    |          source_volid          |                 None                 |
    |             status             |               creating               |
    |            user_id             |   7cbc6b58b372439e8f70e2a9103f1332   |
    |          volume_type           |                 LUKS                 |
    +--------------------------------+--------------------------------------+
    

Notice the encrypted parameter; it will show True or False. The option volume_type is also shown for easy review.

Note

Because some volume drivers do not set the encrypted flag, attaching an encrypted volume to a virtual guest can fail: the OpenStack Compute service will not run the encryption providers.

Testing volume encryption

This is a simple test scenario to help validate your encryption. It assumes an LVM-based Block Storage server.

Perform these steps after completing the volume encryption setup and creating the volume-type for LUKS as described in the preceding sections.

  1. Create a VM:

    $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-disk vm-test
    
  2. Create two volumes, one encrypted and one not encrypted then attach them to your VM:

    $ cinder create --display-name 'unencrypted volume' 1
    $ cinder create --display-name 'encrypted volume' --volume-type LUKS 1
    $ cinder list
    +--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
    |                  ID                  |   Status  |        Name        | Size | Volume Type | Bootable | Attached to |
    +--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
    | 64b48a79-5686-4542-9b52-d649b51c10a2 | available | unencrypted volume |  1   |     None    |  false   |             |
    | db50b71c-bf97-47cb-a5cf-b4b43a0edab6 | available |  encrypted volume  |  1   |     LUKS    |  false   |             |
    +--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
    $ nova volume-attach vm-test 64b48a79-5686-4542-9b52-d649b51c10a2 /dev/vdb
    $ nova volume-attach vm-test db50b71c-bf97-47cb-a5cf-b4b43a0edab6 /dev/vdc
    
  3. On the VM, send some text to the newly attached volumes and synchronize them:

    # echo "Hello, world (unencrypted /dev/vdb)" >> /dev/vdb
    # echo "Hello, world (encrypted /dev/vdc)" >> /dev/vdc
    # sync && sleep 2
    # sync && sleep 2
    
  4. On the system hosting cinder volume services, synchronize to flush the I/O cache then test to see if your strings can be found:

    # sync && sleep 2
    # sync && sleep 2
    # strings /dev/stack-volumes/volume-* | grep "Hello"
    Hello, world (unencrypted /dev/vdb)
    

In the above example you see that the search returns the string written to the unencrypted volume, but not the encrypted one.

Additional options

These options can also be set in the cinder.conf file.

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
api_rate_limit = True (Boolean) Enables or disables rate limit of the API.
az_cache_duration = 3600 (Integer) Cache volume availability zones in memory for the provided duration in seconds
backend_host = None (String) Backend override of host value.
default_timeout = 31536000 (Integer) Default timeout for CLI operations in seconds. For example, LUN migration is a typical long-running operation, which depends on the LUN size and the load of the array. An upper bound for the specific deployment can be set to avoid unnecessarily long waits. By default, it is 365 days long.
enable_v1_api = True (Boolean) DEPRECATED: Deploy v1 of the Cinder API.
enable_v2_api = True (Boolean) DEPRECATED: Deploy v2 of the Cinder API.
enable_v3_api = True (Boolean) Deploy v3 of the Cinder API.
extra_capabilities = {} (String) User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties.
ignore_pool_full_threshold = False (Boolean) Force LUN creation even if the full threshold of pool is reached. By default, the value is False.
management_ips = (String) List of Management IP addresses (separated by commas)
message_ttl = 2592000 (Integer) Minimum message life, in seconds.
osapi_max_limit = 1000 (Integer) The maximum number of items that a collection resource returns in a single response
osapi_max_request_body_size = 114688 (Integer) Max size for body of a request
osapi_volume_base_URL = None (String) Base URL that will be presented to users in links to the OpenStack Volume API
osapi_volume_ext_list = (List) Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions
osapi_volume_extension = ['cinder.api.contrib.standard_extensions'] (Multi-valued) osapi volume extension to load
osapi_volume_listen = 0.0.0.0 (String) IP address on which OpenStack Volume API listens
osapi_volume_listen_port = 8776 (Port number) Port on which OpenStack Volume API listens
osapi_volume_use_ssl = False (Boolean) Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified.
osapi_volume_workers = None (Integer) Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available.
per_volume_size_limit = -1 (Integer) Max size allowed per volume, in gigabytes
public_endpoint = None (String) Public URL to use for the versions endpoint. The default is None, which uses the request's host_url attribute to populate the URL base. If Cinder is operating behind a proxy, change this to represent the proxy's URL.
query_volume_filters = name, status, metadata, availability_zone, bootable, group_id (List) Volume filter options that non-admin users can use to query volumes. Default values are: ['name', 'status', 'metadata', 'availability_zone', 'bootable', 'group_id']
transfer_api_class = cinder.transfer.api.API (String) The full class name of the volume transfer API class
volume_api_class = cinder.volume.api.API (String) The full class name of the volume API class to use
volume_name_prefix = openstack- (String) Prefix before volume name to differentiate DISCO volume created through openstack and the other ones
volume_name_template = volume-%s (String) Template string to be used to generate volume names
volume_number_multiplier = -1.0 (Floating point) Multiplier used for weighing volume number. Negative numbers mean to spread vs stack.
volume_transfer_key_length = 16 (Integer) The number of characters in the autogenerated auth key.
volume_transfer_salt_length = 8 (Integer) The number of characters in the salt.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_versionedobjects]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
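As an illustration, a minimal cinder.conf fragment tuning the API service might look like the following; the values shown are placeholders chosen for this sketch, not recommendations:

```ini
[DEFAULT]
# Listen on all interfaces on the standard Volume API port
osapi_volume_listen = 0.0.0.0
osapi_volume_listen_port = 8776
# Spawn a fixed number of API workers (defaults to the CPU count when unset)
osapi_volume_workers = 4
# Cap collection responses at 1000 items per request
osapi_max_limit = 1000

[oslo_middleware]
# Parse X-Forwarded-* headers when running behind a proxy
enable_proxy_headers_parsing = True
```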
Description of authorization configuration options
Configuration option = Default value Description
[DEFAULT]  
auth_strategy = keystone (String) The strategy to use for auth. Supports noauth or keystone.
Description of backups configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_api_class = cinder.backup.api.API (String) The full class name of the volume backup API class
backup_compression_algorithm = zlib (String) Compression algorithm (None to disable)
backup_driver = cinder.backup.drivers.swift (String) Driver to use for backups.
backup_manager = cinder.backup.manager.BackupManager (String) Full class name for the Manager for volume backup
backup_metadata_version = 2 (Integer) Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version.
backup_name_template = backup-%s (String) Template string to be used to generate backup names
backup_object_number_per_notification = 10 (Integer) The number of chunks or objects, for which one Ceilometer notification will be sent
backup_service_inithost_offload = True (Boolean) Offload pending backup delete during backup service startup. If false, the backup service will remain down until all pending backups are deleted.
backup_timer_interval = 120 (Integer) Interval, in seconds, between two progress notifications reporting the backup status
backup_use_same_host = False (Boolean) Backup services use same backend.
backup_use_temp_snapshot = False (Boolean) If this is set to True, the backup_use_temp_snapshot path will be used during the backup. Otherwise, it will use backup_use_temp_volume path.
snapshot_check_timeout = 3600 (Integer) How long we check whether a snapshot is finished before we give up
snapshot_name_template = snapshot-%s (String) Template string to be used to generate snapshot names
snapshot_same_host = True (Boolean) Create volume from snapshot at the host where snapshot resides
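For example, a backup service using the default Swift driver could be configured with a fragment like the following (the interval value is illustrative):

```ini
[DEFAULT]
# Store backups in the Swift object store (the default driver)
backup_driver = cinder.backup.drivers.swift
backup_compression_algorithm = zlib
# Report progress every 60 seconds instead of the default 120
backup_timer_interval = 120
# Allow backup of in-use volumes via a temporary snapshot
backup_use_temp_snapshot = False
```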
Description of block device configuration options
Configuration option = Default value Description
[DEFAULT]  
available_devices = (List) List of all available devices
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
allow_availability_zone_fallback = False (Boolean) If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing.
chap = disabled (String) CHAP authentication mode, effective only for iscsi (disabled|enabled)
chap_password = (String) Password for specified CHAP account name.
chap_username = (String) CHAP user name.
chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf (String) Chiscsi (CXT) global defaults configuration file
cinder_internal_tenant_project_id = None (String) ID of the project which will be used as the Cinder internal tenant.
cinder_internal_tenant_user_id = None (String) ID of the user to be used in volume operations as the Cinder internal tenant.
cluster = None (String) Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. Active-Active is not yet supported.
compute_api_class = cinder.compute.nova.API (String) The full class name of the compute API class to use
connection_type = iscsi (String) Connection type to the IBM Storage Array
consistencygroup_api_class = cinder.consistencygroup.api.API (String) The full class name of the consistencygroup API class
default_availability_zone = None (String) Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes.
default_group_type = None (String) Default group type to use
default_volume_type = None (String) Default volume type to use
driver_client_cert = None (String) The path to the client certificate for verification, if the driver supports it.
driver_client_cert_key = None (String) The path to the client certificate key for verification, if the driver supports it.
driver_data_namespace = None (String) Namespace for driver private data values to be saved in.
driver_ssl_cert_path = None (String) Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend
driver_ssl_cert_verify = False (Boolean) If set to True the http client will validate the SSL certificate of the backend endpoint.
enable_force_upload = False (Boolean) Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it.
enable_new_services = True (Boolean) Services to be added to the available pool on create
enable_unsupported_driver = False (Boolean) Set this to True when you want to allow an unsupported driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the next release.
end_time = None (String) If this option is specified then the end time specified is used instead of the end time of the last completed audit period.
enforce_multipath_for_image_xfer = False (Boolean) If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal.
group_api_class = cinder.group.api.API (String) The full class name of the group API class
host = localhost (String) Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address.
iet_conf = /etc/iet/ietd.conf (String) IET configuration file
iscsi_secondary_ip_addresses = (List) The list of secondary IP addresses of the iSCSI daemon
max_over_subscription_ratio = 20.0 (Floating point) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 20.0, meaning provisioned capacity can be 20 times of the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times of the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. The ratio has to be a minimum of 1.0.
monkey_patch = False (Boolean) Enable monkey patching
monkey_patch_modules = (List) List of modules/decorators to monkey patch
my_ip = 10.0.0.1 (String) IP address of this host
no_snapshot_gb_quota = False (Boolean) Whether snapshots count against gigabyte quota
num_shell_tries = 3 (Integer) Number of times to attempt to run flaky shell commands
os_privileged_user_auth_url = None (String) Auth URL associated with the OpenStack privileged account.
os_privileged_user_name = None (String) OpenStack privileged account username. Used for requests to other services (such as Nova) that require an account with special rights.
os_privileged_user_password = None (String) Password associated with the OpenStack privileged account.
os_privileged_user_tenant = None (String) Tenant name associated with the OpenStack privileged account.
periodic_fuzzy_delay = 60 (Integer) Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_interval = 60 (Integer) Interval, in seconds, between running periodic tasks
replication_api_class = cinder.replication.api.API (String) The full class name of the volume replication API class
replication_device = None (Unknown) Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2...
report_discard_supported = False (Boolean) Report to clients of Cinder that the backend supports discard (aka. trim/unmap). This will not actually change the behavior of the backend or the client directly, it will only notify that it can be used.
report_interval = 10 (Integer) Interval, in seconds, between nodes reporting state to datastore
reserved_percentage = 0 (Integer) The percentage of backend capacity that is reserved
rootwrap_config = /etc/cinder/rootwrap.conf (String) Path to the rootwrap configuration file to use for running commands as root
send_actions = False (Boolean) Send the volume and snapshot create and delete notifications generated in the specified period.
service_down_time = 60 (Integer) Maximum time since last check-in for a service to be considered up
ssh_hosts_key_file = $state_path/ssh_known_hosts (String) File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=$state_path/ssh_known_hosts
start_time = None (String) If this option is specified then the start time specified is used instead of the start time of the last completed audit period.
state_path = /var/lib/cinder (String) Top-level directory for maintaining cinder’s state
storage_availability_zone = nova (String) Availability zone of this node
storage_protocol = iscsi (String) Protocol for transferring data between host and storage back-end.
strict_ssh_host_key_policy = False (Boolean) Option to enable strict host key checking. When set to “True” Cinder will only connect to systems with a host key present in the configured “ssh_hosts_key_file”. When set to “False” the host key will be saved upon first connection and used for subsequent connections. Default=False
suppress_requests_ssl_warnings = False (Boolean) Suppress requests library SSL certificate warnings.
tcp_keepalive = True (Boolean) Sets the value of TCP_KEEPALIVE (True/False) for each server socket.
tcp_keepalive_count = None (Integer) Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
tcp_keepalive_interval = None (Integer) Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X.
until_refresh = 0 (Integer) Count of reservations until usage is refreshed
use_chap_auth = False (Boolean) Option to enable/disable CHAP authentication for targets.
use_forwarded_for = False (Boolean) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.
[key_manager]  
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager (String) The full class name of the key manager API class
fixed_key = None (String) Fixed key returned by key manager, specified in hex
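Because replication_device is a multi opt, it may be repeated to define several targets. A sketch following the documented dict form, with placeholder device IDs and key/value pairs:

```ini
[DEFAULT]
# Each line defines one replication target device
# (target_device_id is required; remaining keys are driver-specific placeholders)
replication_device = target_device_id:device-1,key1:value1,key2:value2
replication_device = target_device_id:device-2,key1:value1,key2:value2
```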
Description of Compute configuration options
Configuration option = Default value Description
[DEFAULT]  
nova_api_insecure = False (Boolean) Allow to perform insecure SSL requests to nova
nova_ca_certificates_file = None (String) Location of ca certificates file to use for nova client requests.
nova_catalog_admin_info = compute:Compute Service:adminURL (String) Same as nova_catalog_info, but for admin endpoint.
nova_catalog_info = compute:Compute Service:publicURL (String) Match this value when searching for nova in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type>
nova_endpoint_admin_template = None (String) Same as nova_endpoint_template, but for admin endpoint.
nova_endpoint_template = None (String) Override service catalog lookup with template for nova endpoint e.g. http://localhost:8774/v2/%(project_id)s
os_region_name = None (String) Region name of this node
Description of Coordination configuration options
Configuration option = Default value Description
[coordination]  
backend_url = file://$state_path (String) The backend URL to use for distributed coordination.
heartbeat = 1.0 (Floating point) Number of seconds between heartbeats for distributed coordination.
initial_reconnect_backoff = 0.1 (Floating point) Initial number of seconds to wait after failed reconnection.
max_reconnect_backoff = 60.0 (Floating point) Maximum number of seconds between sequential reconnection retries.
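The coordination backend defaults to file-based locks under $state_path, which only works on a single node; multi-node deployments can point backend_url at a distributed store via a tooz URL. The Redis host below is a placeholder:

```ini
[coordination]
# Default: file-based locks, single-node only
# backend_url = file://$state_path
# Distributed alternative via a tooz backend (host is a placeholder)
backend_url = redis://coordination.example.com:6379
heartbeat = 1.0
```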
Description of logging configuration options
Configuration option = Default value Description
[DEFAULT]  
trace_flags = None (List) List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api.
Description of DRBD configuration options
Configuration option = Default value Description
[DEFAULT]  
drbdmanage_devs_on_controller = True (Boolean) If set, the c-vol node will receive a useable /dev/drbdX device, even if the actual data is stored on other nodes only. This is useful for debugging, maintenance, and to be able to do the iSCSI export from the c-vol node.
drbdmanage_disk_options = {"c-min-rate": "4M"} (String) Disk options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"} (String) Net options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
drbdmanage_redundancy = 1 (Integer) Number of nodes that should replicate the data.
drbdmanage_resize_plugin = drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize (String) Volume resize completion wait plugin.
drbdmanage_resize_policy = {"timeout": "60"} (String) Volume resize completion wait policy.
drbdmanage_resource_options = {"auto-promote-timeout": "300"} (String) Resource options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
drbdmanage_resource_plugin = drbdmanage.plugins.plugins.wait_for.WaitForResource (String) Resource deployment completion wait plugin.
drbdmanage_resource_policy = {"ratio": "0.51", "timeout": "60"} (String) Resource deployment completion wait policy.
drbdmanage_snapshot_plugin = drbdmanage.plugins.plugins.wait_for.WaitForSnapshot (String) Snapshot completion wait plugin.
drbdmanage_snapshot_policy = {"count": "1", "timeout": "60"} (String) Snapshot completion wait policy.
Description of EMC configuration options
Configuration option = Default value Description
[DEFAULT]  
check_max_pool_luns_threshold = False (Boolean) Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is reached. By default, the value is False.
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml (String) use this file for cinder emc plugin config data
destroy_empty_storage_group = False (Boolean) To destroy storage group when the last LUN is removed from it. By default, the value is False.
force_delete_lun_in_storagegroup = False (Boolean) Delete a LUN even if it is in Storage Groups. By default, the value is False.
initiator_auto_deregistration = False (Boolean) Automatically deregister initiators after the related storage group is destroyed. By default, the value is False.
initiator_auto_registration = False (Boolean) Automatically register initiators. By default, the value is False.
io_port_list = None (List) Comma separated iSCSI or FC ports to be used in Nova or Cinder.
iscsi_initiators = None (String) Mapping between hostname and its iSCSI initiator IP addresses.
max_luns_per_storage_group = 255 (Integer) Default max number of LUNs in a storage group. By default, the value is 255.
naviseccli_path = None (String) Naviseccli Path.
storage_vnx_authentication_type = global (String) VNX authentication scope type. By default, the value is global.
storage_vnx_pool_names = None (List) Comma-separated list of storage pool names to be used.
storage_vnx_security_file_dir = None (String) Directory path that contains the VNX security file. Make sure the security file is generated first.
Description of Eternus volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml (String) config file for cinder eternus_dx volume driver
Description of IBM FlashSystem volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
flashsystem_connection_protocol = FC (String) Connection protocol should be FC. (Default is FC.)
flashsystem_iscsi_portid = 0 (Integer) Default iSCSI Port ID of FlashSystem. (Default port is 0.)
flashsystem_multihostmap_enabled = True (Boolean) Allows vdisk to multi host mapping. (Default is True)
flashsystem_multipath_enabled = False (Boolean) DEPRECATED: This option no longer has any effect. It is deprecated and will be removed in the next release.
Description of HGST volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hgst_net = Net 1 (IPv4) (String) Space network name to use for data transfer
hgst_redundancy = 0 (String) Should spaces be redundantly stored (1/0)
hgst_space_group = disk (String) Group to own created spaces
hgst_space_mode = 0600 (String) UNIX mode for created spaces
hgst_space_user = root (String) User to own created spaces
hgst_storage_servers = os:gbd0 (String) Comma separated list of Space storage servers:devices. ex: os1_stor:gbd0,os2_stor:gbd0
Description of HPE LeftHand/StoreVirtual driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hpelefthand_api_url = None (String) HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos
hpelefthand_clustername = None (String) HPE LeftHand cluster name
hpelefthand_debug = False (Boolean) Enable HTTP debugging to LeftHand
hpelefthand_iscsi_chap_enabled = False (Boolean) Configure CHAP authentication for iSCSI connections (Default: Disabled)
hpelefthand_password = None (String) HPE LeftHand Super user password
hpelefthand_ssh_port = 16022 (Port number) Port number of SSH service.
hpelefthand_username = None (String) HPE LeftHand Super user username
Description of HPE XP volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hpexp_async_copy_check_interval = 10 (Integer) Interval to check copy asynchronously
hpexp_compute_target_ports = None (List) Target port names of compute node for host group or iSCSI target
hpexp_copy_check_interval = 3 (Integer) Interval to check copy
hpexp_copy_speed = 3 (Integer) Copy speed of storage system
hpexp_default_copy_method = FULL (String) Default copy method of the storage system. There are two valid values: "FULL" specifies a full copy; "THIN" specifies a thin copy. Default value is "FULL".
hpexp_group_request = False (Boolean) Request for creating host group or iSCSI target
hpexp_horcm_add_conf = True (Boolean) Add to HORCM configuration
hpexp_horcm_name_only_discovery = False (Boolean) Only discover a specific name of host group or iSCSI target
hpexp_horcm_numbers = 200, 201 (List) Instance numbers for HORCM
hpexp_horcm_resource_name = meta_resource (String) Resource group name of storage system for HORCM
hpexp_horcm_user = None (String) Username of storage system for HORCM
hpexp_ldev_range = None (String) Logical device range of storage system
hpexp_pool = None (String) Pool of storage system
hpexp_storage_cli = None (String) Type of storage command line interface
hpexp_storage_id = None (String) ID of storage system
hpexp_target_ports = None (List) Target port names for host group or iSCSI target
hpexp_thin_pool = None (String) Thin pool of storage system
hpexp_zoning_request = False (Boolean) Request for FC Zone creating host group
Description of Huawei storage driver configuration options
Configuration option = Default value Description
[DEFAULT]  
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml (String) The configuration file for the Cinder Huawei driver.
hypermetro_devices = None (String) The remote device hypermetro will use.
metro_domain_name = None (String) The remote metro device domain name.
metro_san_address = None (String) The remote metro device request url.
metro_san_password = None (String) The remote metro device san password.
metro_san_user = None (String) The remote metro device san user.
metro_storage_pools = None (String) The remote metro device pool names.
Description of HyperV volume driver configuration options
Configuration option = Default value Description
[hyperv]  
force_volumeutils_v1 = False (Boolean) DEPRECATED: Force V1 volume utility class
Description of images configuration options
Configuration option = Default value Description
[DEFAULT]  
allowed_direct_url_schemes = (List) A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
glance_api_insecure = False (Boolean) Allow to perform insecure SSL (https) requests to glance (https will be used but cert validation will not be performed).
glance_api_servers = None (List) A list of the URLs of glance API servers available to cinder ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to http.
glance_api_ssl_compression = False (Boolean) Enables or disables negotiation of SSL layer compression. In some cases disabling compression can improve data throughput, such as when high network bandwidth is available and you use compressed image formats like qcow2.
glance_api_version = 1 (Integer) Version of the glance API to use
glance_ca_certificates_file = None (String) Location of ca certificates file to use for glance client requests.
glance_catalog_info = image:glance:publicURL (String) Info to match when looking for glance in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided.
glance_core_properties = checksum, container_format, disk_format, image_name, image_id, min_disk, min_ram, name, size (List) Default core properties of image
glance_num_retries = 0 (Integer) Number retries when downloading an image from glance
glance_request_timeout = None (Integer) http/https timeout value for glance operations. If no value (None) is supplied here, the glanceclient default value is used.
image_conversion_dir = $state_path/conversion (String) Directory used for temporary storage during image conversion
image_upload_use_cinder_backend = False (Boolean) If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service, and glance_api_version must be set to 2.
image_upload_use_internal_tenant = False (Boolean) If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context’s tenant.
image_volume_cache_enabled = False (Boolean) Enable the image volume cache for this backend.
image_volume_cache_max_count = 0 (Integer) Max number of entries allowed in the image volume cache. 0 => unlimited.
image_volume_cache_max_size_gb = 0 (Integer) Max size of the image volume cache for this backend in GB. 0 => unlimited.
use_multipath_for_image_xfer = False (Boolean) Attach/detach volumes in cinder using multipath for volume-to-image and image-to-volume transfers.
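A fragment enabling the image volume cache with explicit limits might look like the following; the glance endpoint and cache sizes are placeholders:

```ini
[DEFAULT]
# Explicit glance endpoints; when unset, the service catalog is used
glance_api_servers = http://glance.example.com:9292
glance_api_version = 2
# Cache image volumes on this backend, capped at 50 GB or 10 entries
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 50
image_volume_cache_max_count = 10
```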
Description of Infortrend volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
infortrend_cli_max_retries = 5 (Integer) Maximum number of CLI retries. Default is 5.
infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar (String) The Infortrend CLI absolute path. By default, it is at /opt/bin/Infortrend/raidcmd_ESDS10.jar
infortrend_cli_timeout = 30 (Integer) Default timeout for CLI copy operations in minutes. Support: migrate volume, create cloned volume and create volume from snapshot. By Default, it is 30 minutes.
infortrend_pools_name = (String) Comma-separated list of Infortrend raid pool names.
infortrend_provisioning = full (String) Provisioning type for the volume. By default, full provisioning is used. The supported options are full or thin.
infortrend_slots_a_channels_id = 0,1,2,3,4,5,6,7 (String) Comma-separated list of Infortrend raid channel IDs on Slot A for OpenStack usage. By default, channels 0~7 are used.
infortrend_slots_b_channels_id = 0,1,2,3,4,5,6,7 (String) Comma-separated list of Infortrend raid channel IDs on Slot B for OpenStack usage. By default, channels 0~7 are used.
infortrend_tiering = 0 (String) Tiering level for the volume. By default, level 0 is used. The supported levels are 0,2,3,4.
Description of NAS configuration options
Configuration option = Default value Description
[DEFAULT]  
nas_host = (String) IP address or Hostname of NAS system.
nas_login = admin (String) User name to connect to NAS system.
nas_mount_options = None (String) Options used to mount the storage backend file system where Cinder volumes are stored.
nas_password = (String) Password to connect to NAS system.
nas_private_key = (String) Filename of private key to use for SSH authentication.
nas_secure_file_operations = auto (String) Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto.
nas_secure_file_permissions = auto (String) Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto.
nas_share_path = (String) Path to the share to use for storing Cinder volumes. For example: “/srv/export1” for an NFS server export available at 10.0.5.10:/srv/export1 .
nas_ssh_port = 22 (Port number) SSH port to use to connect to NAS system.
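Putting the NAS options together, a fragment for a backend exporting /srv/export1 (reusing the example export from the table above; the host address is a placeholder) could look like:

```ini
[DEFAULT]
# NAS endpoint and export (address is a placeholder)
nas_host = 10.0.5.10
nas_share_path = /srv/export1
nas_login = admin
nas_ssh_port = 22
# Leave secure-operation handling on auto-detection
nas_secure_file_operations = auto
nas_secure_file_permissions = auto
```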
Description of profiler configuration options
Configuration option = Default value Description
[profiler]  
connection_string = messaging://

(String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging.

Examples of possible values:

  • messaging://: use oslo_messaging driver for sending notifications.
enabled = False

(Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature).

Possible values:

  • True: Enables the feature
  • False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project's part of the trace will be empty.
hmac_keys = SECRET_KEY

(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project.

Both the "enabled" flag and the "hmac_keys" config option must be set to enable profiling. Also, to generate correct profiling information across all services, at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate a trace containing information from all possible resources.

trace_sqlalchemy = False

(Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced).

Possible values:

  • True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it.
  • False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.
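Combining these options, a profiler section that satisfies the "both enabled and hmac_keys" requirement might look like the following; SECRET_KEY stands in for a real random key shared across projects:

```ini
[profiler]
# Both enabled and hmac_keys must be set for profiling to work
enabled = True
hmac_keys = SECRET_KEY
# Send traces through the oslo_messaging notifier (the default)
connection_string = messaging://
trace_sqlalchemy = False
```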
Description of Pure Storage driver configuration options
Configuration option = Default value Description
[DEFAULT]  
pure_api_token = None (String) REST API authorization token.
pure_automatic_max_oversubscription_ratio = True (Boolean) Automatically determine an oversubscription ratio based on the current total data reduction values. If used this calculated value will override the max_over_subscription_ratio config option.
pure_eradicate_on_delete = False (Boolean) When enabled, all Pure volumes, snapshots, and protection groups will be eradicated at the time of deletion in Cinder. Data will NOT be recoverable after a delete with this set to True! When disabled, volumes and snapshots will go into pending eradication state and can be recovered.
pure_replica_interval_default = 900 (Integer) Snapshot replication interval in seconds.
pure_replica_retention_long_term_default = 7 (Integer) Retain snapshots per day on target for this time (in days).
pure_replica_retention_long_term_per_day_default = 3 (Integer) Retain how many snapshots for each day.
pure_replica_retention_short_term_default = 14400 (Integer) Retain all snapshots on target for this time (in seconds).
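To illustrate how these options combine, a hypothetical Pure Storage backend stanza might look as follows (the section name and the driver class path are assumptions for illustration, not taken from the table above):

```ini
[pure-1]
volume_backend_name = pure-1
# Driver class path is illustrative; check your Cinder release.
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = PURE_MGMT_IP
pure_api_token = PURE_API_TOKEN
# Replicate every 15 minutes; keep all snapshots for 4 hours,
# then 3 per day for 7 days (the defaults listed above).
pure_replica_interval_default = 900
pure_replica_retention_short_term_default = 14400
pure_replica_retention_long_term_per_day_default = 3
pure_replica_retention_long_term_default = 7
```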
Description of quota configuration options
Configuration option = Default value Description
[DEFAULT]  
max_age = 0 (Integer) Number of seconds between subsequent usage refreshes
quota_backup_gigabytes = 1000 (Integer) Total amount of storage, in gigabytes, allowed for backups per project
quota_backups = 10 (Integer) Number of volume backups allowed per project
quota_consistencygroups = 10 (Integer) Number of consistencygroups allowed per project
quota_driver = cinder.quota.DbQuotaDriver (String) Default driver to use for quota checks
quota_gigabytes = 1000 (Integer) Total amount of storage, in gigabytes, allowed for volumes and snapshots per project
quota_groups = 10 (Integer) Number of groups allowed per project
quota_snapshots = 10 (Integer) Number of volume snapshots allowed per project
quota_volumes = 10 (Integer) Number of volumes allowed per project
reservation_expire = 86400 (Integer) Number of seconds until a reservation expires
use_default_quota_class = True (Boolean) Enables or disables use of default quota class with default quota.
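For example, a [DEFAULT] fragment that raises the per-project volume quotas while keeping the other defaults (the values are arbitrary):

```ini
[DEFAULT]
# Allow 20 volumes and 2 TiB of volume/snapshot storage per project.
quota_volumes = 20
quota_gigabytes = 2048
# Expire unused reservations after one hour instead of one day.
reservation_expire = 3600
```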
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
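Because most of the connection options above are deprecated in favor of [DEFAULT]/transport_url, new deployments express the connection as a single URL. The exact URL scheme depends on the messaging driver in use; for the common RabbitMQ driver it takes this shape (credentials and host are placeholders):

```ini
[DEFAULT]
# The scheme selects the messaging driver; user, password, host,
# and port replace the individual options deprecated above.
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```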
Description of SAN configuration options
Configuration option = Default value Description
[DEFAULT]  
san_clustername = (String) Cluster name to use for creating volumes
san_ip = (String) IP address of SAN controller
san_is_local = False (Boolean) Execute commands locally instead of over SSH; use if the volume service is running on the SAN device
san_login = admin (String) Username for SAN controller
san_password = (String) Password for SAN controller
san_private_key = (String) Filename of private key to use for SSH authentication
san_ssh_port = 22 (Port number) SSH port to use with SAN
san_thin_provision = True (Boolean) Use thin provisioning for SAN volumes?
ssh_conn_timeout = 30 (Integer) SSH connection timeout in seconds
ssh_max_pool_conn = 5 (Integer) Maximum ssh connections in the pool
ssh_min_pool_conn = 1 (Integer) Minimum ssh connections in the pool
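A sketch of an SSH-based SAN backend fragment using these options (addresses and paths are placeholders):

```ini
[DEFAULT]
# Reach the array controller over SSH with key-based authentication.
san_ip = 192.0.2.10
san_login = admin
san_private_key = /etc/cinder/san_rsa
san_ssh_port = 22
# Keep a small pool of persistent SSH connections to the array.
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
```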
Description of scheduler configuration options
Configuration option = Default value Description
[DEFAULT]  
filter_function = None (String) String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler.
goodness_function = None (String) String representation for an equation that will be used to determine the goodness of a host. Only used when the goodness weigher is set to be used by the Cinder scheduler.
scheduler_default_filters = AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter (List) Which filter class names to use for filtering hosts when not specified in the request.
scheduler_default_weighers = CapacityWeigher (List) Which weigher class names to use for weighing hosts.
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler (String) Default scheduler driver to use
scheduler_host_manager = cinder.scheduler.host_manager.HostManager (String) The scheduler host manager class to use
scheduler_json_config_location = (String) Absolute path to scheduler configuration JSON file.
scheduler_manager = cinder.scheduler.manager.SchedulerManager (String) Full class name for the Manager for scheduler
scheduler_max_attempts = 3 (Integer) Maximum number of attempts to schedule a volume
scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler (String) Which handler to use for selecting the host/pool after weighing
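As an example, a fragment that keeps the stock filters and weighers but gives the scheduler more placement attempts (the values are illustrative):

```ini
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers = CapacityWeigher
# Try up to five hosts before failing the request.
scheduler_max_attempts = 5
```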
Description of SCST volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
scst_target_driver = iscsi (String) SCST target implementation can choose from multiple SCST target drivers.
scst_target_iqn_name = None (String) Certain ISCSI targets have predefined target names, SCST target driver uses this name.
Description of storage configuration options
Configuration option = Default value Description
[DEFAULT]  
allocated_capacity_weight_multiplier = -1.0 (Floating point) Multiplier used for weighing allocated capacity. Positive numbers favor stacking volumes on fewer hosts; negative numbers favor spreading.
capacity_weight_multiplier = 1.0 (Floating point) Multiplier used for weighing free capacity. Negative numbers favor stacking; positive numbers favor spreading.
enabled_backends = None (List) A list of backend names to use. These backend names should be backed by a unique [CONFIG] group with its options
iscsi_helper = tgtadm (String) iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target or fake for testing.
iscsi_iotype = fileio (String) Sets the behavior of the iSCSI target to perform either blockio or fileio; optionally, auto can be set, in which case Cinder will autodetect the type of backing device
iscsi_ip_address = $my_ip (String) The IP address that the iSCSI daemon is listening on
iscsi_port = 3260 (Port number) The port that the iSCSI daemon is listening on
iscsi_protocol = iscsi (String) Determines the iSCSI protocol for new iSCSI volumes, created with tgtadm or lioadm target helpers. In order to enable RDMA, this parameter should be set with the value “iser”. The supported iSCSI protocol values are “iscsi” and “iser”.
iscsi_target_flags = (String) Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool.
iscsi_target_prefix = iqn.2010-10.org.openstack: (String) Prefix for iSCSI volumes
iscsi_write_cache = on (String) Sets the behavior of the iSCSI target to perform either write-back (on) or write-through (off). This parameter is valid if iscsi_helper is set to tgtadm.
iser_helper = tgtadm (String) The name of the iSER target user-land tool to use
iser_ip_address = $my_ip (String) The IP address that the iSER daemon is listening on
iser_port = 3260 (Port number) The port that the iSER daemon is listening on
iser_target_prefix = iqn.2010-10.org.openstack: (String) Prefix for iSER volumes
migration_create_volume_timeout_secs = 300 (Integer) Timeout for creating the volume to migrate to when performing volume migration (seconds)
num_iser_scan_tries = 3 (Integer) The maximum number of times to rescan the iSER target to find a volume
num_volume_device_scan_tries = 3 (Integer) The maximum number of times to rescan targets to find a volume
volume_backend_name = None (String) The backend name for a given driver implementation
volume_clear = zero (String) Method used to wipe old volumes
volume_clear_ionice = None (String) The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example “-c3” for idle only priority.
volume_clear_size = 0 (Integer) Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 => all
volume_copy_blkio_cgroup_name = cinder-volume-copy (String) The blkio cgroup name to be used to limit bandwidth of volume copy
volume_copy_bps_limit = 0 (Integer) The upper limit of bandwidth of volume copy. 0 => unlimited
volume_dd_blocksize = 1M (String) The default block size used when copying/clearing volumes
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (String) Driver to use for volume creation
volume_manager = cinder.volume.manager.VolumeManager (String) Full class name for the Manager for volume
volume_service_inithost_offload = False (Boolean) Offload pending volume delete during volume service startup
volume_usage_audit_period = month (String) Time period for which to generate volume usages. The options are hour, day, month, or year.
volumes_dir = $state_path/volumes (String) Volume configuration file storage directory
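The enabled_backends option refers to named sections elsewhere in cinder.conf, one per backend. A sketch of a two-backend LVM layout (section names are arbitrary; volume_group is the LVM driver's pool option and is shown as an assumption, since it is not part of the table above):

```ini
[DEFAULT]
enabled_backends = lvm-fast,lvm-slow

[lvm-fast]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-fast
volume_group = cinder-fast
iscsi_helper = tgtadm

[lvm-slow]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-slow
volume_group = cinder-slow
iscsi_helper = tgtadm
```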
Description of Tegile volume driver configuration options
Configuration option = Default value Description
[DEFAULT]  
tegile_default_pool = None (String) Create volumes in this pool
tegile_default_project = None (String) Create volumes in this project
Description of zones configuration options
Configuration option = Default value Description
[DEFAULT]  
cloned_volume_same_az = True (Boolean) Ensure that the new volumes are the same AZ as snapshot or source volume

Block Storage service sample configuration files

All the files in this section can be found in /etc/cinder.

cinder.conf

The cinder.conf file is installed in /etc/cinder by default. When you manually install the Block Storage service, the options in the cinder.conf file are set to default values.

The cinder.conf file contains most of the options needed to configure the Block Storage service. You can generate the latest configuration file by using the tox utility provided with the Block Storage service. Here is a sample configuration file:

[DEFAULT]

#
# From cinder
#

# Backup metadata version to be used when backing up volume metadata. If this
# number is bumped, make sure the service doing the restore supports the new
# version. (integer value)
#backup_metadata_version = 2

# The number of chunks or objects, for which one Ceilometer notification will
# be sent (integer value)
#backup_object_number_per_notification = 10

# Interval, in seconds, between two progress notifications reporting the backup
# status (integer value)
#backup_timer_interval = 120

# Name of this cluster.  Used to group volume hosts that share the same backend
# configurations to work in HA Active-Active mode.  Active-Active is not yet
# supported. (string value)
#cluster = <None>

# Management IP address of HNAS. This can be any IP address on the HNAS admin
# network, or the SMU IP. (IP address value)
#hnas_mgmt_ip0 = <None>

# Command to communicate to HNAS. (string value)
#hnas_ssc_cmd = ssc

# HNAS username. (string value)
#hnas_username = <None>

# HNAS password. (string value)
#hnas_password = <None>

# Port to be used for SSH authentication. (port value)
# Minimum value: 0
# Maximum value: 65535
#hnas_ssh_port = 22

# Path to the SSH private key used to authenticate in HNAS SMU. (string value)
#hnas_ssh_private_key = <None>

# The IP of the HNAS cluster admin. Required only for HNAS multi-cluster
# setups. (string value)
#hnas_cluster_admin_ip0 = <None>

# Service 0 volume type (string value)
#hnas_svc0_volume_type = <None>

# Service 0 HDP (string value)
#hnas_svc0_hdp = <None>

# Service 1 volume type (string value)
#hnas_svc1_volume_type = <None>

# Service 1 HDP (string value)
#hnas_svc1_hdp = <None>

# Service 2 volume type (string value)
#hnas_svc2_volume_type = <None>

# Service 2 HDP (string value)
#hnas_svc2_hdp = <None>

# Service 3 volume type (string value)
#hnas_svc3_volume_type = <None>

# Service 3 HDP (string value)
#hnas_svc3_hdp = <None>

# The maximum number of items that a collection resource returns in a single
# response (integer value)
#osapi_max_limit = 1000

# Base URL that will be presented to users in links to the OpenStack Volume API
# (string value)
# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
#osapi_volume_base_URL = <None>

# Volume filter options which non-admin user could use to query volumes.
# Default values are: ['name', 'status', 'metadata', 'availability_zone'
# ,'bootable', 'group_id'] (list value)
#query_volume_filters = name,status,metadata,availability_zone,bootable,group_id

# Ceph configuration file to use. (string value)
#backup_ceph_conf = /etc/ceph/ceph.conf

# The Ceph user to connect with. Default here is to use the same user as for
# Cinder volumes. If not using cephx this should be set to None. (string value)
#backup_ceph_user = cinder

# The chunk size, in bytes, that a backup is broken into before transfer to the
# Ceph object store. (integer value)
#backup_ceph_chunk_size = 134217728

# The Ceph pool where volume backups are stored. (string value)
#backup_ceph_pool = backups

# RBD stripe unit to use when creating a backup image. (integer value)
#backup_ceph_stripe_unit = 0

# RBD stripe count to use when creating a backup image. (integer value)
#backup_ceph_stripe_count = 0

# If True, always discard excess bytes when restoring volumes i.e. pad with
# zeroes. (boolean value)
#restore_discard_excess_bytes = true

# File with the list of available smbfs shares. (string value)
#smbfs_shares_config = /etc/cinder/smbfs_shares

# The path of the automatically generated file containing information about
# volume disk space allocation. (string value)
#smbfs_allocation_info_file_path = $state_path/allocation_data

# Default format that will be used when creating volumes if no volume format is
# specified. (string value)
# Allowed values: raw, qcow2, vhd, vhdx
#smbfs_default_volume_format = qcow2

# Create volumes as sparsed files which take no space rather than regular files
# when using raw format, in which case volume creation takes a lot of time.
# (boolean value)
#smbfs_sparsed_volumes = true

# Percent of ACTUAL usage of the underlying volume before no new volumes can be
# allocated to the volume destination. (floating point value)
#smbfs_used_ratio = 0.95

# This will compare the allocated to available space on the volume destination.
# If the ratio exceeds this number, the destination will no longer be valid.
# (floating point value)
#smbfs_oversub_ratio = 1.0

# Base dir containing mount points for smbfs shares. (string value)
#smbfs_mount_point_base = $state_path/mnt

# Mount options passed to the smbfs client. See mount.cifs man page for
# details. (string value)
#smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775

# Compression algorithm (None to disable) (string value)
#backup_compression_algorithm = zlib

# Use thin provisioning for SAN volumes? (boolean value)
#san_thin_provision = true

# IP address of SAN controller (string value)
#san_ip =

# Username for SAN controller (string value)
#san_login = admin

# Password for SAN controller (string value)
#san_password =

# Filename of private key to use for SSH authentication (string value)
#san_private_key =

# Cluster name to use for creating volumes (string value)
#san_clustername =

# SSH port to use with SAN (port value)
# Minimum value: 0
# Maximum value: 65535
#san_ssh_port = 22

# Execute commands locally instead of over SSH; use if the volume service is
# running on the SAN device (boolean value)
#san_is_local = false

# SSH connection timeout in seconds (integer value)
#ssh_conn_timeout = 30

# Minimum ssh connections in the pool (integer value)
#ssh_min_pool_conn = 1

# Maximum ssh connections in the pool (integer value)
#ssh_max_pool_conn = 5

# DEPRECATED: Legacy configuration file for the HNAS NFS Cinder plugin. This is
# not needed if all configuration is provided in cinder.conf (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs_conf.xml

# Sets the value of TCP_KEEPALIVE (True/False) for each server socket. (boolean
# value)
#tcp_keepalive = true

# Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not
# supported on OS X. (integer value)
#tcp_keepalive_interval = <None>

# Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
# (integer value)
#tcp_keepalive_count = <None>

# Option to enable strict host key checking.  When set to "True" Cinder will
# only connect to systems with a host key present in the configured
# "ssh_hosts_key_file".  When set to "False" the host key will be saved upon
# first connection and used for subsequent connections.  Default=False (boolean
# value)
#strict_ssh_host_key_policy = false

# File containing SSH host keys for the systems with which Cinder needs to
# communicate.  OPTIONAL: Default=$state_path/ssh_known_hosts (string value)
#ssh_hosts_key_file = $state_path/ssh_known_hosts

# The storage family type used on the storage system; valid values are
# ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using
# clustered Data ONTAP, or eseries for using E-Series. (string value)
# Allowed values: ontap_7mode, ontap_cluster, eseries
#netapp_storage_family = ontap_cluster

# The storage protocol to be used on the data path with the storage system.
# (string value)
# Allowed values: iscsi, fc, nfs
#netapp_storage_protocol = <None>

# The hostname (or IP address) for the storage system or proxy server. (string
# value)
#netapp_server_hostname = <None>

# The TCP port to use for communication with the storage system or proxy
# server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for
# HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. (integer value)
#netapp_server_port = <None>

# The transport protocol used when communicating with the storage system or
# proxy server. (string value)
# Allowed values: http, https
#netapp_transport_type = http

# Administrative user account name used to access the storage system or proxy
# server. (string value)
#netapp_login = <None>

# Password for the administrative user account specified in the netapp_login
# option. (string value)
#netapp_password = <None>

# This option specifies the virtual storage server (Vserver) name on the
# storage cluster on which provisioning of block storage volumes should occur.
# (string value)
#netapp_vserver = <None>

# The vFiler unit on which provisioning of block storage volumes will be done.
# This option is only used by the driver when connecting to an instance with a
# storage family of Data ONTAP operating in 7-Mode. Only use this option when
# utilizing the MultiStore feature on the NetApp storage system. (string value)
#netapp_vfiler = <None>

# The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner.
# This option is only used by the driver when connecting to an instance with a
# storage family of Data ONTAP operating in 7-Mode, and it is required if the
# storage protocol selected is FC. (string value)
#netapp_partner_backend_name = <None>

# The quantity to be multiplied by the requested volume size to ensure enough
# space is available on the virtual storage server (Vserver) to fulfill the
# volume creation request.  Note: this option is deprecated and will be removed
# in favor of "reserved_percentage" in the Mitaka release. (floating point
# value)
#netapp_size_multiplier = 1.2

# This option determines if storage space is reserved for LUN allocation. If
# enabled, LUNs are thick provisioned. If space reservation is disabled,
# storage space is allocated on demand. (string value)
# Allowed values: enabled, disabled
#netapp_lun_space_reservation = enabled

# If the percentage of available space for an NFS share has dropped below the
# value specified by this option, the NFS image cache will be cleaned. (integer
# value)
#thres_avl_size_perc_start = 20

# When the percentage of available space on an NFS share has reached the
# percentage specified by this option, the driver will stop clearing files from
# the NFS image cache that have not been accessed in the last M minutes, where
# M is the value of the expiry_thres_minutes configuration option. (integer
# value)
#thres_avl_size_perc_stop = 60

# This option specifies the threshold for last access time for images in the
# NFS image cache. When a cache cleaning cycle begins, images in the cache that
# have not been accessed in the last M minutes, where M is the value of this
# parameter, will be deleted from the cache to create free space on the NFS
# share. (integer value)
#expiry_thres_minutes = 720

# This option is used to specify the path to the E-Series proxy application on
# a proxy server. The value is combined with the value of the
# netapp_transport_type, netapp_server_hostname, and netapp_server_port options
# to create the URL used by the driver to connect to the proxy application.
# (string value)
#netapp_webservice_path = /devmgr/v2

# This option is only utilized when the storage family is configured to
# eseries. This option is used to restrict provisioning to the specified
# controllers. Specify the value of this option to be a comma separated list of
# controller hostnames or IP addresses to be used for provisioning. (string
# value)
#netapp_controller_ips = <None>

# Password for the NetApp E-Series storage array. (string value)
#netapp_sa_password = <None>

# This option specifies whether the driver should allow operations that require
# multiple attachments to a volume. An example would be live migration of
# servers that have volumes attached. When enabled, this backend is limited to
# 256 total volumes in order to guarantee volumes can be accessed by more than
# one host. (boolean value)
#netapp_enable_multiattach = false

# This option specifies the path of the NetApp copy offload tool binary. Ensure
# that the binary has execute permissions set which allow the effective user of
# the cinder-volume process to execute the file. (string value)
#netapp_copyoffload_tool_path = <None>

# This option defines the type of operating system that will access a LUN
# exported from Data ONTAP; it is assigned to the LUN at the time it is
# created. (string value)
#netapp_lun_ostype = <None>

# This option defines the type of operating system for all initiators that can
# access a LUN. This information is used when mapping LUNs to individual hosts
# or groups of hosts. (string value)
# Deprecated group/name - [DEFAULT]/netapp_eseries_host_type
#netapp_host_type = <None>

# This option is used to restrict provisioning to the specified pools. Specify
# the value of this option to be a regular expression which will be applied to
# the names of objects from the storage backend which represent pools in
# Cinder. This option is only utilized when the storage protocol is configured
# to use iSCSI or FC. (string value)
# Deprecated group/name - [DEFAULT]/netapp_volume_list
# Deprecated group/name - [DEFAULT]/netapp_storage_pools
#netapp_pool_name_search_pattern = (.+)

# Multi opt of dictionaries to represent the aggregate mapping between source
# and destination back ends when using whole back end replication. For every
# source aggregate associated with a cinder pool (NetApp FlexVol), you would
# need to specify the destination aggregate on the replication target device. A
# replication target device is configured with the configuration option
# replication_device. Specify this option as many times as you have replication
# devices. Each entry takes the standard dict config form:
# netapp_replication_aggregate_map =
# backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
# (dict value)
#netapp_replication_aggregate_map = <None>

# The maximum time in seconds to wait for existing SnapMirror transfers to
# complete before aborting during a failover. (integer value)
# Minimum value: 0
#netapp_snapmirror_quiesce_timeout = 3600

# Configure CHAP authentication for iSCSI connections (Default: Enabled)
# (boolean value)
#storwize_svc_iscsi_chap_enabled = true

# Base dir containing mount point for gluster share. (string value)
#glusterfs_backup_mount_point = $state_path/backup_mount

# GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format.
# Eg: 1.2.3.4:backup_vol (string value)
#glusterfs_backup_share = <None>

# Rest Gateway IP or FQDN for Scaleio (string value)
#coprhd_scaleio_rest_gateway_host = <None>

# Rest Gateway Port for Scaleio (port value)
# Minimum value: 0
# Maximum value: 65535
#coprhd_scaleio_rest_gateway_port = 4984

# Username for Rest Gateway (string value)
#coprhd_scaleio_rest_server_username = <None>

# Rest Gateway Password (string value)
#coprhd_scaleio_rest_server_password = <None>

# verify server certificate (boolean value)
#scaleio_verify_server_certificate = false

# Server certificate path (string value)
#scaleio_server_certificate_path = <None>

# Volume prefix for the backup id when backing up to TSM (string value)
#backup_tsm_volume_prefix = backup

# TSM password for the running username (string value)
#backup_tsm_password = password

# Enable or Disable compression for backups (boolean value)
#backup_tsm_compression = true

# config file for cinder eternus_dx volume driver (string value)
#cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml

# Specifies the path of the GPFS directory where Block Storage volume and
# snapshot files are stored. (string value)
#gpfs_mount_point_base = <None>

# Specifies the path of the Image service repository in GPFS.  Leave undefined
# if not storing images in GPFS. (string value)
#gpfs_images_dir = <None>

# Specifies the type of image copy to be used.  Set this when the Image service
# repository also uses GPFS so that image files can be transferred efficiently
# from the Image service to the Block Storage service. There are two valid
# values: "copy" specifies that a full copy of the image is made;
# "copy_on_write" specifies that copy-on-write optimization strategy is used
# and unmodified blocks of the image file are shared efficiently. (string
# value)
# Allowed values: copy, copy_on_write, <None>
#gpfs_images_share_mode = <None>

# Specifies an upper limit on the number of indirections required to reach a
# specific block due to snapshots or clones.  A lengthy chain of copy-on-write
# snapshots or clones can have a negative impact on performance, but improves
# space utilization.  0 indicates unlimited clone depth. (integer value)
#gpfs_max_clone_depth = 0

# Specifies that volumes are created as sparse files which initially consume no
# space. If set to False, the volume is created as a fully allocated file, in
# which case, creation may take a significantly longer time. (boolean value)
#gpfs_sparse_volumes = true

# Specifies the storage pool that volumes are assigned to. By default, the
# system storage pool is used. (string value)
#gpfs_storage_pool = system

# Main controller IP. (IP address value)
#zteControllerIP0 = <None>

# Slave controller IP. (IP address value)
#zteControllerIP1 = <None>

# Local IP. (IP address value)
#zteLocalIP = <None>

# User name. (string value)
#zteUserName = <None>

# User password. (string value)
#zteUserPassword = <None>

# Virtual block size of pool. Unit: KB. Valid values: 4, 8, 16, 32, 64, 128,
# 256, 512. (integer value)
#zteChunkSize = 4

# Cache readahead size. (integer value)
#zteAheadReadSize = 8

# Cache policy. 0, Write Back; 1, Write Through. (integer value)
#zteCachePolicy = 1

# SSD cache switch. 0, OFF; 1, ON. (integer value)
#zteSSDCacheSwitch = 1

# Pool name list. (list value)
#zteStoragePool =

# Pool volume allocated policy. 0, Auto; 1, High Performance Tier First; 2,
# Performance Tier First; 3, Capacity Tier First. (integer value)
#ztePoolVoAllocatedPolicy = 0

# Pool volume move policy. 0, Auto; 1, Highest Available; 2, Lowest Available;
# 3, No Relocation. (integer value)
#ztePoolVolMovePolicy = 0

# Whether it is a thin volume. (integer value)
#ztePoolVolIsThin = False

# Pool volume init allocated capacity. Unit: KB. (integer value)
#ztePoolVolInitAllocatedCapacity = 0

# Pool volume alarm threshold. [0, 100] (integer value)
#ztePoolVolAlarmThreshold = 0

# Pool volume alarm stop allocated flag. (integer value)
#ztePoolVolAlarmStopAllocatedFlag = 0

# Global backend request timeout, in seconds. (integer value)
#violin_request_timeout = 300

# Storage pools to be used to setup dedup luns only.(Comma separated list)
# (list value)
#violin_dedup_only_pools =

# Storage pools capable of dedup and other luns.(Comma separated list) (list
# value)
#violin_dedup_capable_pools =

# Method of choosing a storage pool for a lun. (string value)
# Allowed values: random, largest, smallest
#violin_pool_allocation_method = random

# Target iSCSI addresses to use.(Comma separated list) (list value)
#violin_iscsi_target_ips =

# IP address of Nexenta SA (string value)
#nexenta_host =

# HTTP port to connect to Nexenta REST API server (integer value)
#nexenta_rest_port = 8080

# Use http or https for REST connection (default auto) (string value)
# Allowed values: http, https, auto
#nexenta_rest_protocol = auto

# User name to connect to Nexenta SA (string value)
#nexenta_user = admin

# Password to connect to Nexenta SA (string value)
#nexenta_password = nexenta

# Nexenta target portal port (integer value)
#nexenta_iscsi_target_portal_port = 3260

# SA Pool that holds all volumes (string value)
#nexenta_volume = cinder

# IQN prefix for iSCSI targets (string value)
#nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder-

# Prefix for iSCSI target groups on SA (string value)
#nexenta_target_group_prefix = cinder/

# Volume group for ns5 (string value)
#nexenta_volume_group = iscsi

# Compression value for new ZFS folders. (string value)
# Allowed values: on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8, gzip-9, lzjb, zle, lz4
#nexenta_dataset_compression = on

# Deduplication value for new ZFS folders. (string value)
# Allowed values: on, off, sha256, verify
#nexenta_dataset_dedup = off

# Human-readable description for the folder. (string value)
#nexenta_dataset_description =

# Block size for datasets (integer value)
#nexenta_blocksize = 4096

# Block size for datasets (integer value)
#nexenta_ns5_blocksize = 32

# Enables or disables the creation of sparse datasets (boolean value)
#nexenta_sparse = false

# File with the list of available nfs shares (string value)
#nexenta_shares_config = /etc/cinder/nfs_shares

# Base directory that contains NFS share mount points (string value)
#nexenta_mount_point_base = $state_path/mnt

# Enables or disables the creation of volumes as sparsed files that take no
# space. If disabled (False), volume is created as a regular file, which takes
# a long time. (boolean value)
#nexenta_sparsed_volumes = true

# If set to True, cache the NexentaStor appliance volroot option value. (boolean value)
#nexenta_nms_cache_volroot = true

# Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best
# compression. (integer value)
#nexenta_rrmgr_compression = 0

# TCP Buffer size in KiloBytes. (integer value)
#nexenta_rrmgr_tcp_buf_size = 4096

# Number of TCP connections. (integer value)
#nexenta_rrmgr_connections = 2

# NexentaEdge logical path of directory to store symbolic links to NBDs (string
# value)
#nexenta_nbd_symlinks_dir = /dev/disk/by-path

# IP address of NexentaEdge management REST API endpoint (string value)
#nexenta_rest_address =

# User name to connect to NexentaEdge (string value)
#nexenta_rest_user = admin

# Password to connect to NexentaEdge (string value)
#nexenta_rest_password = nexenta

# NexentaEdge logical path of bucket for LUNs (string value)
#nexenta_lun_container =

# NexentaEdge iSCSI service name (string value)
#nexenta_iscsi_service =

# NexentaEdge iSCSI Gateway client address for non-VIP service (string value)
#nexenta_client_address =

# NexentaEdge iSCSI LUN object chunk size (integer value)
#nexenta_chunksize = 32768

# Make exception message format errors fatal. (boolean value)
#fatal_exception_format_errors = false

# IP address of this host (string value)
#my_ip = 10.0.2.15

# A list of the URLs of glance API servers available to cinder
# ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to
# http. (list value)
#glance_api_servers = <None>

# Version of the glance API to use (integer value)
#glance_api_version = 1

# Number of retries when downloading an image from glance (integer value)
# Minimum value: 0
#glance_num_retries = 0

# Allow performing insecure SSL (https) requests to glance (https will be used,
# but cert validation will not be performed). (boolean value)
#glance_api_insecure = false

# Enables or disables negotiation of SSL layer compression. In some cases
# disabling compression can improve data throughput, such as when high network
# bandwidth is available and you use compressed image formats like qcow2.
# (boolean value)
#glance_api_ssl_compression = false

# Location of ca certificates file to use for glance client requests. (string
# value)
#glance_ca_certificates_file = <None>

# http/https timeout value for glance operations. If no value (None) is
# supplied here, the glanceclient default value is used. (integer value)
#glance_request_timeout = <None>
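As a minimal sketch, the glance client options above can be combined in the [DEFAULT] section of cinder.conf; the controller hostname and retry count here are assumptions, not defaults:

```ini
[DEFAULT]
# Assumed endpoint; replace with your glance API servers.
glance_api_servers = http://controller:9292
glance_api_version = 2
# Retry image downloads up to three times before failing.
glance_num_retries = 3
```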

# DEPRECATED: Deploy v1 of the Cinder API. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#enable_v1_api = true

# DEPRECATED: Deploy v2 of the Cinder API. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#enable_v2_api = true

# Deploy v3 of the Cinder API. (boolean value)
#enable_v3_api = true

# Enables or disables rate limit of the API. (boolean value)
#api_rate_limit = true

# Specify list of extensions to load when using osapi_volume_extension option
# with cinder.api.contrib.select_extensions (list value)
#osapi_volume_ext_list =

# osapi volume extension to load (multi valued)
#osapi_volume_extension = cinder.api.contrib.standard_extensions

# Full class name for the Manager for volume (string value)
#volume_manager = cinder.volume.manager.VolumeManager

# Full class name for the Manager for volume backup (string value)
#backup_manager = cinder.backup.manager.BackupManager

# Full class name for the Manager for scheduler (string value)
#scheduler_manager = cinder.scheduler.manager.SchedulerManager

# Name of this node.  This can be an opaque identifier. It is not necessarily a
# host name, FQDN, or IP address. (string value)
#host = openstack-VirtualBox

# Availability zone of this node (string value)
#storage_availability_zone = nova

# Default availability zone for new volumes. If not set, the
# storage_availability_zone option value is used as the default for new
# volumes. (string value)
#default_availability_zone = <None>

# If the requested Cinder availability zone is unavailable, fall back to the
# value of default_availability_zone, then storage_availability_zone, instead
# of failing. (boolean value)
#allow_availability_zone_fallback = false
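For example, a deployment with its own zone layout might set the availability zone options together; the zone names below are assumptions:

```ini
[DEFAULT]
# Zone this cinder-volume node belongs to.
storage_availability_zone = zone-1
# Zone used for new volumes when the request does not specify one.
default_availability_zone = zone-1
# Fall back to the defaults above instead of failing the request.
allow_availability_zone_fallback = true
```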

# Default volume type to use (string value)
#default_volume_type = <None>

# Default group type to use (string value)
#default_group_type = <None>

# Time period for which to generate volume usages. The options are hour, day,
# month, or year. (string value)
#volume_usage_audit_period = month

# Path to the rootwrap configuration file to use for running commands as root
# (string value)
#rootwrap_config = /etc/cinder/rootwrap.conf

# Enable monkey patching (boolean value)
#monkey_patch = false

# List of modules/decorators to monkey patch (list value)
#monkey_patch_modules =

# Maximum time since last check-in for a service to be considered up (integer
# value)
#service_down_time = 60

# The full class name of the volume API class to use (string value)
#volume_api_class = cinder.volume.api.API

# The full class name of the volume backup API class (string value)
#backup_api_class = cinder.backup.api.API

# The strategy to use for auth. Supports noauth or keystone. (string value)
# Allowed values: noauth, keystone
#auth_strategy = keystone

# A list of backend names to use. These backend names should be backed by a
# unique [CONFIG] group with its options (list value)
#enabled_backends = <None>
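As a sketch of the multi-backend layout this option expects, each name listed in enabled_backends points at a configuration group of the same name; the LVM driver and volume group names below are illustrative:

```ini
[DEFAULT]
enabled_backends = lvm-1,lvm-2

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = LVM
volume_group = cinder-volumes-1

[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = LVM
volume_group = cinder-volumes-2
```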

# Whether snapshots count against gigabyte quota (boolean value)
#no_snapshot_gb_quota = false

# The full class name of the volume transfer API class (string value)
#transfer_api_class = cinder.transfer.api.API

# The full class name of the volume replication API class (string value)
#replication_api_class = cinder.replication.api.API

# The full class name of the consistencygroup API class (string value)
#consistencygroup_api_class = cinder.consistencygroup.api.API

# The full class name of the group API class (string value)
#group_api_class = cinder.group.api.API

# OpenStack privileged account username. Used for requests to other services
# (such as Nova) that require an account with special rights. (string value)
#os_privileged_user_name = <None>

# Password associated with the OpenStack privileged account. (string value)
#os_privileged_user_password = <None>

# Tenant name associated with the OpenStack privileged account. (string value)
#os_privileged_user_tenant = <None>

# Auth URL associated with the OpenStack privileged account. (string value)
#os_privileged_user_auth_url = <None>

# Multiplier used for weighing free capacity. Negative numbers mean to stack vs
# spread. (floating point value)
#capacity_weight_multiplier = 1.0

# Multiplier used for weighing allocated capacity. Positive numbers mean to
# stack vs spread. (floating point value)
#allocated_capacity_weight_multiplier = -1.0

# IP address of sheep daemon. (string value)
#sheepdog_store_address = 127.0.0.1

# Port of sheep daemon. (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000

# Max size for body of a request (integer value)
#osapi_max_request_body_size = 114688

# Set 512 byte emulation on volume creation. (boolean value)
#sf_emulate_512 = true

# Allow tenants to specify QOS on create (boolean value)
#sf_allow_tenant_qos = false

# Create SolidFire accounts with this prefix. Any string can be used here, but
# the string "hostname" is special and will create a prefix using the cinder
# node hostname (previous default behavior).  The default is NO prefix. (string
# value)
#sf_account_prefix = <None>

# Create SolidFire volumes with this prefix. Volume names are of the form
# <sf_volume_prefix><cinder-volume-id>.  The default is to use a prefix of
# 'UUID-'. (string value)
#sf_volume_prefix = UUID-

# Account name on the SolidFire Cluster to use as owner of template/cache
# volumes (created if it does not exist). (string value)
#sf_template_account_name = openstack-vtemplate

# Create an internal cache of copies of images when a bootable volume is
# created, to eliminate fetches from glance and qemu conversion on subsequent
# calls. (boolean value)
#sf_allow_template_caching = true

# Overrides default cluster SVIP with the one specified. This is required for
# deployments that have implemented the use of VLANs for iSCSI networks in
# their cloud. (string value)
#sf_svip = <None>

# Create an internal mapping of volume IDs and account. Optimizes lookups and
# performance at the expense of memory; very large deployments may want to
# consider setting this to False. (boolean value)
#sf_enable_volume_mapping = true

# SolidFire API port. Useful if the device api is behind a proxy on a different
# port. (port value)
# Minimum value: 0
# Maximum value: 65535
#sf_api_port = 443

# Utilize volume access groups on a per-tenant basis. (boolean value)
#sf_enable_vag = false

# Hostname for the CoprHD Instance (string value)
#coprhd_hostname = <None>

# Port for the CoprHD Instance (port value)
# Minimum value: 0
# Maximum value: 65535
#coprhd_port = 4443

# Username for accessing the CoprHD Instance (string value)
#coprhd_username = <None>

# Password for accessing the CoprHD Instance (string value)
#coprhd_password = <None>

# Tenant to utilize within the CoprHD Instance (string value)
#coprhd_tenant = <None>

# Project to utilize within the CoprHD Instance (string value)
#coprhd_project = <None>

# Virtual Array to utilize within the CoprHD Instance (string value)
#coprhd_varray = <None>

# True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
# (boolean value)
#coprhd_emulate_snapshot = false

# The URL of the Swift endpoint (string value)
#backup_swift_url = <None>

# The URL of the Keystone endpoint (string value)
#backup_swift_auth_url = <None>

# Info to match when looking for swift in the service catalog. Format is
# colon-separated values of the form <service_type>:<service_name>:<endpoint_type>.
# Only used if backup_swift_url is unset. (string value)
#swift_catalog_info = object-store:swift:publicURL

# Info to match when looking for keystone in the service catalog. Format is
# colon-separated values of the form <service_type>:<service_name>:<endpoint_type>.
# Only used if backup_swift_auth_url is unset. (string value)
#keystone_catalog_info = identity:Identity Service:publicURL

# Swift authentication mechanism (string value)
#backup_swift_auth = per_user

# Swift authentication version. Specify "1" for auth 1.0, "2" for auth 2.0, or
# "3" for auth 3.0. (string value)
#backup_swift_auth_version = 1

# Swift tenant/account name. Required when connecting to an auth 2.0 system
# (string value)
#backup_swift_tenant = <None>

# Swift user domain name. Required when connecting to an auth 3.0 system
# (string value)
#backup_swift_user_domain = <None>

# Swift project domain name. Required when connecting to an auth 3.0 system
# (string value)
#backup_swift_project_domain = <None>

# Swift project/account name. Required when connecting to an auth 3.0 system
# (string value)
#backup_swift_project = <None>

# Swift user name (string value)
#backup_swift_user = <None>

# Swift key for authentication (string value)
#backup_swift_key = <None>

# The default Swift container to use (string value)
#backup_swift_container = volumebackups

# The size in bytes of Swift backup objects (integer value)
#backup_swift_object_size = 52428800

# The size in bytes that changes are tracked for incremental backups.
# backup_swift_object_size has to be multiple of backup_swift_block_size.
# (integer value)
#backup_swift_block_size = 32768

# The number of retries to make for Swift operations (integer value)
#backup_swift_retry_attempts = 3

# The backoff time in seconds between Swift retries (integer value)
#backup_swift_retry_backoff = 2

# Enable or disable the timer to send the periodic progress notifications to
# Ceilometer when backing up the volume to the Swift backend storage. The
# default value is True to enable the timer. (boolean value)
#backup_swift_enable_progress_timer = true

# Location of the CA certificate file to use for swift client requests. (string
# value)
#backup_swift_ca_cert_file = <None>

# Bypass verification of server certificate when making SSL connection to
# Swift. (boolean value)
#backup_swift_auth_insecure = false
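Tying the Swift backup options together, a minimal per-user setup discovered through the service catalog might look like this (the backup_driver value is the usual Swift backup driver path, shown here as an assumption):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.swift
# Locate Swift via the service catalog instead of backup_swift_url.
swift_catalog_info = object-store:swift:publicURL
backup_swift_auth = per_user
backup_swift_auth_version = 3
backup_swift_container = volumebackups
```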

# These values will be used for CloudByte storage's addQos API call. (dict
# value)
#cb_add_qosgroup = graceallowed:false,iops:10,iopscontrol:true,latency:15,memlimit:0,networkspeed:0,throughput:0,tpcontrol:false

# These values will be used for CloudByte storage's createVolume API call.
# (dict value)
#cb_create_volume = blocklength:512B,compression:off,deduplication:off,protocoltype:ISCSI,recordsize:16k,sync:always

# Driver will use this API key to authenticate against the CloudByte storage's
# management interface. (string value)
#cb_apikey = <None>

# CloudByte storage specific account name. This maps to a project name in
# OpenStack. (string value)
#cb_account_name = <None>

# This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte
# storage. A volume will be created in this TSM. (string value)
#cb_tsm_name = <None>

# A retry value in seconds. Will be used by the driver to check if volume
# creation was successful in CloudByte storage. (integer value)
#cb_confirm_volume_create_retry_interval = 5

# Will confirm a successful volume creation in CloudByte storage by making this
# many attempts. (integer value)
#cb_confirm_volume_create_retries = 3

# A retry value in seconds. Will be used by the driver to check if volume
# deletion was successful in CloudByte storage. (integer value)
#cb_confirm_volume_delete_retry_interval = 5

# Will confirm a successful volume deletion in CloudByte storage by making this
# many attempts. (integer value)
#cb_confirm_volume_delete_retries = 3

# This corresponds to the discovery authentication group in CloudByte storage.
# Chap users are added to this group. Driver uses the first user found for this
# group. Default value is None. (string value)
#cb_auth_group = <None>

# These values will be used for CloudByte storage's updateQosGroup API call.
# (list value)
#cb_update_qos_group = iops,latency,graceallowed

# These values will be used for CloudByte storage's updateFileSystem API call.
# (list value)
#cb_update_file_system = compression,sync,noofcopies,readonly

# Interval, in seconds, between nodes reporting state to datastore (integer
# value)
#report_interval = 10

# Interval, in seconds, between running periodic tasks (integer value)
#periodic_interval = 60

# Range, in seconds, to randomly delay when starting the periodic task
# scheduler to reduce stampeding. (Disable by setting to 0) (integer value)
#periodic_fuzzy_delay = 60

# IP address on which OpenStack Volume API listens (string value)
#osapi_volume_listen = 0.0.0.0

# Port on which OpenStack Volume API listens (port value)
# Minimum value: 0
# Maximum value: 65535
#osapi_volume_listen_port = 8776

# Number of workers for OpenStack Volume API service. The default is equal to
# the number of CPUs available. (integer value)
#osapi_volume_workers = <None>

# Wraps the socket in a SSL context if True is set. A certificate file and key
# file must be specified. (boolean value)
#osapi_volume_use_ssl = false
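A typical listener configuration combining the options above; binding to all interfaces and the worker count of four are assumptions to adjust per host:

```ini
[DEFAULT]
osapi_volume_listen = 0.0.0.0
osapi_volume_listen_port = 8776
# Roughly one worker per available CPU core.
osapi_volume_workers = 4
```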

# The full class name of the compute API class to use (string value)
#compute_api_class = cinder.compute.nova.API

# Number of nodes that should replicate the data. (integer value)
#drbdmanage_redundancy = 1

# Resource deployment completion wait policy. (string value)
#drbdmanage_resource_policy = {"ratio": "0.51", "timeout": "60"}

# Disk options to set on new resources. See
# http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
# (string value)
#drbdmanage_disk_options = {"c-min-rate": "4M"}

# Net options to set on new resources. See
# http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
# (string value)
#drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"}

# Resource options to set on new resources. See
# http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
# (string value)
#drbdmanage_resource_options = {"auto-promote-timeout": "300"}

# Snapshot completion wait policy. (string value)
#drbdmanage_snapshot_policy = {"count": "1", "timeout": "60"}

# Volume resize completion wait policy. (string value)
#drbdmanage_resize_policy = {"timeout": "60"}

# Resource deployment completion wait plugin. (string value)
#drbdmanage_resource_plugin = drbdmanage.plugins.plugins.wait_for.WaitForResource

# Snapshot completion wait plugin. (string value)
#drbdmanage_snapshot_plugin = drbdmanage.plugins.plugins.wait_for.WaitForSnapshot

# Volume resize completion wait plugin. (string value)
#drbdmanage_resize_plugin = drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize

# If set, the c-vol node will receive a usable /dev/drbdX device, even if the
# actual data is stored on other nodes only. This is useful for debugging,
# maintenance, and to be able to do the iSCSI export from the c-vol node.
# (boolean value)
#drbdmanage_devs_on_controller = true

# Pool or Vdisk name to use for volume creation. (string value)
#dothill_backend_name = A

# linear (for Vdisk) or virtual (for Pool). (string value)
# Allowed values: linear, virtual
#dothill_backend_type = virtual

# DotHill API interface protocol. (string value)
# Allowed values: http, https
#dothill_api_protocol = https

# Whether to verify DotHill array SSL certificate. (boolean value)
#dothill_verify_certificate = false

# DotHill array SSL certificate path. (string value)
#dothill_verify_certificate_path = <None>

# List of comma-separated target iSCSI IP addresses. (list value)
#dothill_iscsi_ips =

# File with the list of available gluster shares (string value)
#glusterfs_shares_config = /etc/cinder/glusterfs_shares

# Base dir containing mount points for gluster shares. (string value)
#glusterfs_mount_point_base = $state_path/mnt

# REST API authorization token. (string value)
#pure_api_token = <None>

# Automatically determine an oversubscription ratio based on the current total
# data reduction values. If used, this calculated value will override the
# max_over_subscription_ratio config option. (boolean value)
#pure_automatic_max_oversubscription_ratio = true

# Snapshot replication interval in seconds. (integer value)
#pure_replica_interval_default = 900

# Retain all snapshots on target for this time (in seconds). (integer value)
#pure_replica_retention_short_term_default = 14400

# How many snapshots to retain for each day. (integer value)
#pure_replica_retention_long_term_per_day_default = 3

# Retain snapshots per day on target for this time (in days). (integer value)
#pure_replica_retention_long_term_default = 7

# When enabled, all Pure volumes, snapshots, and protection groups will be
# eradicated at the time of deletion in Cinder. Data will NOT be recoverable
# after a delete with this set to True! When disabled, volumes and snapshots
# will go into pending eradication state and can be recovered. (boolean value)
#pure_eradicate_on_delete = false

# ID of the project which will be used as the Cinder internal tenant. (string
# value)
#cinder_internal_tenant_project_id = <None>

# ID of the user to be used in volume operations as the Cinder internal tenant.
# (string value)
#cinder_internal_tenant_user_id = <None>

# The scheduler host manager class to use (string value)
#scheduler_host_manager = cinder.scheduler.host_manager.HostManager

# Maximum number of attempts to schedule a volume (integer value)
#scheduler_max_attempts = 3

# Proxy driver that connects to the IBM Storage Array (string value)
#proxy = storage.proxy.IBMStorageProxy

# Connection type to the IBM Storage Array (string value)
# Allowed values: fibre_channel, iscsi
#connection_type = iscsi

# CHAP authentication mode, effective only for iscsi (disabled|enabled) (string
# value)
# Allowed values: disabled, enabled
#chap = disabled

# List of Management IP addresses (separated by commas) (string value)
#management_ips =

# IP address for connecting to VMware vCenter server. (string value)
#vmware_host_ip = <None>

# Port number for connecting to VMware vCenter server. (port value)
# Minimum value: 0
# Maximum value: 65535
#vmware_host_port = 443

# Username for authenticating with VMware vCenter server. (string value)
#vmware_host_username = <None>

# Password for authenticating with VMware vCenter server. (string value)
#vmware_host_password = <None>

# Optional VIM service WSDL location, e.g. http://<server>/vimService.wsdl.
# Optional override of the default location for bug workarounds. (string value)
#vmware_wsdl_location = <None>

# Number of times VMware vCenter server API must be retried upon connection
# related issues. (integer value)
#vmware_api_retry_count = 10

# The interval (in seconds) for polling remote tasks invoked on VMware vCenter
# server. (floating point value)
#vmware_task_poll_interval = 2.0

# Name of the vCenter inventory folder that will contain Cinder volumes. This
# folder will be created under "OpenStack/<project_folder>", where
# project_folder is of format "Project (<volume_project_id>)". (string value)
#vmware_volume_folder = Volumes

# Timeout in seconds for VMDK volume transfer between Cinder and Glance.
# (integer value)
#vmware_image_transfer_timeout_secs = 7200

# Max number of objects to be retrieved per batch. Query results will be
# obtained in batches from the server and not in one shot. Server may still
# limit the count to something less than the configured value. (integer value)
#vmware_max_objects_retrieval = 100

# Optional string specifying the VMware vCenter server version. The driver
# attempts to retrieve the version from VMware vCenter server. Set this
# configuration only if you want to override the vCenter server version.
# (string value)
#vmware_host_version = <None>

# Directory where virtual disks are stored during volume backup and restore.
# (string value)
#vmware_tmp_dir = /tmp

# CA bundle file to use in verifying the vCenter server certificate. (string
# value)
#vmware_ca_file = <None>

# If true, the vCenter server certificate is not verified. If false, then the
# default CA truststore is used for verification. This option is ignored if
# "vmware_ca_file" is set. (boolean value)
#vmware_insecure = false

# Name of a vCenter compute cluster where volumes should be created. (multi
# valued)
#vmware_cluster_name =

# Pool or Vdisk name to use for volume creation. (string value)
#lenovo_backend_name = A

# linear (for VDisk) or virtual (for Pool). (string value)
# Allowed values: linear, virtual
#lenovo_backend_type = virtual

# Lenovo api interface protocol. (string value)
# Allowed values: http, https
#lenovo_api_protocol = https

# Whether to verify Lenovo array SSL certificate. (boolean value)
#lenovo_verify_certificate = false

# Lenovo array SSL certificate path. (string value)
#lenovo_verify_certificate_path = <None>

# List of comma-separated target iSCSI IP addresses. (list value)
#lenovo_iscsi_ips =

# The maximum size in bytes of the files used to hold backups. If the volume
# being backed up exceeds this size, then it will be backed up into multiple
# files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
# (integer value)
#backup_file_size = 1999994880

# The size in bytes that changes are tracked for incremental backups.
# backup_file_size has to be multiple of backup_sha_block_size_bytes. (integer
# value)
#backup_sha_block_size_bytes = 32768

# Enable or disable the timer to send the periodic progress notifications to
# Ceilometer when backing up the volume to the backend storage. The default
# value is True to enable the timer. (boolean value)
#backup_enable_progress_timer = true

# Path specifying where to store backups. (string value)
#backup_posix_path = $state_path/backup

# Custom directory to use for backups. (string value)
#backup_container = <None>

# REST server port. (string value)
#sio_rest_server_port = 443

# Verify server certificate. (boolean value)
#sio_verify_server_certificate = false

# Server certificate path. (string value)
#sio_server_certificate_path = <None>

# Round up volume capacity. (boolean value)
#sio_round_volume_capacity = true

# Unmap volume before deletion. (boolean value)
#sio_unmap_volume_before_deletion = false

# Protection Domain ID. (string value)
#sio_protection_domain_id = <None>

# Protection Domain name. (string value)
#sio_protection_domain_name = <None>

# Storage Pools. (string value)
#sio_storage_pools = <None>

# Storage Pool name. (string value)
#sio_storage_pool_name = <None>

# Storage Pool ID. (string value)
#sio_storage_pool_id = <None>

# max_over_subscription_ratio setting for the ScaleIO driver. This replaces the
# general max_over_subscription_ratio, which has no effect in this driver.
# Maximum value allowed for ScaleIO is 10.0. (floating point value)
#sio_max_over_subscription_ratio = 10.0

# Driver to use for database access (string value)
#db_driver = cinder.db

# Group name to use for creating volumes. Defaults to "group-0". (string value)
#eqlx_group_name = group-0

# Timeout for the Group Manager cli command execution. Default is 30. Note that
# this option is deprecated in favour of "ssh_conn_timeout" as specified in
# cinder/volume/drivers/san/san.py and will be removed in M release. (integer
# value)
#eqlx_cli_timeout = 30

# Maximum retry count for reconnection. Default is 5. (integer value)
# Minimum value: 0
#eqlx_cli_max_retries = 5

# Use CHAP authentication for targets. Note that this option is deprecated in
# favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be
# removed in next release. (boolean value)
#eqlx_use_chap = false

# Existing CHAP account name. Note that this option is deprecated in favour of
# "chap_username" as specified in cinder/volume/driver.py and will be removed
# in next release. (string value)
#eqlx_chap_login = admin

# Password for specified CHAP account name. Note that this option is deprecated
# in favour of "chap_password" as specified in cinder/volume/driver.py and will
# be removed in the next release (string value)
#eqlx_chap_password = password

# Pool in which volumes will be created. Defaults to "default". (string value)
#eqlx_pool = default

# The number of characters in the salt. (integer value)
#volume_transfer_salt_length = 8

# The number of characters in the autogenerated auth key. (integer value)
#volume_transfer_key_length = 16

# Services to be added to the available pool on create (boolean value)
#enable_new_services = true

# Template string to be used to generate volume names (string value)
#volume_name_template = volume-%s

# Template string to be used to generate snapshot names (string value)
#snapshot_name_template = snapshot-%s

# Template string to be used to generate backup names (string value)
#backup_name_template = backup-%s
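Each template's %s placeholder is replaced with the resource ID. As an illustration, a site-specific prefix (an assumption, not a default) could be added like this:

```ini
[DEFAULT]
# A volume with ID <uuid> becomes prod-volume-<uuid>.
volume_name_template = prod-volume-%s
snapshot_name_template = prod-snapshot-%s
backup_name_template = prod-backup-%s
```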

# Multiplier used for weighing volume number. Negative numbers mean to spread
# vs stack. (floating point value)
#volume_number_multiplier = -1.0

# RPC port to connect to Coho Data MicroArray (integer value)
#coho_rpc_port = 2049

# Path or URL to Scality SOFS configuration file (string value)
#scality_sofs_config = <None>

# Base dir where Scality SOFS shall be mounted (string value)
#scality_sofs_mount_point = $state_path/scality

# Path from Scality SOFS root to volume dir (string value)
#scality_sofs_volume_dir = cinder/volumes

# Default storage pool for volumes. (integer value)
#ise_storage_pool = 1

# Raid level for ISE volumes. (integer value)
#ise_raid = 1

# Number of retries (per port) when establishing connection to ISE management
# port. (integer value)
#ise_connection_retries = 5

# Interval (secs) between retries. (integer value)
#ise_retry_interval = 1

# Number of retries to get completion status after issuing a command to ISE.
# (integer value)
#ise_completion_retries = 30

# Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
# (boolean value)
#storwize_svc_multipath_enabled = false

# FSS pool id in which FalconStor volumes are stored. (integer value)
#fss_pool =

# Enable HTTP debugging to FSS (boolean value)
#fss_debug = false

# FSS additional retry list, separated by ; (string value)
#additional_retry_list =

# Storage pool name. (string value)
#zfssa_pool = <None>

# Project name. (string value)
#zfssa_project = <None>

# Block size. (string value)
# Allowed values: 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k
#zfssa_lun_volblocksize = 8k

# Flag to enable sparse (thin-provisioned): True, False. (boolean value)
#zfssa_lun_sparse = false

# Data compression. (string value)
# Allowed values: off, lzjb, gzip-2, gzip, gzip-9
#zfssa_lun_compression = off

# Synchronous write bias. (string value)
# Allowed values: latency, throughput
#zfssa_lun_logbias = latency

# iSCSI initiator group. (string value)
#zfssa_initiator_group =

# iSCSI initiator IQNs. (comma separated) (string value)
#zfssa_initiator =

# iSCSI initiator CHAP user (name). (string value)
#zfssa_initiator_user =

# Secret of the iSCSI initiator CHAP user. (string value)
#zfssa_initiator_password =

# iSCSI initiators configuration. (string value)
#zfssa_initiator_config =

# iSCSI target group name. (string value)
#zfssa_target_group = tgt-grp

# iSCSI target CHAP user (name). (string value)
#zfssa_target_user =

# Secret of the iSCSI target CHAP user. (string value)
#zfssa_target_password =

# iSCSI target portal (Data-IP:Port, w.x.y.z:3260). (string value)
#zfssa_target_portal = <None>

# Network interfaces of iSCSI targets. (comma separated) (string value)
#zfssa_target_interfaces = <None>

# REST connection timeout. (seconds) (integer value)
#zfssa_rest_timeout = <None>

# IP address used for replication data. (may be the same as the data IP)
# (string value)
#zfssa_replication_ip =

# Flag to enable local caching: True, False. (boolean value)
#zfssa_enable_local_cache = true

# Name of ZFSSA project where cache volumes are stored. (string value)
#zfssa_cache_project = os-cinder-cache

# Driver policy for volume manage. (string value)
# Allowed values: loose, strict
#zfssa_manage_policy = loose

# Number of times to attempt to run flaky shell commands (integer value)
#num_shell_tries = 3

# The percentage of backend capacity that is reserved (integer value)
# Minimum value: 0
# Maximum value: 100
#reserved_percentage = 0

# Prefix for iSCSI volumes (string value)
#iscsi_target_prefix = iqn.2010-10.org.openstack:

# The IP address that the iSCSI daemon is listening on (string value)
#iscsi_ip_address = $my_ip

# The list of secondary IP addresses of the iSCSI daemon (list value)
#iscsi_secondary_ip_addresses =

# The port that the iSCSI daemon is listening on (port value)
# Minimum value: 0
# Maximum value: 65535
#iscsi_port = 3260

# The maximum number of times to rescan targets to find volume (integer value)
#num_volume_device_scan_tries = 3

# The backend name for a given driver implementation (string value)
#volume_backend_name = <None>

# Do we attach/detach volumes in cinder using multipath for volume to image and
# image to volume transfers? (boolean value)
#use_multipath_for_image_xfer = false

# If this is set to True, attachment of volumes for image transfer will be
# aborted when multipathd is not running. Otherwise, it will fallback to single
# path. (boolean value)
#enforce_multipath_for_image_xfer = false

# Method used to wipe old volumes (string value)
# Allowed values: none, zero, shred
#volume_clear = zero

# Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 => all
# (integer value)
# Maximum value: 1024
#volume_clear_size = 0

# The flag to pass to ionice to alter the i/o priority of the process used to
# zero a volume after deletion, for example "-c3" for idle only priority.
# (string value)
#volume_clear_ionice = <None>
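For example, to wipe only the first part of each deleted volume at idle I/O priority instead of zeroing the whole device (the 100 MiB figure is an arbitrary assumption):

```ini
[DEFAULT]
volume_clear = zero
# Wipe only the first 100 MiB of each deleted volume.
volume_clear_size = 100
# Run the zeroing process at idle I/O priority.
volume_clear_ionice = -c3
```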

# iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO
# iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
# Target, iscsictl for Chelsio iSCSI Target or fake for testing. (string value)
# Allowed values: tgtadm, lioadm, scstadmin, iscsictl, ietadm, fake
#iscsi_helper = tgtadm

# Volume configuration file storage directory (string value)
#volumes_dir = $state_path/volumes

# IET configuration file (string value)
#iet_conf = /etc/iet/ietd.conf

# Chiscsi (CXT) global defaults configuration file (string value)
#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf

# Sets the behavior of the iSCSI target to perform either blockio or fileio.
# Optionally, auto can be set and Cinder will autodetect the type of backing
# device. (string value)
# Allowed values: blockio, fileio, auto
#iscsi_iotype = fileio

# The default block size used when copying/clearing volumes (string value)
#volume_dd_blocksize = 1M

# The blkio cgroup name to be used to limit bandwidth of volume copy (string
# value)
#volume_copy_blkio_cgroup_name = cinder-volume-copy

# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
#volume_copy_bps_limit = 0

# Sets the behavior of the iSCSI target to either perform write-back(on) or
# write-through(off). This parameter is valid if iscsi_helper is set to tgtadm.
# (string value)
# Allowed values: on, off
#iscsi_write_cache = on

# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
# specify backing device flags using bsoflags option. The specified string is
# passed as is to the underlying tool. (string value)
#iscsi_target_flags =

# Determines the iSCSI protocol for new iSCSI volumes, created with tgtadm or
# lioadm target helpers. In order to enable RDMA, this parameter should be set
# with the value "iser". The supported iSCSI protocol values are "iscsi" and
# "iser". (string value)
# Allowed values: iscsi, iser
#iscsi_protocol = iscsi
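
# Example (illustrative only, not a default): a backend using the LIO target
# with RDMA might combine the target options above as follows:
#
#   iscsi_helper = lioadm
#   iscsi_protocol = iser
#   volumes_dir = $state_path/volumes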

# The path to the client certificate key for verification, if the driver
# supports it. (string value)
#driver_client_cert_key = <None>

# The path to the client certificate for verification, if the driver supports
# it. (string value)
#driver_client_cert = <None>

# Tell driver to use SSL for connection to backend storage if the driver
# supports it. (boolean value)
#driver_use_ssl = false

# Float representation of the over subscription ratio when thin provisioning is
# involved. The default ratio is 20.0, meaning provisioned capacity can be 20
# times the total physical capacity. If the ratio is 10.5, provisioned capacity
# can be 10.5 times the total physical capacity. A ratio of 1.0 means
# provisioned capacity cannot exceed the total physical capacity. The ratio
# must be at least 1.0. (floating point value)
#max_over_subscription_ratio = 20.0

# Certain iSCSI targets have predefined target names; the SCST target driver
# uses this name. (string value)
#scst_target_iqn_name = <None>

# SCST target implementation can choose from multiple SCST target drivers.
# (string value)
#scst_target_driver = iscsi

# Option to enable/disable CHAP authentication for targets. (boolean value)
# Deprecated group/name - [DEFAULT]/eqlx_use_chap
#use_chap_auth = false

# CHAP user name. (string value)
# Deprecated group/name - [DEFAULT]/eqlx_chap_login
#chap_username =

# Password for specified CHAP account name. (string value)
# Deprecated group/name - [DEFAULT]/eqlx_chap_password
#chap_password =
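
# Example (illustrative credentials, not defaults): CHAP authentication for
# targets can be enabled with:
#
#   use_chap_auth = true
#   chap_username = cinder-chap-user
#   chap_password = CHAP_PASS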

# Namespace for driver private data values to be saved in. (string value)
#driver_data_namespace = <None>

# String representation for an equation that will be used to filter hosts. Only
# used when the driver filter is set to be used by the Cinder scheduler.
# (string value)
#filter_function = <None>

# String representation for an equation that will be used to determine the
# goodness of a host. Only used when the goodness weigher is set to be used by
# the Cinder scheduler. (string value)
#goodness_function = <None>

# If set to True the http client will validate the SSL certificate of the
# backend endpoint. (boolean value)
#driver_ssl_cert_verify = false

# Can be used to specify a non default path to a CA_BUNDLE file or directory
# with certificates of trusted CAs, which will be used to validate the backend
# (string value)
#driver_ssl_cert_path = <None>

# List of options that control which trace info is written to the DEBUG log
# level to assist developers. Valid values are method and api. (list value)
#trace_flags = <None>

# Multi opt of dictionaries to represent a replication target device.  This
# option may be specified multiple times in a single config section to specify
# multiple replication target devices.  Each entry takes the standard dict
# config form: replication_device =
# target_device_id:<required>,key1:value1,key2:value2... (dict value)
#replication_device = <None>
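
# Example (illustrative only; the keys after target_device_id are
# driver-specific and assumed here): two replication targets could be
# specified by repeating the option:
#
#   replication_device = target_device_id:array-1,san_ip:203.0.113.10
#   replication_device = target_device_id:array-2,san_ip:203.0.113.11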

# If set to True, upload-to-image in raw format will create a cloned volume and
# register its location to the image service, instead of uploading the volume
# content. The cinder backend and locations support must be enabled in the
# image service, and glance_api_version must be set to 2. (boolean value)
#image_upload_use_cinder_backend = false

# If set to True, the image volume created by upload-to-image will be placed in
# the internal tenant. Otherwise, the image volume is created in the current
# context's tenant. (boolean value)
#image_upload_use_internal_tenant = false

# Enable the image volume cache for this backend. (boolean value)
#image_volume_cache_enabled = false

# Max size of the image volume cache for this backend in GB. 0 => unlimited.
# (integer value)
#image_volume_cache_max_size_gb = 0

# Max number of entries allowed in the image volume cache. 0 => unlimited.
# (integer value)
#image_volume_cache_max_count = 0

# Report to clients of Cinder that the backend supports discard (aka.
# trim/unmap). This will not actually change the behavior of the backend or the
# client directly, it will only notify that it can be used. (boolean value)
#report_discard_supported = false

# Protocol for transferring data between host and storage back-end. (string
# value)
# Allowed values: iscsi, fc
#storage_protocol = iscsi

# If this is set to True, the backup_use_temp_snapshot path will be used during
# the backup. Otherwise, it will use backup_use_temp_volume path. (boolean
# value)
#backup_use_temp_snapshot = false

# Set this to True when you want to allow an unsupported driver to start.
# Drivers that haven't maintained a working CI system and testing are marked as
# unsupported until CI is working again.  This also marks a driver as
# deprecated and may be removed in the next release. (boolean value)
#enable_unsupported_driver = false

# The maximum number of times to rescan the iSER target to find a volume
# (integer value)
#num_iser_scan_tries = 3

# Prefix for iSER volumes (string value)
#iser_target_prefix = iqn.2010-10.org.openstack:

# The IP address that the iSER daemon is listening on (string value)
#iser_ip_address = $my_ip

# The port that the iSER daemon is listening on (port value)
# Minimum value: 0
# Maximum value: 65535
#iser_port = 3260

# The name of the iSER target user-land tool to use (string value)
#iser_helper = tgtadm

# Public url to use for versions endpoint. The default is None, which will use
# the request's host_url attribute to populate the URL base. If Cinder is
# operating behind a proxy, you will want to change this to represent the
# proxy's URL. (string value)
#public_endpoint = <None>

# Nimble Controller pool name (string value)
#nimble_pool_name = default

# Nimble Subnet Label (string value)
#nimble_subnet_label = *

# Path to store VHD backed volumes (string value)
#windows_iscsi_lun_path = C:\iSCSIVirtualDisks

# VNX authentication scope type. By default, the value is global. (string
# value)
#storage_vnx_authentication_type = global

# Directory path that contains the VNX security file. Make sure the security
# file is generated first. (string value)
#storage_vnx_security_file_dir = <None>

# Naviseccli Path. (string value)
#naviseccli_path = <None>

# Comma-separated list of storage pool names to be used. (list value)
#storage_vnx_pool_names = <None>

# Default timeout for CLI operations in minutes. For example, LUN migration is
# a typical long running operation, which depends on the LUN size and the load
# of the array. An upper bound in the specific deployment can be set to avoid
# unnecessary long wait. By default, it is 365 days long. (integer value)
#default_timeout = 31536000

# Default max number of LUNs in a storage group. By default, the value is 255.
# (integer value)
#max_luns_per_storage_group = 255

# Destroy the storage group when the last LUN is removed from it. By default,
# the value is False. (boolean value)
#destroy_empty_storage_group = false

# Mapping between hostname and its iSCSI initiator IP addresses. (string value)
#iscsi_initiators = <None>

# Comma separated iSCSI or FC ports to be used in Nova or Cinder. (list value)
#io_port_list = <None>

# Automatically register initiators. By default, the value is False. (boolean
# value)
#initiator_auto_registration = false

# Automatically deregister initiators after the related storage group is
# destroyed. By default, the value is False. (boolean value)
#initiator_auto_deregistration = false

# Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is
# reached. By default, the value is False. (boolean value)
#check_max_pool_luns_threshold = false

# Delete a LUN even if it is in Storage Groups. By default, the value is False.
# (boolean value)
#force_delete_lun_in_storagegroup = false

# Force LUN creation even if the full threshold of pool is reached. By default,
# the value is False. (boolean value)
#ignore_pool_full_threshold = false

# Pool or Vdisk name to use for volume creation. (string value)
#hpmsa_backend_name = A

# linear (for Vdisk) or virtual (for Pool). (string value)
# Allowed values: linear, virtual
#hpmsa_backend_type = virtual

# HPMSA API interface protocol. (string value)
# Allowed values: http, https
#hpmsa_api_protocol = https

# Whether to verify HPMSA array SSL certificate. (boolean value)
#hpmsa_verify_certificate = false

# HPMSA array SSL certificate path. (string value)
#hpmsa_verify_certificate_path = <None>

# List of comma-separated target iSCSI IP addresses. (list value)
#hpmsa_iscsi_ips =

# A list of url schemes that can be downloaded directly via the direct_url.
# Currently supported schemes: [file]. (list value)
#allowed_direct_url_schemes =

# Info to match when looking for glance in the service catalog. Format is:
# separated values of the form: <service_type>:<service_name>:<endpoint_type> -
# Only used if glance_api_servers are not provided. (string value)
#glance_catalog_info = image:glance:publicURL

# Default core properties of image (list value)
#glance_core_properties = checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size

# HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos (string
# value)
# Deprecated group/name - [DEFAULT]/hplefthand_api_url
#hpelefthand_api_url = <None>

# HPE LeftHand Super user username (string value)
# Deprecated group/name - [DEFAULT]/hplefthand_username
#hpelefthand_username = <None>

# HPE LeftHand Super user password (string value)
# Deprecated group/name - [DEFAULT]/hplefthand_password
#hpelefthand_password = <None>

# HPE LeftHand cluster name (string value)
# Deprecated group/name - [DEFAULT]/hplefthand_clustername
#hpelefthand_clustername = <None>

# Configure CHAP authentication for iSCSI connections (Default: Disabled)
# (boolean value)
# Deprecated group/name - [DEFAULT]/hplefthand_iscsi_chap_enabled
#hpelefthand_iscsi_chap_enabled = false

# Enable HTTP debugging to LeftHand (boolean value)
# Deprecated group/name - [DEFAULT]/hplefthand_debug
#hpelefthand_debug = false

# Port number of SSH service. (port value)
# Minimum value: 0
# Maximum value: 65535
#hpelefthand_ssh_port = 16022

# Name for the VG that will contain exported volumes (string value)
#volume_group = cinder-volumes

# If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors
# + 2 PVs with available space (integer value)
#lvm_mirrors = 0

# Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to
# thin if thin is supported. (string value)
# Allowed values: default, thin, auto
#lvm_type = default

# LVM conf file to use for the LVM driver in Cinder; this setting is ignored if
# the specified file does not exist (You can also specify 'None' to not use a
# conf file even if one exists). (string value)
#lvm_conf_file = /etc/cinder/lvm.conf

# max_over_subscription_ratio setting for the LVM driver.  If set, this takes
# precedence over the general max_over_subscription_ratio option.  If None, the
# general option is used. (floating point value)
#lvm_max_over_subscription_ratio = 1.0
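
# Example (illustrative only; the section name and volume_driver path are
# assumptions, not options documented above): a thin-provisioned LVM backend
# might look like:
#
#   [lvmdriver-1]
#   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#   volume_group = cinder-volumes
#   lvm_type = thin
#   iscsi_helper = tgtadm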

# Use this file for cinder EMC plugin config data (string value)
#cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

# IP address or Hostname of NAS system. (string value)
# Deprecated group/name - [DEFAULT]/nas_ip
#nas_host =

# User name to connect to NAS system. (string value)
#nas_login = admin

# Password to connect to NAS system. (string value)
#nas_password =

# SSH port to use to connect to NAS system. (port value)
# Minimum value: 0
# Maximum value: 65535
#nas_ssh_port = 22

# Filename of private key to use for SSH authentication. (string value)
#nas_private_key =

# Allow network-attached storage systems to operate in a secure environment
# where root level access is not permitted. If set to False, access is as the
# root user and insecure. If set to True, access is not as root. If set to
# auto, a check is done to determine if this is a new installation: True is
# used if so, otherwise False. Default is auto. (string value)
#nas_secure_file_operations = auto

# Set more secure file permissions on network-attached storage volume files to
# restrict broad other/world access. If set to False, volumes are created with
# open permissions. If set to True, volumes are created with permissions for
# the cinder user and group (660). If set to auto, a check is done to determine
# if this is a new installation: True is used if so, otherwise False. Default
# is auto. (string value)
#nas_secure_file_permissions = auto

# Path to the share to use for storing Cinder volumes. For example:
# "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 .
# (string value)
#nas_share_path =

# Options used to mount the storage backend file system where Cinder volumes
# are stored. (string value)
#nas_mount_options = <None>

# Provisioning type that will be used when creating volumes. (string value)
# Allowed values: thin, thick
# Deprecated group/name - [DEFAULT]/glusterfs_sparsed_volumes
# Deprecated group/name - [DEFAULT]/glusterfs_qcow2_volumes
#nas_volume_prov_type = thin
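
# Example (illustrative values): an NFS-backed NAS configuration using the
# options above might be:
#
#   nas_host = 10.0.5.10
#   nas_share_path = /srv/export1
#   nas_secure_file_operations = auto
#   nas_secure_file_permissions = auto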

# XMS cluster id in multi-cluster environment (string value)
#xtremio_cluster_name =

# Number of retries in case array is busy (integer value)
#xtremio_array_busy_retry_count = 5

# Interval between retries in case array is busy (integer value)
#xtremio_array_busy_retry_interval = 5

# Number of volumes created from each cached glance image (integer value)
#xtremio_volumes_per_glance_cache = 100

# The GCS bucket to use. (string value)
#backup_gcs_bucket = <None>

# The size in bytes of GCS backup objects. (integer value)
#backup_gcs_object_size = 52428800

# The size in bytes that changes are tracked for incremental backups.
# backup_gcs_object_size has to be multiple of backup_gcs_block_size. (integer
# value)
#backup_gcs_block_size = 32768

# GCS object will be downloaded in chunks of bytes. (integer value)
#backup_gcs_reader_chunk_size = 2097152

# GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the
# file is to be uploaded as a single chunk. (integer value)
#backup_gcs_writer_chunk_size = 2097152

# Number of times to retry. (integer value)
#backup_gcs_num_retries = 3

# List of GCS error codes. (list value)
#backup_gcs_retry_error_codes = 429

# Location of GCS bucket. (string value)
#backup_gcs_bucket_location = US

# Storage class of GCS bucket. (string value)
#backup_gcs_storage_class = NEARLINE

# Absolute path of GCS service account credential file. (string value)
#backup_gcs_credential_file = <None>

# Owner project id for GCS bucket. (string value)
#backup_gcs_project_id = <None>

# Http user-agent string for gcs api. (string value)
#backup_gcs_user_agent = gcscinder

# Enable or Disable the timer to send the periodic progress notifications to
# Ceilometer when backing up the volume to the GCS backend storage. The default
# value is True to enable the timer. (boolean value)
#backup_gcs_enable_progress_timer = true

# URL for http proxy access. (uri value)
#backup_gcs_proxy_url = <None>
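
# Example (illustrative only; the backup_driver module path is an assumption
# modeled on the Swift default shown elsewhere in this file): backing up to
# GCS might be configured as:
#
#   backup_driver = cinder.backup.drivers.google
#   backup_gcs_bucket = my-cinder-backups
#   backup_gcs_project_id = my-gcs-project
#   backup_gcs_credential_file = /etc/cinder/gcs-credentials.json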

# Treat X-Forwarded-For as the canonical remote address. Only enable this if
# you have a sanitizing proxy. (boolean value)
#use_forwarded_for = false

# Serial number of storage system (string value)
#hitachi_serial_number = <None>

# Name of an array unit (string value)
#hitachi_unit_name = <None>

# Pool ID of storage system (integer value)
#hitachi_pool_id = <None>

# Thin pool ID of storage system (integer value)
#hitachi_thin_pool_id = <None>

# Range of logical device of storage system (string value)
#hitachi_ldev_range = <None>

# Default copy method of storage system (string value)
#hitachi_default_copy_method = FULL

# Copy speed of storage system (integer value)
#hitachi_copy_speed = 3

# Interval to check copy (integer value)
#hitachi_copy_check_interval = 3

# Interval to check copy asynchronously (integer value)
#hitachi_async_copy_check_interval = 10

# Control port names for HostGroup or iSCSI Target (string value)
#hitachi_target_ports = <None>

# Range of group number (string value)
#hitachi_group_range = <None>

# Request for creating HostGroup or iSCSI Target (boolean value)
#hitachi_group_request = false

# Comma-separated list of Infortrend raid pool names. (string value)
#infortrend_pools_name =

# The Infortrend CLI absolute path. By default, it is at
# /opt/bin/Infortrend/raidcmd_ESDS10.jar (string value)
#infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar

# Maximum number of CLI retries. Default is 5. (integer value)
#infortrend_cli_max_retries = 5

# Default timeout for CLI copy operations in minutes. Support: migrate volume,
# create cloned volume and create volume from snapshot. By Default, it is 30
# minutes. (integer value)
#infortrend_cli_timeout = 30

# Infortrend raid channel ID list on Slot A for OpenStack usage. It is
# separated with comma. By default, it is the channel 0~7. (string value)
#infortrend_slots_a_channels_id = 0,1,2,3,4,5,6,7

# Infortrend raid channel ID list on Slot B for OpenStack usage. It is
# separated with comma. By default, it is the channel 0~7. (string value)
#infortrend_slots_b_channels_id = 0,1,2,3,4,5,6,7

# Provisioning type for the volume. By default, full provisioning is used. The
# supported options are full and thin. (string value)
#infortrend_provisioning = full

# Tiering level for the volume. By default, level 0 is used. The supported
# levels are 0, 2, 3, and 4. (string value)
#infortrend_tiering = 0

# DEPRECATED: Legacy configuration file for HNAS iSCSI Cinder plugin. This is
# not needed if you fill all configuration on cinder.conf (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi_conf.xml

# Whether CHAP authentication is enabled in the iSCSI target. (boolean value)
#hnas_chap_enabled = true

# Service 0 iSCSI IP (IP address value)
#hnas_svc0_iscsi_ip = <None>

# Service 1 iSCSI IP (IP address value)
#hnas_svc1_iscsi_ip = <None>

# Service 2 iSCSI IP (IP address value)
#hnas_svc2_iscsi_ip = <None>

# Service 3 iSCSI IP (IP address value)
#hnas_svc3_iscsi_ip = <None>

# The name of ceph cluster (string value)
#rbd_cluster_name = ceph

# The RADOS pool where rbd volumes are stored (string value)
#rbd_pool = rbd

# The RADOS client name for accessing rbd volumes - only set when using cephx
# authentication (string value)
#rbd_user = <None>

# Path to the ceph configuration file (string value)
#rbd_ceph_conf =

# Flatten volumes created from snapshots to remove dependency from volume to
# snapshot (boolean value)
#rbd_flatten_volume_from_snapshot = false

# The libvirt uuid of the secret for the rbd_user volumes (string value)
#rbd_secret_uuid = <None>

# Directory where temporary image files are stored when the volume driver does
# not write them directly to the volume.  Warning: this option is now
# deprecated, please use image_conversion_dir instead. (string value)
#volume_tmp_dir = <None>

# Maximum number of nested volume clones that are taken before a flatten
# occurs. Set to 0 to disable cloning. (integer value)
#rbd_max_clone_depth = 5

# Volumes will be chunked into objects of this size (in megabytes). (integer
# value)
#rbd_store_chunk_size = 4

# Timeout value (in seconds) used when connecting to ceph cluster. If value <
# 0, no timeout is set and default librados value is used. (integer value)
#rados_connect_timeout = -1

# Number of retries if connection to ceph cluster failed. (integer value)
#rados_connection_retries = 3

# Interval value (in seconds) between connection retries to ceph cluster.
# (integer value)
#rados_connection_interval = 5
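
# Example (illustrative only; the section name, volume_driver path, and secret
# UUID are assumptions): a Ceph RBD backend might be configured as:
#
#   [ceph]
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   rbd_pool = volumes
#   rbd_ceph_conf = /etc/ceph/ceph.conf
#   rbd_user = cinder
#   rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337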

# The hostname (or IP address) for the storage system (string value)
#tintri_server_hostname = <None>

# User name for the storage system (string value)
#tintri_server_username = <None>

# Password for the storage system (string value)
#tintri_server_password = <None>

# API version for the storage system (string value)
#tintri_api_version = v310

# Delete unused image snapshots older than mentioned days (integer value)
#tintri_image_cache_expiry_days = 30

# Path to image nfs shares file (string value)
#tintri_image_shares_config = <None>

# Backup services use the same backend. (boolean value)
#backup_use_same_host = false

# Instance numbers for HORCM (string value)
#hitachi_horcm_numbers = 200,201

# Username of storage system for HORCM (string value)
#hitachi_horcm_user = <None>

# Password of storage system for HORCM (string value)
#hitachi_horcm_password = <None>

# Add to HORCM configuration (boolean value)
#hitachi_horcm_add_conf = true

# Timeout until a resource lock is released, in seconds. The value must be
# between 0 and 7200. (integer value)
#hitachi_horcm_resource_lock_timeout = 600

# Driver to use for backups. (string value)
#backup_driver = cinder.backup.drivers.swift

# Offload pending backup delete during backup service startup. If false, the
# backup service will remain down until all pending backups are deleted.
# (boolean value)
#backup_service_inithost_offload = true

# Comma separated list of storage system storage pools for volumes. (list
# value)
#storwize_svc_volpool_name = volpool

# Storage system space-efficiency parameter for volumes (percentage) (integer
# value)
# Minimum value: -1
# Maximum value: 100
#storwize_svc_vol_rsize = 2

# Storage system threshold for volume capacity warnings (percentage) (integer
# value)
# Minimum value: -1
# Maximum value: 100
#storwize_svc_vol_warning = 0

# Storage system autoexpand parameter for volumes (True/False) (boolean value)
#storwize_svc_vol_autoexpand = true

# Storage system grain size parameter for volumes (32/64/128/256) (integer
# value)
#storwize_svc_vol_grainsize = 256

# Storage system compression option for volumes (boolean value)
#storwize_svc_vol_compression = false

# Enable Easy Tier for volumes (boolean value)
#storwize_svc_vol_easytier = true

# The I/O group in which to allocate volumes (integer value)
#storwize_svc_vol_iogrp = 0

# Maximum number of seconds to wait for FlashCopy to be prepared. (integer
# value)
# Minimum value: 1
# Maximum value: 600
#storwize_svc_flashcopy_timeout = 120

# DEPRECATED: This option no longer has any effect. It is deprecated and will
# be removed in the next release. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#storwize_svc_multihostmap_enabled = true

# Allow tenants to specify QOS on create (boolean value)
#storwize_svc_allow_tenant_qos = false

# If operating in stretched cluster mode, specify the name of the pool in which
# mirrored copies are stored. Example: "pool2" (string value)
#storwize_svc_stretched_cluster_partner = <None>

# Specifies secondary management IP or hostname to be used if san_ip is invalid
# or becomes inaccessible. (string value)
#storwize_san_secondary_ip = <None>

# Specifies that the volume not be formatted during creation. (boolean value)
#storwize_svc_vol_nofmtdisk = false

# Specifies the Storwize FlashCopy copy rate to be used when creating a full
# volume copy. The default rate is 50, and the valid rates are 1-100.
# (integer value)
# Minimum value: 1
# Maximum value: 100
#storwize_svc_flashcopy_rate = 50

# Request for FC Zone creating HostGroup (boolean value)
#hitachi_zoning_request = false

# Number of volumes allowed per project (integer value)
#quota_volumes = 10

# Number of volume snapshots allowed per project (integer value)
#quota_snapshots = 10

# Number of consistencygroups allowed per project (integer value)
#quota_consistencygroups = 10

# Number of groups allowed per project (integer value)
#quota_groups = 10

# Total amount of storage, in gigabytes, allowed for volumes and snapshots per
# project (integer value)
#quota_gigabytes = 1000

# Number of volume backups allowed per project (integer value)
#quota_backups = 10

# Total amount of storage, in gigabytes, allowed for backups per project
# (integer value)
#quota_backup_gigabytes = 1000

# Number of seconds until a reservation expires (integer value)
#reservation_expire = 86400

# Count of reservations until usage is refreshed (integer value)
#until_refresh = 0

# Number of seconds between subsequent usage refreshes (integer value)
#max_age = 0

# Default driver to use for quota checks (string value)
#quota_driver = cinder.quota.DbQuotaDriver

# Enables or disables use of default quota class with default quota. (boolean
# value)
#use_default_quota_class = true

# Max size allowed per volume, in gigabytes (integer value)
#per_volume_size_limit = -1
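
# Example (illustrative values): a deployment could override the default
# per-project quotas with:
#
#   quota_volumes = 20
#   quota_snapshots = 20
#   quota_gigabytes = 2000
#   per_volume_size_limit = 500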

# The configuration file for the Cinder Huawei driver. (string value)
#cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml

# The remote device hypermetro will use. (string value)
#hypermetro_devices = <None>

# The remote metro device san user. (string value)
#metro_san_user = <None>

# The remote metro device san password. (string value)
#metro_san_password = <None>

# The remote metro device domain name. (string value)
#metro_domain_name = <None>

# The remote metro device request url. (string value)
#metro_san_address = <None>

# The remote metro device pool names. (string value)
#metro_storage_pools = <None>

# Volume on Synology storage to be used for creating lun. (string value)
#synology_pool_name =

# Management port for Synology storage. (port value)
# Minimum value: 0
# Maximum value: 65535
#synology_admin_port = 5000

# Administrator of Synology storage. (string value)
#synology_username = admin

# Password of administrator for logging in to Synology storage. (string value)
#synology_password =

# Whether to do certificate validation if $driver_use_ssl is True. (boolean
# value)
#synology_ssl_verify = true

# One-time password of the administrator for logging in to Synology storage if
# OTP is enabled. (string value)
#synology_one_time_pass = <None>

# Device ID used to skip the one-time password check when logging in to
# Synology storage if OTP is enabled. (string value)
#synology_device_id = <None>

# Storage Center System Serial Number (integer value)
#dell_sc_ssn = 64702

# Dell API port (port value)
# Minimum value: 0
# Maximum value: 65535
#dell_sc_api_port = 3033

# Name of the server folder to use on the Storage Center (string value)
#dell_sc_server_folder = openstack

# Name of the volume folder to use on the Storage Center (string value)
#dell_sc_volume_folder = openstack

# Enable HTTPS SC certificate verification (boolean value)
#dell_sc_verify_cert = false

# IP address of secondary DSM controller (string value)
#secondary_san_ip =

# Secondary DSM user name (string value)
#secondary_san_login = Admin

# Secondary DSM user password (string value)
#secondary_san_password =

# Secondary Dell API port (port value)
# Minimum value: 0
# Maximum value: 65535
#secondary_sc_api_port = 3033

# Domain IP to be excluded from iSCSI returns. (IP address value)
#excluded_domain_ip = <None>

# Server OS type to use when creating a new server on the Storage Center.
# (string value)
#dell_server_os = Red Hat Linux 6.x

# Which filter class names to use for filtering hosts when not specified in the
# request. (list value)
#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter

# Which weigher class names to use for weighing hosts. (list value)
#scheduler_default_weighers = CapacityWeigher

# Which handler to use for selecting the host/pool after weighing (string
# value)
#scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler

# Default scheduler driver to use (string value)
#scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
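
# Example (illustrative only): scheduling on capacity alone while keeping
# availability-zone awareness could be expressed as:
#
#   scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter
#   scheduler_default_weighers = CapacityWeigher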

# Base dir containing mount point for NFS share. (string value)
#backup_mount_point_base = $state_path/backup_mount

# NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format.
# (string value)
#backup_share = <None>

# Mount options passed to the NFS client. See NFS man page for details. (string
# value)
#backup_mount_options = <None>
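
# Example (illustrative only; the backup_driver module path is an assumption
# modeled on the Swift default shown elsewhere in this file): backing up to an
# NFS share might be configured as:
#
#   backup_driver = cinder.backup.drivers.nfs
#   backup_share = 10.0.5.20:/srv/backup
#   backup_mount_options = vers=4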

# IP address/hostname of Blockbridge API. (string value)
#blockbridge_api_host = <None>

# Override HTTPS port to connect to Blockbridge API server. (integer value)
#blockbridge_api_port = <None>

# Blockbridge API authentication scheme (token or password) (string value)
# Allowed values: token, password
#blockbridge_auth_scheme = token

# Blockbridge API token (for auth scheme 'token') (string value)
#blockbridge_auth_token = <None>

# Blockbridge API user (for auth scheme 'password') (string value)
#blockbridge_auth_user = <None>

# Blockbridge API password (for auth scheme 'password') (string value)
#blockbridge_auth_password = <None>

# Defines the set of exposed pools and their associated backend query strings
# (dict value)
#blockbridge_pools = OpenStack:+openstack

# Default pool name if unspecified. (string value)
#blockbridge_default_pool = <None>

# Absolute path to scheduler configuration JSON file. (string value)
#scheduler_json_config_location =

# Data path IP address (string value)
#zfssa_data_ip = <None>

# HTTPS port number (string value)
#zfssa_https_port = 443

# Options to be passed while mounting share over nfs (string value)
#zfssa_nfs_mount_options =

# Storage pool name. (string value)
#zfssa_nfs_pool =

# Project name. (string value)
#zfssa_nfs_project = NFSProject

# Share name. (string value)
#zfssa_nfs_share = nfs_share

# Data compression. (string value)
# Allowed values: off, lzjb, gzip-2, gzip, gzip-9
#zfssa_nfs_share_compression = off

# Synchronous write bias-latency, throughput. (string value)
# Allowed values: latency, throughput
#zfssa_nfs_share_logbias = latency

# Name of directory inside zfssa_nfs_share where cache volumes are stored.
# (string value)
#zfssa_cache_directory = os-cinder-cache

# The flag of thin storage allocation. (boolean value)
#dsware_isthin = false

# FusionStorage manager IP address for cinder-volume. (string value)
#dsware_manager =

# FusionStorage agent IP address range. (string value)
#fusionstorageagent =

# Pool type, like sata-2copy. (string value)
#pool_type = default

# Pool id permit to use. (list value)
#pool_id_filter =

# Timeout for creating a cloned volume. (integer value)
#clone_volume_timeout = 680

# DEPRECATED: If volume-type name contains this substring, a nodedup volume
# will be created; otherwise, a dedup volume will be created. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option is deprecated in favour of 'kaminario:thin_prov_type' in
# extra-specs and will be removed in the next release.
#kaminario_nodedup_substring = K2-nodedup

# The IP of DMS client socket server (IP address value)
#disco_client = 127.0.0.1

# The port to connect DMS client socket server (port value)
# Minimum value: 0
# Maximum value: 65535
#disco_client_port = 9898

# Path to the wsdl file to communicate with DISCO request manager (string
# value)
#disco_wsdl_path = /etc/cinder/DISCOService.wsdl

# Prefix added to the volume name to differentiate DISCO volumes created
# through OpenStack from other volumes (string value)
#volume_name_prefix = openstack-

# How long to check whether a snapshot is finished before giving up (integer
# value)
#snapshot_check_timeout = 3600

# How long we check whether a restore is finished before we give up (integer
# value)
#restore_check_timeout = 3600

# How long we check whether a clone is finished before we give up (integer
# value)
#clone_check_timeout = 3600

# How long we wait before retrying to get an item detail (integer value)
#retry_interval = 1

# Space network name to use for data transfer (string value)
#hgst_net = Net 1 (IPv4)

# Comma-separated list of Space storage servers:devices, for example:
# os1_stor:gbd0,os2_stor:gbd0 (string value)
#hgst_storage_servers = os:gbd0

# Should spaces be redundantly stored (1/0) (string value)
#hgst_redundancy = 0

# User to own created spaces (string value)
#hgst_space_user = root

# Group to own created spaces (string value)
#hgst_space_group = disk

# UNIX mode for created spaces (string value)
#hgst_space_mode = 0600

# message minimum life in seconds. (integer value)
#message_ttl = 2592000

# Directory used for temporary storage during image conversion (string value)
#image_conversion_dir = $state_path/conversion

# Match this value when searching for nova in the service catalog. Format is
# colon-separated values of the form
# <service_type>:<service_name>:<endpoint_type> (string value)
#nova_catalog_info = compute:Compute Service:publicURL

# Same as nova_catalog_info, but for admin endpoint. (string value)
#nova_catalog_admin_info = compute:Compute Service:adminURL

# Override service catalog lookup with template for nova endpoint e.g.
# http://localhost:8774/v2/%(project_id)s (string value)
#nova_endpoint_template = <None>

# Same as nova_endpoint_template, but for admin endpoint. (string value)
#nova_endpoint_admin_template = <None>

# Region name of this node (string value)
#os_region_name = <None>

# Location of ca certificates file to use for nova client requests. (string
# value)
#nova_ca_certificates_file = <None>

# Allow to perform insecure SSL requests to nova (boolean value)
#nova_api_insecure = false

# DEPRECATED: This option no longer has any effect. It is deprecated and will
# be removed in the next release. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#flashsystem_multipath_enabled = false

# DPL pool uuid in which DPL volumes are stored. (string value)
#dpl_pool =

# DPL port number. (port value)
# Minimum value: 0
# Maximum value: 65535
#dpl_port = 8357

# Request for FC Zone creating host group (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_zoning_request
#hpexp_zoning_request = false

# Type of storage command line interface (string value)
# Deprecated group/name - [DEFAULT]/hpxp_storage_cli
#hpexp_storage_cli = <None>

# ID of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_storage_id
#hpexp_storage_id = <None>

# Pool of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_pool
#hpexp_pool = <None>

# Thin pool of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_thin_pool
#hpexp_thin_pool = <None>

# Logical device range of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_ldev_range
#hpexp_ldev_range = <None>

# Default copy method of storage system. There are two valid values: "FULL"
# specifies a full copy; "THIN" specifies a thin copy. The default value is
# "FULL". (string value)
# Deprecated group/name - [DEFAULT]/hpxp_default_copy_method
#hpexp_default_copy_method = FULL

# Copy speed of storage system (integer value)
# Deprecated group/name - [DEFAULT]/hpxp_copy_speed
#hpexp_copy_speed = 3

# Interval to check copy (integer value)
# Deprecated group/name - [DEFAULT]/hpxp_copy_check_interval
#hpexp_copy_check_interval = 3

# Interval to check copy asynchronously (integer value)
# Deprecated group/name - [DEFAULT]/hpxp_async_copy_check_interval
#hpexp_async_copy_check_interval = 10

# Target port names for host group or iSCSI target (list value)
# Deprecated group/name - [DEFAULT]/hpxp_target_ports
#hpexp_target_ports = <None>

# Target port names of compute node for host group or iSCSI target (list value)
# Deprecated group/name - [DEFAULT]/hpxp_compute_target_ports
#hpexp_compute_target_ports = <None>

# Request for creating host group or iSCSI target (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_group_request
#hpexp_group_request = false

# Instance numbers for HORCM (list value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_numbers
#hpexp_horcm_numbers = 200,201

# Username of storage system for HORCM (string value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_user
#hpexp_horcm_user = <None>

# Add to HORCM configuration (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_add_conf
#hpexp_horcm_add_conf = true

# Resource group name of storage system for HORCM (string value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_resource_name
#hpexp_horcm_resource_name = meta_resource

# Only discover a specific name of host group or iSCSI target (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_name_only_discovery
#hpexp_horcm_name_only_discovery = false

# Add CHAP user (boolean value)
#hitachi_add_chap_user = false

# iSCSI authentication method (string value)
#hitachi_auth_method = <None>

# iSCSI authentication username (string value)
#hitachi_auth_user = HBSD-CHAP-user

# iSCSI authentication password (string value)
#hitachi_auth_password = HBSD-CHAP-password

# Driver to use for volume creation (string value)
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

# Timeout for creating the volume to migrate to when performing volume
# migration (seconds) (integer value)
#migration_create_volume_timeout_secs = 300

# Offload pending volume delete during volume service startup (boolean value)
#volume_service_inithost_offload = false

# FC Zoning mode configured (string value)
#zoning_mode = <None>

# User defined capabilities, a JSON formatted string specifying key/value
# pairs. The key/value pairs can be used by the CapabilitiesFilter to select
# between backends when requests specify volume types. For example, specifying
# a service level or the geographical location of a backend, then creating a
# volume type to allow the user to select by these different properties.
# (string value)
#extra_capabilities = {}

# Suppress requests library SSL certificate warnings. (boolean value)
#suppress_requests_ssl_warnings = false

# Default iSCSI Port ID of FlashSystem. (Default port is 0.) (integer value)
#flashsystem_iscsi_portid = 0

# Create volumes in this pool (string value)
#tegile_default_pool = <None>

# Create volumes in this project (string value)
#tegile_default_project = <None>

# Connection protocol should be FC. (Default is FC.) (string value)
#flashsystem_connection_protocol = FC

# Allows vdisk to multi host mapping. (Default is True) (boolean value)
#flashsystem_multihostmap_enabled = true

# Enables the Force option on upload_to_image. This enables running
# upload_volume on in-use volumes for backends that support it. (boolean value)
#enable_force_upload = false

# Create volume from snapshot at the host where snapshot resides (boolean
# value)
#snapshot_same_host = true

# Ensure that the new volumes are the same AZ as snapshot or source volume
# (boolean value)
#cloned_volume_same_az = true

# Cache volume availability zones in memory for the provided duration in
# seconds (integer value)
#az_cache_duration = 3600

# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1 (string value)
# Deprecated group/name - [DEFAULT]/hp3par_api_url
#hpe3par_api_url =

# 3PAR username with the 'edit' role (string value)
# Deprecated group/name - [DEFAULT]/hp3par_username
#hpe3par_username =

# 3PAR password for the user specified in hpe3par_username (string value)
# Deprecated group/name - [DEFAULT]/hp3par_password
#hpe3par_password =

# List of the CPG(s) to use for volume creation (list value)
# Deprecated group/name - [DEFAULT]/hp3par_cpg
#hpe3par_cpg = OpenStack

# The CPG to use for Snapshots for volumes. If empty the userCPG will be used.
# (string value)
# Deprecated group/name - [DEFAULT]/hp3par_cpg_snap
#hpe3par_cpg_snap =

# The time in hours to retain a snapshot. You cannot delete the snapshot before
# this time expires. (string value)
# Deprecated group/name - [DEFAULT]/hp3par_snapshot_retention
#hpe3par_snapshot_retention =

# The time in hours after which a snapshot expires and is deleted. This must be
# larger than the retention time. (string value)
# Deprecated group/name - [DEFAULT]/hp3par_snapshot_expiration
#hpe3par_snapshot_expiration =

# Enable HTTP debugging to 3PAR (boolean value)
# Deprecated group/name - [DEFAULT]/hp3par_debug
#hpe3par_debug = false

# List of target iSCSI addresses to use. (list value)
# Deprecated group/name - [DEFAULT]/hp3par_iscsi_ips
#hpe3par_iscsi_ips =

# Enable CHAP authentication for iSCSI connections. (boolean value)
# Deprecated group/name - [DEFAULT]/hp3par_iscsi_chap_enabled
#hpe3par_iscsi_chap_enabled = false
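
# Example (illustrative): a minimal HPE 3PAR iSCSI backend configuration. The
# URL, credentials, CPG name, and IP addresses below are placeholders, not
# defaults:
#
# hpe3par_api_url = https://3par.example.com:8080/api/v1
# hpe3par_username = 3paradm
# hpe3par_password = 3PAR_PASS
# hpe3par_cpg = OpenStackCPG
# hpe3par_iscsi_ips = 10.0.0.10,10.0.0.11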

# Datera API port. (string value)
#datera_api_port = 7717

# Datera API version. (string value)
#datera_api_version = 2

# DEPRECATED: Number of replicas to create of an inode. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#datera_num_replicas = 3

# Timeout for HTTP 503 retry messages (integer value)
#datera_503_timeout = 120

# Interval between 503 retries (integer value)
#datera_503_interval = 5

# True to enable function argument and return value logging (boolean value)
#datera_debug = false

# DEPRECATED: True to set acl 'allow_all' on volumes created (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#datera_acl_allow_all = false

# ONLY FOR DEBUG/TESTING PURPOSES
# True to set replica_count to 1 (boolean value)
#datera_debug_replica_count_override = false

# VPSA - Use ISER instead of iSCSI (boolean value)
#zadara_use_iser = true

# VPSA - Management Host name or IP address (string value)
#zadara_vpsa_host = <None>

# VPSA - Port number (port value)
# Minimum value: 0
# Maximum value: 65535
#zadara_vpsa_port = <None>

# VPSA - Use SSL connection (boolean value)
#zadara_vpsa_use_ssl = false

# VPSA - Username (string value)
#zadara_user = <None>

# VPSA - Password (string value)
#zadara_password = <None>

# VPSA - Storage Pool assigned for volumes (string value)
#zadara_vpsa_poolname = <None>

# VPSA - Default encryption policy for volumes (boolean value)
#zadara_vol_encrypt = false

# VPSA - Default template for VPSA volume names (string value)
#zadara_vol_name_template = OS_%s

# VPSA - Attach snapshot policy for volumes (boolean value)
#zadara_default_snap_policy = false

# List of all available devices (list value)
#available_devices =

# URL to the Quobyte volume e.g., quobyte://<DIR host>/<volume name> (string
# value)
#quobyte_volume_url = <None>

# Path to a Quobyte Client configuration file. (string value)
#quobyte_client_cfg = <None>

# Create volumes as sparse files which take no space. If set to False, the
# volume is created as a regular file; in that case volume creation takes a lot
# of time.
# (boolean value)
#quobyte_sparsed_volumes = true

# Create volumes as QCOW2 files rather than raw files. (boolean value)
#quobyte_qcow2_volumes = true

# Base dir containing the mount point for the Quobyte volume. (string value)
#quobyte_mount_point_base = $state_path/mnt

# File with the list of available vzstorage shares. (string value)
#vzstorage_shares_config = /etc/cinder/vzstorage_shares

# Create volumes as sparsed files which take no space rather than regular files
# when using raw format, in which case volume creation takes a lot of time.
# (boolean value)
#vzstorage_sparsed_volumes = true

# Percent of ACTUAL usage of the underlying volume before no new volumes can be
# allocated to the volume destination. (floating point value)
#vzstorage_used_ratio = 0.95

# Base dir containing mount points for vzstorage shares. (string value)
#vzstorage_mount_point_base = $state_path/mnt

# Mount options passed to the vzstorage client. See section of the pstorage-
# mount man page for details. (list value)
#vzstorage_mount_options = <None>

# Default format that will be used when creating volumes if no volume format is
# specified. (string value)
#vzstorage_default_volume_format = raw

# File with the list of available NFS shares (string value)
#nfs_shares_config = /etc/cinder/nfs_shares

# Create volumes as sparsed files which take no space. If set to False, the
# volume is created as a regular file; in that case volume creation takes a lot
# of time.
# (boolean value)
#nfs_sparsed_volumes = true

# Base dir containing mount points for NFS shares. (string value)
#nfs_mount_point_base = $state_path/mnt

# Mount options passed to the NFS client. See section of the NFS man page for
# details. (string value)
#nfs_mount_options = <None>

# The number of attempts to mount NFS shares before raising an error.  At least
# one attempt will be made to mount an NFS share, regardless of the value
# specified. (integer value)
#nfs_mount_attempts = 3
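
# Example (illustrative): an NFS backend using a shares file and custom mount
# options. The mount options and the share entry are placeholders:
#
# nfs_shares_config = /etc/cinder/nfs_shares
# nfs_mount_options = vers=4.1
#
# where /etc/cinder/nfs_shares would contain one share per line, such as:
# 192.168.1.200:/storage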

#
# From oslo.config
#

# Path to a config file to use. Multiple config files can be specified, with
# values in later files taking precedence. Defaults to %(default)s. (unknown
# value)
#config_file = ~/.project/project.conf,~/project.conf,/etc/project/project.conf,/etc/project.conf

# Path to a config directory to pull *.conf files from. This file set is
# sorted, so as to provide a predictable parse order if individual options are
# over-ridden. The set is parsed after the file(s) specified via previous
# --config-file arguments, hence over-ridden options in the directory take
# precedence. (list value)
#config_dir = <None>

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler will immediately open a new log file at the
# specified path. This makes sense only if the log_file option is specified and
# the Linux platform is used. This option is ignored if log_config_append is
# set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
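
# Example (illustrative): enable debug-level logging to a dedicated log file,
# using the options above. The paths are placeholders:
#
# debug = true
# log_dir = /var/log/cinder
# log_file = cinder.log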

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means not
# keeping a queue when the server side disconnects. False means to keep the
# queue and messages even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false

# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
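
# Example (illustrative): a RabbitMQ transport URL of the form
# rabbit://<user>:<password>@<host>:<port>/<virtual_host>; the credentials and
# host below are placeholders:
#
# transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/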

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack

#
# From oslo.service.periodic_task
#

# Some periodic tasks can be run in a separate process. Should we run them
# here? (boolean value)
#run_external_periodic_tasks = true

#
# From oslo.service.service
#

# Enable eventlet backdoor.  Acceptable values are 0, <port>, and
# <start>:<end>, where 0 results in listening on a random tcp port number;
# <port> results in listening on the specified port number (and not enabling
# backdoor if that port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range of port numbers.
# The chosen port is displayed in the service's log file. (string value)
#backdoor_port = <None>

# Enable eventlet backdoor, using the provided path as a unix socket that can
# receive connections. This option is mutually exclusive with 'backdoor_port'
# in that only one should be provided. If both are provided then the existence
# of this option overrides the usage of that option. (string value)
#backdoor_socket = <None>

# Enables or disables logging values of all registered options when starting a
# service (at DEBUG level). (boolean value)
#log_options = true

# Specify a timeout after which a gracefully shutdown server will exit. Zero
# value means endless wait. (integer value)
#graceful_shutdown_timeout = 60

#
# From oslo.service.wsgi
#

# File name for the paste.deploy config for api service (string value)
#api_paste_config = api-paste.ini

# A python format string that is used as the template to generate log lines.
# The following values can be formatted into it: client_ip, date_time,
# request_line, status_code, body_length, wall_seconds. (string value)
#wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f

# Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not
# supported on OS X. (integer value)
#tcp_keepidle = 600

# Size of the pool of greenthreads used by wsgi (integer value)
#wsgi_default_pool_size = 100

# Maximum line size of message headers to be accepted. max_header_line may need
# to be increased when using large tokens (typically those generated when
# keystone is configured to use PKI tokens with big service catalogs). (integer
# value)
#max_header_line = 16384

# If False, closes the client socket connection explicitly. (boolean value)
#wsgi_keep_alive = true

# Timeout for client connections' socket operations. If an incoming connection
# is idle for this number of seconds it will be closed. A value of '0' means
# wait forever. (integer value)
#client_socket_timeout = 900


[BACKEND]

#
# From cinder
#

# Backend override of host value. (string value)
# Deprecated group/name - [BACKEND]/host
#backend_host = <None>


[BRCD_FABRIC_EXAMPLE]

#
# From cinder
#

# South bound connector for the fabric. (string value)
# Allowed values: SSH, HTTP, HTTPS
#fc_southbound_protocol = HTTP

# Management IP of fabric. (string value)
#fc_fabric_address =

# Fabric user ID. (string value)
#fc_fabric_user =

# Password for user. (string value)
#fc_fabric_password =

# Connecting port (port value)
# Minimum value: 0
# Maximum value: 65535
#fc_fabric_port = 22

# Local SSH certificate Path. (string value)
#fc_fabric_ssh_cert_path =

# Overridden zoning policy. (string value)
#zoning_policy = initiator-target

# Overridden zoning activation state. (boolean value)
#zone_activate = true

# Overridden zone name prefix. (string value)
#zone_name_prefix = openstack

# Virtual Fabric ID. (string value)
#fc_virtual_fabric_id = <None>

# DEPRECATED: Principal switch WWN of the fabric. This option is not used
# anymore. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#principal_switch_wwn = <None>


[CISCO_FABRIC_EXAMPLE]

#
# From cinder
#

# Management IP of fabric (string value)
#cisco_fc_fabric_address =

# Fabric user ID (string value)
#cisco_fc_fabric_user =

# Password for user (string value)
#cisco_fc_fabric_password =

# Connecting port (port value)
# Minimum value: 0
# Maximum value: 65535
#cisco_fc_fabric_port = 22

# overridden zoning policy (string value)
#cisco_zoning_policy = initiator-target

# overridden zoning activation state (boolean value)
#cisco_zone_activate = true

# overridden zone name prefix (string value)
#cisco_zone_name_prefix = <None>

# VSAN of the Fabric (string value)
#cisco_zoning_vsan = <None>


[COORDINATION]

#
# From cinder
#

# The backend URL to use for distributed coordination. (string value)
#backend_url = file://$state_path

# Number of seconds between heartbeats for distributed coordination. (floating
# point value)
#heartbeat = 1.0

# Initial number of seconds to wait after failed reconnection. (floating point
# value)
#initial_reconnect_backoff = 0.1

# Maximum number of seconds between sequential reconnection retries. (floating
# point value)
#max_reconnect_backoff = 60.0


[FC-ZONE-MANAGER]

#
# From cinder
#

# South bound connector for zoning operation (string value)
#brcd_sb_connector = HTTP

# FC Zone Driver responsible for zone management (string value)
#zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver

# Zoning policy configured by user; valid values include "initiator-target" or
# "initiator" (string value)
#zoning_policy = initiator-target

# Comma-separated list of Fibre Channel fabric names. This list of names is
# used to retrieve other SAN credentials for connecting to each SAN fabric
# (string value)
#fc_fabric_names = <None>

# FC SAN Lookup Service (string value)
#fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService

# Set this to True when you want to allow an unsupported zone manager driver to
# start. Drivers that haven't maintained a working CI system and testing are
# marked as unsupported until CI is working again. This also marks a driver as
# deprecated, and it may be removed in the next release. (boolean value)
#enable_unsupported_driver = false

# Southbound connector for zoning operation (string value)
#cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI


[KEY_MANAGER]

#
# From cinder
#

# Fixed key returned by key manager, specified in hex (string value)
# Deprecated group/name - [keymgr]/fixed_key
#fixed_key = <None>


[barbican]

#
# From castellan.config
#

# Use this endpoint to connect to Barbican, for example:
# "http://localhost:9311/" (string value)
#barbican_endpoint = <None>

# Version of the Barbican API, for example: "v1" (string value)
#barbican_api_version = <None>

# Use this endpoint to connect to Keystone (string value)
#auth_endpoint = http://localhost:5000/v3

# Number of seconds to wait before retrying poll for key creation completion
# (integer value)
#retry_delay = 1

# Number of times to retry poll for key creation completion (integer value)
#number_of_retries = 60
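
# Example (illustrative): point Castellan at a Barbican endpoint; the host
# names below are placeholders:
#
# barbican_endpoint = http://controller:9311/
# barbican_api_version = v1
# auth_endpoint = http://controller:5000/v3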


[cors]

#
# From oslo.middleware
#

# Indicate whether this resource may be shared with the domain received in the
# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-API-Version

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH,HEAD

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID,X-Trace-Info,X-Trace-HMAC,OpenStack-API-Version
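
# Example (illustrative): allow cross-origin requests from a Horizon dashboard;
# the origin below is a placeholder:
#
# allowed_origin = https://horizon.example.com
# allow_credentials = true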


[cors.subdomain]

#
# From oslo.middleware
#

# Indicate whether this resource may be shared with the domain received in the
# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-API-Version

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH,HEAD

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID,X-Trace-Info,X-Trace-HMAC,OpenStack-API-Version


[database]

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
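
#
# Example (illustrative values, not recommendations): a typical MySQL
# deployment might set only the connection string, a larger pool, and
# infinite startup retries:
#
# connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
# max_pool_size = 10
# max_retries = -1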


[key_manager]

#
# From castellan.config
#

# The full class name of the key manager API class (string value)
#api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager

# The type of authentication credential to create. Possible values are 'token',
# 'password', 'keystone_token', and 'keystone_password'. Required if no context
# is passed to the credential factory. (string value)
#auth_type = <None>

# Token for authentication. Required for 'token' and 'keystone_token' auth_type
# if no context is passed to the credential factory. (string value)
#token = <None>

# Username for authentication. Required for 'password' auth_type. Optional for
# the 'keystone_password' auth_type. (string value)
#username = <None>

# Password for authentication. Required for 'password' and 'keystone_password'
# auth_type. (string value)
#password = <None>

# User ID for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_id = <None>

# User's domain ID for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_domain_id = <None>

# User's domain name for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_domain_name = <None>

# Trust ID for trust scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#trust_id = <None>

# Domain ID for domain scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#domain_id = <None>

# Domain name for domain scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#domain_name = <None>

# Project ID for project scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_id = <None>

# Project name for project scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_name = <None>

# Project's domain ID for project. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_domain_id = <None>

# Project's domain name for project. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_domain_name = <None>

# Allow fetching a new token if the current one is going to expire. Optional
# for 'keystone_token' and 'keystone_password' auth_type. (boolean value)
#reauthenticate = true
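
#
# Example (illustrative values): to authenticate to the key manager with a
# Keystone username and password, the options above might be combined as
# follows (user and project names are placeholders):
#
# auth_type = keystone_password
# username = key-manager-user
# password = KEY_MANAGER_PASS
# project_name = service
# user_domain_name = Default
# project_domain_name = Default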


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. (string
# value)
#auth_uri = <None>

# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>

# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>

# How many times to retry the connection when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3

# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>

# Required if identity server requires client certificate (string value)
#certfile = <None>

# Required if identity server requires client certificate (string value)
#keyfile = <None>

# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name = <None>

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>

# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>

# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance. Only
# valid for PKI tokens. (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None

# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>

# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to:
# "disabled" to not check token binding. "permissive" (default) to validate
# binding information if the bind type is of a form known to the server and
# ignore it if not. "strict" like "permissive" but the token will be rejected
# if the bind type is unknown. "required" to require some form of token
# binding. Alternatively, the name of a binding method that must be present
# in tokens. (string value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
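
#
# Example (illustrative values; the controller host name and memcached port
# are placeholders): a minimal token-validation setup with memcached caching
# might look like:
#
# auth_uri = http://controller:5000
# memcached_servers = controller:11211
# token_cache_time = 300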


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000


[oslo_concurrency]

#
# From oslo.concurrency
#

# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false

# Directory to use for lock files.  For security, the specified directory
# should only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>
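
#
# Example (illustrative path): if external locks are used, a lock path must
# be set to a directory writable only by the service user, e.g.:
#
# lock_path = /var/lib/openstack/tmp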


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false

# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =

# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>

# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =

# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =

# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The deadline for an rpc reply message delivery. Only used when caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic

# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
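
#
# Example (illustrative paths): to verify the broker's certificate and
# present a client certificate in return, the SSL options above might be set
# as follows:
#
# ssl_ca_file = /etc/ssl/certs/broker-ca.pem
# ssl_cert_file = /etc/ssl/certs/client-cert.pem
# ssl_key_file = /etc/ssl/private/client-key.pem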


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop. (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
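
#
# Example (illustrative): to emit notifications in the 2.0 message format
# over the same transport used for RPC, only the driver needs to be set:
#
# driver = messagingv2
# topics = notifications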


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false

# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if the
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold the heartbeat is
# checked. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max = <None>

# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>

# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3

# Enable SSL (boolean value)
#ssl = <None>

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>

# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
# value)
#tcp_user_timeout = 0.25

# Set the delay for reconnecting to a host that has a connection error.
# (floating point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single

# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Maximum number of unacknowledged messages which RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending a
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending a
# notification message. (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Maximum number of unacknowledged messages which RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100

# Maximum number of unacknowledged messages which RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# reply. -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual number of retry attempts is
# not 0, the RPC request could be processed more than once. (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25
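
#
# Example (illustrative values): durable, mirrored queues with broker
# heartbeats enabled might be configured as follows:
#
# amqp_durable_queues = true
# rabbit_ha_queues = true
# heartbeat_timeout_threshold = 60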


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about existing target
# ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means not
# keeping a queue when the server side disconnects. False means to keep the
# queue and messages even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false


[oslo_middleware]

#
# From oslo.middleware
#

# The maximum body size for each request, in bytes. (integer value)
# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
# Deprecated group/name - [DEFAULT]/max_request_body_size
#max_request_body_size = 114688

# DEPRECATED: The HTTP Header that will be used to determine what the original
# request protocol scheme was, even if it was hidden by a SSL termination
# proxy. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#secure_proxy_ssl_header = X-Forwarded-Proto

# Whether the application is behind a proxy or not. This determines if the
# middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched.  Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d


[oslo_reports]

#
# From oslo.reports
#

# Path to a log directory where to create a file (string value)
#log_dir = <None>

# The path to a file to watch for changes to trigger the reports, instead of
# signals. Setting this option disables the signal trigger for the reports. If
# application is running as a WSGI application it is recommended to use this
# instead of signals. (string value)
#file_event_handler = <None>

# How many seconds to wait between polls when file_event_handler is set
# (integer value)
#file_event_handler_interval = 1


[oslo_versionedobjects]

#
# From oslo.versionedobjects
#

# Make exception message format errors fatal (boolean value)
#fatal_exception_format_errors = false


[ssl]

#
# From oslo.service.sslutils
#

# CA certificate file to use to verify connecting clients. (string value)
# Deprecated group/name - [DEFAULT]/ssl_ca_file
#ca_file = <None>

# Certificate file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_cert_file
#cert_file = <None>

# Private key file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_key_file
#key_file = <None>

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
#version = <None>

# Sets the list of available ciphers. value should be a string in the OpenSSL
# cipher list format. (string value)
#ciphers = <None>
api-paste.ini

Use the api-paste.ini file to configure the Block Storage API service.

#############
# OpenStack #
#############

[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2
/v3: openstack_volume_api_v3

[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv1
keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1
keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1

[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv2
keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv2
keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv2

[composite:openstack_volume_api_v3]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv3
keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv3
keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv3

[filter:request_id]
paste.filter_factory = oslo_middleware.request_id:RequestId.factory

[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory

[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = cinder

[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory

[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory

[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = oslo_middleware.sizelimit:RequestBodySizeLimiter.factory

[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory

[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory

[app:apiv3]
paste.app_factory = cinder.api.v3.router:APIRouter.factory

[pipeline:apiversions]
pipeline = cors http_proxy_to_wsgi faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
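Each pipeline_factory line above names a left-to-right chain of filters ending in an app; conceptually, every filter wraps the application to its right. The following plain-Python sketch shows that composition. It is illustrative only, not the real paste.deploy machinery, and the filter names are abridged from the keystone pipeline:

```python
# Sketch of how a paste pipeline composes middleware around an app.
# This mimics what cinder.api.middleware.auth:pipeline_factory does
# conceptually; it is NOT the actual paste.deploy implementation.

def make_filter(name):
    """Return a middleware factory that tags requests with its name."""
    def middleware(app):
        def handler(request):
            request.setdefault("trace", []).append(name)
            return app(request)
        return handler
    return middleware

def apiv2(request):
    """Terminal app: echoes the order in which middleware ran."""
    return {"status": 200, "trace": request.get("trace", [])}

def build_pipeline(filter_names, app):
    """Wrap `app` with filters so the leftmost name runs first."""
    for name in reversed(filter_names):
        app = make_filter(name)(app)
    return app

# Mirrors (abridged): keystone = cors request_id authtoken apiv2
pipeline = build_pipeline(["cors", "request_id", "authtoken"], apiv2)
response = pipeline({})
```

Requests pass through the leftmost filter first and reach the app last, which is why the ordering in the pipeline lines matters.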
policy.json

The policy.json file defines additional access controls that apply to the Block Storage service.

{
    "context_is_admin": "role:admin",
    "admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",

    "admin_api": "is_admin:True",

    "volume:create": "",
    "volume:delete": "rule:admin_or_owner",
    "volume:get": "rule:admin_or_owner",
    "volume:get_all": "rule:admin_or_owner",
    "volume:get_volume_metadata": "rule:admin_or_owner",
    "volume:create_volume_metadata": "rule:admin_or_owner",
    "volume:delete_volume_metadata": "rule:admin_or_owner",
    "volume:update_volume_metadata": "rule:admin_or_owner",
    "volume:get_volume_admin_metadata": "rule:admin_api",
    "volume:update_volume_admin_metadata": "rule:admin_api",
    "volume:get_snapshot": "rule:admin_or_owner",
    "volume:get_all_snapshots": "rule:admin_or_owner",
    "volume:create_snapshot": "rule:admin_or_owner",
    "volume:delete_snapshot": "rule:admin_or_owner",
    "volume:update_snapshot": "rule:admin_or_owner",
    "volume:get_snapshot_metadata": "rule:admin_or_owner",
    "volume:delete_snapshot_metadata": "rule:admin_or_owner",
    "volume:update_snapshot_metadata": "rule:admin_or_owner",
    "volume:extend": "rule:admin_or_owner",
    "volume:update_readonly_flag": "rule:admin_or_owner",
    "volume:retype": "rule:admin_or_owner",
    "volume:update": "rule:admin_or_owner",

    "volume_extension:types_manage": "rule:admin_api",
    "volume_extension:types_extra_specs": "rule:admin_api",
    "volume_extension:access_types_qos_specs_id": "rule:admin_api",
    "volume_extension:access_types_extra_specs": "rule:admin_api",
    "volume_extension:volume_type_access": "rule:admin_or_owner",
    "volume_extension:volume_type_access:addProjectAccess": "rule:admin_api",
    "volume_extension:volume_type_access:removeProjectAccess": "rule:admin_api",
    "volume_extension:volume_type_encryption": "rule:admin_api",
    "volume_extension:volume_encryption_metadata": "rule:admin_or_owner",
    "volume_extension:extended_snapshot_attributes": "rule:admin_or_owner",
    "volume_extension:volume_image_metadata": "rule:admin_or_owner",

    "volume_extension:quotas:show": "",
    "volume_extension:quotas:update": "rule:admin_api",
    "volume_extension:quotas:delete": "rule:admin_api",
    "volume_extension:quota_classes": "rule:admin_api",
    "volume_extension:quota_classes:validate_setup_for_nested_quota_use": "rule:admin_api",

    "volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:backup_admin_actions:reset_status": "rule:admin_api",
    "volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
    "volume_extension:volume_admin_actions:force_detach": "rule:admin_api",
    "volume_extension:snapshot_admin_actions:force_delete": "rule:admin_api",
    "volume_extension:backup_admin_actions:force_delete": "rule:admin_api",
    "volume_extension:volume_admin_actions:migrate_volume": "rule:admin_api",
    "volume_extension:volume_admin_actions:migrate_volume_completion": "rule:admin_api",

    "volume_extension:volume_actions:upload_public": "rule:admin_api",
    "volume_extension:volume_actions:upload_image": "rule:admin_or_owner",

    "volume_extension:volume_host_attribute": "rule:admin_api",
    "volume_extension:volume_tenant_attribute": "rule:admin_or_owner",
    "volume_extension:volume_mig_status_attribute": "rule:admin_api",
    "volume_extension:hosts": "rule:admin_api",
    "volume_extension:services:index": "rule:admin_api",
    "volume_extension:services:update" : "rule:admin_api",

    "volume_extension:volume_manage": "rule:admin_api",
    "volume_extension:volume_unmanage": "rule:admin_api",
    "volume_extension:list_manageable": "rule:admin_api",

    "volume_extension:capabilities": "rule:admin_api",

    "volume:create_transfer": "rule:admin_or_owner",
    "volume:accept_transfer": "",
    "volume:delete_transfer": "rule:admin_or_owner",
    "volume:get_transfer": "rule:admin_or_owner",
    "volume:get_all_transfers": "rule:admin_or_owner",

    "volume_extension:replication:promote": "rule:admin_api",
    "volume_extension:replication:reenable": "rule:admin_api",

    "volume:failover_host": "rule:admin_api",
    "volume:freeze_host": "rule:admin_api",
    "volume:thaw_host": "rule:admin_api",

    "backup:create" : "",
    "backup:delete": "rule:admin_or_owner",
    "backup:get": "rule:admin_or_owner",
    "backup:get_all": "rule:admin_or_owner",
    "backup:restore": "rule:admin_or_owner",
    "backup:backup-import": "rule:admin_api",
    "backup:backup-export": "rule:admin_api",
    "backup:update": "rule:admin_or_owner",

    "snapshot_extension:snapshot_actions:update_snapshot_status": "",
    "snapshot_extension:snapshot_manage": "rule:admin_api",
    "snapshot_extension:snapshot_unmanage": "rule:admin_api",
    "snapshot_extension:list_manageable": "rule:admin_api",

    "consistencygroup:create" : "group:nobody",
    "consistencygroup:delete": "group:nobody",
    "consistencygroup:update": "group:nobody",
    "consistencygroup:get": "group:nobody",
    "consistencygroup:get_all": "group:nobody",

    "consistencygroup:create_cgsnapshot" : "group:nobody",
    "consistencygroup:delete_cgsnapshot": "group:nobody",
    "consistencygroup:get_cgsnapshot": "group:nobody",
    "consistencygroup:get_all_cgsnapshots": "group:nobody",

    "group:group_types_manage": "rule:admin_api",
    "group:group_types_specs": "rule:admin_api",
    "group:access_group_types_specs": "rule:admin_api",
    "group:group_type_access": "rule:admin_or_owner",

    "group:create" : "",
    "group:delete": "rule:admin_or_owner",
    "group:update": "rule:admin_or_owner",
    "group:get": "rule:admin_or_owner",
    "group:get_all": "rule:admin_or_owner",

    "group:create_group_snapshot": "",
    "group:delete_group_snapshot": "rule:admin_or_owner",
    "group:update_group_snapshot": "rule:admin_or_owner",
    "group:get_group_snapshot": "rule:admin_or_owner",
    "group:get_all_group_snapshots": "rule:admin_or_owner",

    "scheduler_extension:scheduler_stats:get_pools" : "rule:admin_api",
    "message:delete": "rule:admin_or_owner",
    "message:get": "rule:admin_or_owner",
    "message:get_all": "rule:admin_or_owner",

    "clusters:get": "rule:admin_api",
    "clusters:get_all": "rule:admin_api",
    "clusters:update": "rule:admin_api"
}
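Rules compose by reference: "volume:delete" points at "admin_or_owner", which allows either an admin context or a request whose project matches the target. The toy evaluator below handles only the subset of rule syntax used in the file above (role:, is_admin:, attribute matches such as project_id:%(project_id)s, rule: references, and "or"). Real enforcement is done by the oslo.policy library; treat this purely as a sketch of the semantics:

```python
# Toy evaluator for the policy.json rule syntax shown above.
# Real enforcement is done by oslo.policy; this is only a sketch.

def check(expr, rules, creds, target):
    if expr == "":                      # empty rule: always allowed
        return True
    return any(_atom(part.strip(), rules, creds, target)
               for part in expr.split(" or "))

def _atom(atom, rules, creds, target):
    kind, _, value = atom.partition(":")
    if kind == "rule":                  # reference to another named rule
        return check(rules[value], rules, creds, target)
    if kind == "role":
        return value in creds.get("roles", [])
    if kind == "is_admin":
        return creds.get("is_admin", False) == (value == "True")
    # attribute match, e.g. project_id:%(project_id)s
    return str(creds.get(kind)) == value % target

rules = {
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "volume:delete": "rule:admin_or_owner",
}

owner = {"project_id": "p1", "roles": ["member"], "is_admin": False}
stranger = {"project_id": "p2", "roles": ["member"], "is_admin": False}
target = {"project_id": "p1"}
```

With these inputs, the owner may delete the volume, while the stranger is denied unless running with an admin context.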
rootwrap.conf

The rootwrap.conf file defines configuration values used by the rootwrap script when the Block Storage service must escalate its privileges to those of the root user.

# Configuration for cinder-rootwrap
# This file should be owned by (and only-writeable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
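Because rootwrap.conf is plain INI, its values can be read with Python's standard configparser. The sketch below inlines an abridged copy of the file for illustration; note that the comma-separated lists must be split by the consumer:

```python
# Read rootwrap.conf-style values with the standard library.
# The file content is inlined here only for illustration.
import configparser

SAMPLE = """
[DEFAULT]
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
use_syslog=False
syslog_log_facility=syslog
syslog_log_level=ERROR
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)
cfg = parser["DEFAULT"]

# Comma-separated option values become Python lists only if you split them.
filters_path = cfg["filters_path"].split(",")
use_syslog = cfg.getboolean("use_syslog")
```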

New, updated, and deprecated options in Newton for Block Storage

New options
Option = default value (Type) Help string
[DEFAULT] additional_retry_list = (StrOpt) FSS additional retry list, separate by ;
[DEFAULT] backup_swift_project = None (StrOpt) Swift project/account name. Required when connecting to an auth 3.0 system
[DEFAULT] backup_swift_project_domain = None (StrOpt) Swift project domain name. Required when connecting to an auth 3.0 system
[DEFAULT] backup_swift_user_domain = None (StrOpt) Swift user domain name. Required when connecting to an auth 3.0 system
[DEFAULT] backup_use_temp_snapshot = False (BoolOpt) If this is set to True, the backup_use_temp_snapshot path will be used during the backup. Otherwise, it will use backup_use_temp_volume path.
[DEFAULT] chap = disabled (StrOpt) CHAP authentication mode, effective only for iscsi (disabled|enabled)
[DEFAULT] clone_volume_timeout = 680 (IntOpt) Create clone volume timeout.
[DEFAULT] cluster = None (StrOpt) Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. Active-Active is not yet supported.
[DEFAULT] connection_type = iscsi (StrOpt) Connection type to the IBM Storage Array
[DEFAULT] coprhd_emulate_snapshot = False (BoolOpt) True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
[DEFAULT] coprhd_hostname = None (StrOpt) Hostname for the CoprHD Instance
[DEFAULT] coprhd_password = None (StrOpt) Password for accessing the CoprHD Instance
[DEFAULT] coprhd_port = 4443 (PortOpt) Port for the CoprHD Instance
[DEFAULT] coprhd_project = None (StrOpt) Project to utilize within the CoprHD Instance
[DEFAULT] coprhd_scaleio_rest_gateway_host = None (StrOpt) Rest Gateway IP or FQDN for Scaleio
[DEFAULT] coprhd_scaleio_rest_gateway_port = 4984 (PortOpt) Rest Gateway Port for Scaleio
[DEFAULT] coprhd_scaleio_rest_server_password = None (StrOpt) Rest Gateway Password
[DEFAULT] coprhd_scaleio_rest_server_username = None (StrOpt) Username for Rest Gateway
[DEFAULT] coprhd_tenant = None (StrOpt) Tenant to utilize within the CoprHD Instance
[DEFAULT] coprhd_username = None (StrOpt) Username for accessing the CoprHD Instance
[DEFAULT] coprhd_varray = None (StrOpt) Virtual Array to utilize within the CoprHD Instance
[DEFAULT] datera_503_interval = 5 (IntOpt) Interval between 503 retries
[DEFAULT] datera_503_timeout = 120 (IntOpt) Timeout for HTTP 503 retry messages
[DEFAULT] datera_acl_allow_all = False (BoolOpt) True to set acl ‘allow_all’ on volumes created
[DEFAULT] datera_debug = False (BoolOpt) True to set function arg and return logging
[DEFAULT] datera_debug_replica_count_override = False (BoolOpt) ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1
[DEFAULT] default_group_type = None (StrOpt) Default group type to use
[DEFAULT] dell_server_os = Red Hat Linux 6.x (StrOpt) Server OS type to use when creating a new server on the Storage Center.
[DEFAULT] drbdmanage_disk_options = {"c-min-rate": "4M"} (StrOpt) Disk options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
[DEFAULT] drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"} (StrOpt) Net options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
[DEFAULT] drbdmanage_resource_options = {"auto-promote-timeout": "300"} (StrOpt) Resource options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details.
[DEFAULT] dsware_isthin = False (BoolOpt) The flag of thin storage allocation.
[DEFAULT] dsware_manager = (StrOpt) Fusionstorage manager ip addr for cinder-volume.
[DEFAULT] enable_unsupported_driver = False (BoolOpt) Set this to True when you want to allow an unsupported driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the next release.
[DEFAULT] fss_debug = False (BoolOpt) Enable HTTP debugging to FSS
[DEFAULT] fss_pool = (IntOpt) FSS pool id in which FalconStor volumes are stored.
[DEFAULT] fusionstorageagent = (StrOpt) Fusionstorage agent ip addr range.
[DEFAULT] glance_catalog_info = image:glance:publicURL (StrOpt) Info to match when looking for glance in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided.
[DEFAULT] group_api_class = cinder.group.api.API (StrOpt) The full class name of the group API class
[DEFAULT] hnas_chap_enabled = True (BoolOpt) Whether the chap authentication is enabled in the iSCSI target or not.
[DEFAULT] hnas_cluster_admin_ip0 = None (StrOpt) The IP of the HNAS cluster admin. Required only for HNAS multi-cluster setups.
[DEFAULT] hnas_mgmt_ip0 = None (IPOpt) Management IP address of HNAS. This can be any IP in the admin address on HNAS or the SMU IP.
[DEFAULT] hnas_password = None (StrOpt) HNAS password.
[DEFAULT] hnas_ssc_cmd = ssc (StrOpt) Command to communicate to HNAS.
[DEFAULT] hnas_ssh_port = 22 (PortOpt) Port to be used for SSH authentication.
[DEFAULT] hnas_ssh_private_key = None (StrOpt) Path to the SSH private key used to authenticate in HNAS SMU.
[DEFAULT] hnas_svc0_hdp = None (StrOpt) Service 0 HDP
[DEFAULT] hnas_svc0_iscsi_ip = None (IPOpt) Service 0 iSCSI IP
[DEFAULT] hnas_svc0_volume_type = None (StrOpt) Service 0 volume type
[DEFAULT] hnas_svc1_hdp = None (StrOpt) Service 1 HDP
[DEFAULT] hnas_svc1_iscsi_ip = None (IPOpt) Service 1 iSCSI IP
[DEFAULT] hnas_svc1_volume_type = None (StrOpt) Service 1 volume type
[DEFAULT] hnas_svc2_hdp = None (StrOpt) Service 2 HDP
[DEFAULT] hnas_svc2_iscsi_ip = None (IPOpt) Service 2 iSCSI IP
[DEFAULT] hnas_svc2_volume_type = None (StrOpt) Service 2 volume type
[DEFAULT] hnas_svc3_hdp = None (StrOpt) Service 3 HDP
[DEFAULT] hnas_svc3_iscsi_ip = None (IPOpt) Service 3 iSCSI IP
[DEFAULT] hnas_svc3_volume_type = None (StrOpt) Service 3 volume type
[DEFAULT] hnas_username = None (StrOpt) HNAS username.
[DEFAULT] kaminario_nodedup_substring = K2-nodedup (StrOpt) If the volume-type name contains this substring, a nodedup volume will be created; otherwise a dedup volume will be created.
[DEFAULT] lvm_suppress_fd_warnings = False (BoolOpt) Suppress leaked file descriptor warnings in LVM commands.
[DEFAULT] message_ttl = 2592000 (IntOpt) message minimum life in seconds.
[DEFAULT] metro_domain_name = None (StrOpt) The remote metro device domain name.
[DEFAULT] metro_san_address = None (StrOpt) The remote metro device request url.
[DEFAULT] metro_san_password = None (StrOpt) The remote metro device san password.
[DEFAULT] metro_san_user = None (StrOpt) The remote metro device san user.
[DEFAULT] metro_storage_pools = None (StrOpt) The remote metro device pool names.
[DEFAULT] nas_host = (StrOpt) IP address or Hostname of NAS system.
[DEFAULT] netapp_replication_aggregate_map = None (MultiOpt) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
[DEFAULT] netapp_snapmirror_quiesce_timeout = 3600 (IntOpt) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover.
[DEFAULT] nexenta_nbd_symlinks_dir = /dev/disk/by-path (StrOpt) NexentaEdge logical path of directory to store symbolic links to NBDs
[DEFAULT] osapi_volume_use_ssl = False (BoolOpt) Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified.
[DEFAULT] pool_id_filter = (ListOpt) Pool id permit to use.
[DEFAULT] pool_type = default (StrOpt) Pool type, like sata-2copy.
[DEFAULT] proxy = storage.proxy.IBMStorageProxy (StrOpt) Proxy driver that connects to the IBM Storage Array
[DEFAULT] quota_groups = 10 (IntOpt) Number of groups allowed per project
[DEFAULT] scaleio_server_certificate_path = None (StrOpt) Server certificate path
[DEFAULT] scaleio_verify_server_certificate = False (BoolOpt) verify server certificate
[DEFAULT] scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler (StrOpt) Which handler to use for selecting the host/pool after weighing
[DEFAULT] secondary_san_ip = (StrOpt) IP address of secondary DSM controller
[DEFAULT] secondary_san_login = Admin (StrOpt) Secondary DSM user name
[DEFAULT] secondary_san_password = (StrOpt) Secondary DSM user password
[DEFAULT] secondary_sc_api_port = 3033 (PortOpt) Secondary Dell API port
[DEFAULT] sio_max_over_subscription_ratio = 10.0 (FloatOpt) max_over_subscription_ratio setting for the ScaleIO driver. This replaces the general max_over_subscription_ratio which has no effect in this driver. Maximum value allowed for ScaleIO is 10.0.
[DEFAULT] storage_protocol = iscsi (StrOpt) Protocol for transferring data between host and storage back-end.
[DEFAULT] synology_admin_port = 5000 (PortOpt) Management port for Synology storage.
[DEFAULT] synology_device_id = None (StrOpt) Device id for skip one time password check for logging in Synology storage if OTP is enabled.
[DEFAULT] synology_one_time_pass = None (StrOpt) One time password of administrator for logging in Synology storage if OTP is enabled.
[DEFAULT] synology_password = (StrOpt) Password of administrator for logging in Synology storage.
[DEFAULT] synology_pool_name = (StrOpt) Volume on Synology storage to be used for creating lun.
[DEFAULT] synology_ssl_verify = True (BoolOpt) Do certificate validation or not if $driver_use_ssl is True
[DEFAULT] synology_username = admin (StrOpt) Administrator of Synology storage.
[DEFAULT] violin_dedup_capable_pools = (ListOpt) Storage pools capable of dedup and other luns. (Comma separated list)
[DEFAULT] violin_dedup_only_pools = (ListOpt) Storage pools to be used to setup dedup luns only. (Comma separated list)
[DEFAULT] violin_iscsi_target_ips = (ListOpt) Target iSCSI addresses to use. (Comma separated list)
[DEFAULT] violin_pool_allocation_method = random (StrOpt) Method of choosing a storage pool for a lun.
[DEFAULT] vzstorage_default_volume_format = raw (StrOpt) Default format that will be used when creating volumes if no volume format is specified.
[DEFAULT] zadara_default_snap_policy = False (BoolOpt) VPSA - Attach snapshot policy for volumes
[DEFAULT] zadara_password = None (StrOpt) VPSA - Password
[DEFAULT] zadara_use_iser = True (BoolOpt) VPSA - Use ISER instead of iSCSI
[DEFAULT] zadara_user = None (StrOpt) VPSA - Username
[DEFAULT] zadara_vol_encrypt = False (BoolOpt) VPSA - Default encryption policy for volumes
[DEFAULT] zadara_vol_name_template = OS_%s (StrOpt) VPSA - Default template for VPSA volume names
[DEFAULT] zadara_vpsa_host = None (StrOpt) VPSA - Management Host name or IP address
[DEFAULT] zadara_vpsa_poolname = None (StrOpt) VPSA - Storage Pool assigned for volumes
[DEFAULT] zadara_vpsa_port = None (PortOpt) VPSA - Port number
[DEFAULT] zadara_vpsa_use_ssl = False (BoolOpt) VPSA - Use SSL connection
[DEFAULT] zteAheadReadSize = 8 (IntOpt) Cache readahead size.
[DEFAULT] zteCachePolicy = 1 (IntOpt) Cache policy. 0, Write Back; 1, Write Through.
[DEFAULT] zteChunkSize = 4 (IntOpt) Virtual block size of pool. Unit: KB. Valid values: 4, 8, 16, 32, 64, 128, 256, 512.
[DEFAULT] zteControllerIP0 = None (IPOpt) Main controller IP.
[DEFAULT] zteControllerIP1 = None (IPOpt) Slave controller IP.
[DEFAULT] zteLocalIP = None (IPOpt) Local IP.
[DEFAULT] ztePoolVoAllocatedPolicy = 0 (IntOpt) Pool volume allocated policy. 0, Auto; 1, High Performance Tier First; 2, Performance Tier First; 3, Capacity Tier First.
[DEFAULT] ztePoolVolAlarmStopAllocatedFlag = 0 (IntOpt) Pool volume alarm stop allocated flag.
[DEFAULT] ztePoolVolAlarmThreshold = 0 (IntOpt) Pool volume alarm threshold. [0, 100]
[DEFAULT] ztePoolVolInitAllocatedCapacity = 0 (IntOpt) Pool volume init allocated capacity. Unit: KB.
[DEFAULT] ztePoolVolIsThin = False (IntOpt) Whether it is a thin volume.
[DEFAULT] ztePoolVolMovePolicy = 0 (IntOpt) Pool volume move policy. 0, Auto; 1, Highest Available; 2, Lowest Available; 3, No Relocation.
[DEFAULT] zteSSDCacheSwitch = 1 (IntOpt) SSD cache switch. 0, OFF; 1, ON.
[DEFAULT] zteStoragePool = (ListOpt) Pool name list.
[DEFAULT] zteUserName = None (StrOpt) User name.
[DEFAULT] zteUserPassword = None (StrOpt) User password.
[barbican] auth_endpoint = http://localhost:5000/v3 (StrOpt) Use this endpoint to connect to Keystone
[barbican] barbican_api_version = None (StrOpt) Version of the Barbican API, for example: “v1”
[barbican] barbican_endpoint = None (StrOpt) Use this endpoint to connect to Barbican, for example: “http://localhost:9311/”
[barbican] number_of_retries = 60 (IntOpt) Number of times to retry poll for key creation completion
[barbican] retry_delay = 1 (IntOpt) Number of seconds to wait before retrying poll for key creation completion
[fc-zone-manager] enable_unsupported_driver = False (BoolOpt) Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the next release.
[key_manager] api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager (StrOpt) The full class name of the key manager API class
[key_manager] fixed_key = None (StrOpt) Fixed key returned by key manager, specified in hex
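Several options in the table above, such as netapp_replication_aggregate_map, take the "standard dict config form" of comma-separated key:value pairs. The helper below sketches how such a value decomposes; it is an illustration of the format only (oslo.config performs the real parsing), and the aggregate names are made up:

```python
# Sketch: decomposing a dict-style option value such as
# netapp_replication_aggregate_map. Each entry is a comma-separated
# list of key:value pairs. oslo.config does the real parsing.
def parse_dict_opt(raw):
    pairs = (item.split(":", 1) for item in raw.split(","))
    return {key.strip(): value.strip() for key, value in pairs}

# Hypothetical value following the documented form.
raw = ("backend_id:target_dev,"
       "src_aggr_name1:dest_aggr_name1,"
       "src_aggr_name2:dest_aggr_name2")
mapping = parse_dict_opt(raw)
```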
New default values
Option Previous default value New default value
[DEFAULT] backup_service_inithost_offload False True
[DEFAULT] datera_num_replicas 1 3
[DEFAULT] default_timeout 525600 31536000
[DEFAULT] glance_api_servers $glance_host:$glance_port None
[DEFAULT] io_port_list * None
[DEFAULT] iscsi_initiators   None
[DEFAULT] naviseccli_path   None
[DEFAULT] nexenta_chunksize 16384 32768
[DEFAULT] query_volume_filters name, status, metadata, availability_zone, bootable name, status, metadata, availability_zone, bootable, group_id
[DEFAULT] vmware_task_poll_interval 0.5 2.0
Deprecated options
Deprecated option New Option
[DEFAULT] enable_v1_api None
[DEFAULT] enable_v2_api None
[DEFAULT] eqlx_chap_login [DEFAULT] chap_username
[DEFAULT] eqlx_chap_password [DEFAULT] chap_password
[DEFAULT] eqlx_use_chap [DEFAULT] use_chap_auth
[DEFAULT] host [DEFAULT] backend_host
[DEFAULT] nas_ip [DEFAULT] nas_host
[DEFAULT] osapi_max_request_body_size [oslo_middleware] max_request_body_size
[DEFAULT] use_syslog None
[hyperv] force_volumeutils_v1 None

Note

The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.

The Block Storage service works with many different storage drivers that you can configure by using these instructions.

Clustering service

Clustering API configuration

Configuration options

The Clustering API can be configured by changing the following options:

Description of API configuration options
Configuration option = Default value Description
[authentication]  
auth_url = (String) Complete public identity V3 API endpoint.
service_password = (String) Password specified for the Senlin service user.
service_project_domain = Default (String) Name of the domain for the service project.
service_project_name = service (String) Name of the service project.
service_user_domain = Default (String) Name of the domain for the service user.
service_username = senlin (String) Senlin service user name
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_policy]  
policy_default_rule = default (String) Default rule. Enforced when a requested rule is not found.
policy_dirs = ['policy.d'] (Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored.
policy_file = policy.json (String) The JSON file that defines policies.
[revision]  
senlin_api_revision = 1.0 (String) Senlin API revision.
senlin_engine_revision = 1.0 (String) Senlin engine revision.
[senlin_api]  
api_paste_config = api-paste.ini (String) The API paste config file to use.
backlog = 4096 (Integer) Number of backlog requests to configure the socket with.
bind_host = 0.0.0.0 (IP) Address to bind the server. Useful when selecting a particular network interface.
bind_port = 8778 (Port number) The port on which the server will listen.
cert_file = None (String) Location of the SSL certificate file to use for SSL mode.
client_socket_timeout = 900 (Integer) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ indicates waiting forever.
key_file = None (String) Location of the SSL key file to use for enabling SSL mode.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
max_json_body_size = 1048576 (Integer) Maximum raw byte size of JSON request body. Should be larger than max_template_size.
tcp_keepidle = 600 (Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes.
workers = 0 (Integer) Number of workers for Senlin service.
wsgi_keep_alive = True (Boolean) If false, closes the client socket explicitly.
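
For example, a minimal senlin.conf fragment combining options from this table might look like the following (the endpoint, password, and host values are placeholders for illustration, not defaults):

[authentication]
# Keystone v3 endpoint and Senlin service user credentials (example values)
auth_url = http://controller:5000/v3
service_username = senlin
service_password = SENLIN_PASS
service_project_name = service
service_user_domain = Default

[senlin_api]
# Listen on all interfaces on the default port
bind_host = 0.0.0.0
bind_port = 8778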

Additional configuration options for Clustering service

These options can also be set in the senlin.conf file.

Description of Common configuration options
Configuration option = Default value Description
[DEFAULT]  
batch_interval = 3 (Integer) Seconds to pause between scheduling two consecutive batches of node actions.
cloud_backend = openstack (String) Default cloud backend to use.
default_action_timeout = 3600 (Integer) Timeout in seconds for actions.
default_region_name = None (String) Default region name used to get services endpoints.
engine_life_check_timeout = 2 (Integer) RPC timeout for the engine liveness check that is used for cluster locking.
environment_dir = /etc/senlin/environments (String) The directory to search for environment files.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
fatal_deprecations = False (Boolean) Enables or disables fatal status of deprecations.
host = localhost (String) Name of the engine node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address.
lock_retry_interval = 10 (Integer) Number of seconds between lock retries.
lock_retry_times = 3 (Integer) Number of times trying to grab a lock.
max_actions_per_batch = 0 (Integer) Maximum number of node actions that each engine worker can schedule consecutively per batch. 0 means no limit.
max_clusters_per_project = 100 (Integer) Maximum number of clusters any one project may have active at one time.
max_nodes_per_cluster = 1000 (Integer) Maximum nodes allowed per top-level cluster.
max_response_size = 524288 (Integer) Maximum raw byte size of data from web response.
name_unique = False (Boolean) Flag to indicate whether to enforce unique names for Senlin objects belonging to the same project.
num_engine_workers = 1 (Integer) Number of senlin-engine processes to fork and run.
periodic_fuzzy_delay = 10 (Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_interval = 60 (Integer) Seconds between running periodic tasks.
periodic_interval_max = 120 (Integer) Seconds between periodic tasks to be called
publish_errors = False (Boolean) Enables or disables publication of error events.
use_router_proxy = True (Boolean) Use ROUTER remote proxy.
[health_manager]  
nova_control_exchange = nova (String) Exchange name for nova notifications
[oslo_versionedobjects]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
[webhook]  
host = None (String) Address for invoking webhooks. It is useful for cases where proxies are used for triggering webhooks. Defaults to the hostname of the API node.
port = 8778 (Port number) The port on which a webhook will be invoked. Useful when service is running behind a proxy.
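As a sketch, several of the options above could be set in senlin.conf like this (the limits and the proxy host are example values, not recommendations):

[DEFAULT]
# Raise the per-project cluster limit and retry locks more aggressively
max_clusters_per_project = 200
lock_retry_times = 5
lock_retry_interval = 10

[webhook]
# Invoke webhooks through a proxy reachable at this example address
host = proxy.example.com
port = 8778
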
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of Message service configuration options
Configuration option = Default value Description
[zaqar]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
timeout = None (Integer) Timeout value for http requests
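A hypothetical [zaqar] fragment using these options might look as follows (the authentication type and file path are illustrative assumptions, not defaults):

[zaqar]
# Load a password-based authentication plugin; the CA path is an example value
auth_type = password
cafile = /etc/ssl/certs/ca-bundle.pem
insecure = False
timeout = 60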

New, updated, and deprecated options in Newton for Clustering service

New options
Option = default value (Type) Help string
[DEFAULT] batch_interval = 3 (IntOpt) Seconds to pause between scheduling two consecutive batches of node actions.
[DEFAULT] periodic_fuzzy_delay = 10 (IntOpt) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
[health_manager] nova_control_exchange = nova (StrOpt) Exchange name for nova notifications
[oslo_versionedobjects] fatal_exception_format_errors = False (BoolOpt) Make exception message format errors fatal
[senlin_api] api_paste_config = api-paste.ini (StrOpt) The API paste config file to use.
[senlin_api] client_socket_timeout = 900 (IntOpt) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ indicates waiting forever.
[senlin_api] max_json_body_size = 1048576 (IntOpt) Maximum raw byte size of JSON request body. Should be larger than max_template_size.
[senlin_api] wsgi_keep_alive = True (BoolOpt) If false, closes the client socket explicitly.
[zaqar] auth_section = None (Opt) Config Section from which to load plugin specific options
[zaqar] auth_type = None (Opt) Authentication type to load
[zaqar] cafile = None (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections.
[zaqar] certfile = None (StrOpt) PEM encoded client certificate cert file
[zaqar] insecure = False (BoolOpt) Verify HTTPS connections.
[zaqar] keyfile = None (StrOpt) PEM encoded client certificate key file
[zaqar] timeout = None (IntOpt) Timeout value for http requests
New default values
Option Previous default value New default value
[DEFAULT] max_actions_per_batch 10 0
[DEFAULT] periodic_interval_max 60 120
[webhook] host localhost None
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None

The Clustering service implements clustering services and libraries for managing groups of homogeneous objects exposed by other OpenStack services. The configuration file for this service is /etc/senlin/senlin.conf.

Note

The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.

Compute service

The Compute service is a cloud computing fabric controller, which is the main part of an Infrastructure as a Service (IaaS) system. You can use OpenStack Compute to host and manage cloud computing systems. This section describes the Compute service configuration options.

To configure your Compute installation, you must define configuration options in these files:

  • nova.conf contains most of the Compute configuration options and resides in the /etc/nova directory.
  • api-paste.ini defines Compute limits and resides in the /etc/nova directory.
  • Related Image service and Identity service management configuration files.

For a quick overview:

nova.conf - configuration options

For a complete list of all available configuration options for each OpenStack Compute service, run bin/nova-<servicename> --help.

Description of API database configuration options
Configuration option = Default value Description
[api_database]  
connection = None (String) No help text available for this option.
connection_debug = 0 (Integer) No help text available for this option.
connection_trace = False (Boolean) No help text available for this option.
idle_timeout = 3600 (Integer) No help text available for this option.
max_overflow = None (Integer) No help text available for this option.
max_pool_size = None (Integer) No help text available for this option.
max_retries = 10 (Integer) No help text available for this option.
mysql_sql_mode = TRADITIONAL (String) No help text available for this option.
pool_timeout = None (Integer) No help text available for this option.
retry_interval = 10 (Integer) No help text available for this option.
slave_connection = None (String) No help text available for this option.
sqlite_synchronous = True (Boolean) No help text available for this option.
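For illustration, the API database is typically pointed at its own database with a connection string such as the one below (database name and password are placeholders):

[api_database]
# SQLAlchemy connection string for the Nova API database
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
idle_timeout = 3600
max_retries = 10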
Description of authentication configuration options
Configuration option = Default value Description
[DEFAULT]  
auth_strategy = keystone (String) This determines the strategy to use for authentication: keystone or noauth2. ‘noauth2’ is designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username.
Description of availability zones configuration options
Configuration option = Default value Description
[DEFAULT]  
default_availability_zone = nova

(String) Default compute node availability_zone.

This option determines the availability zone to be used when it is not specified in the VM creation request. If this option is not set, the default availability zone ‘nova’ is used.

Possible values:

  • Any string representing an availability zone name
  • ‘nova’ is the default value
default_schedule_zone = None

(String) Availability zone to use when user doesn’t specify one.

This option is used by the scheduler to determine which availability zone to place a new VM instance into if the user did not specify one at the time of VM boot request.

Possible values:

  • Any string representing an availability zone name
  • Default value is None.
internal_service_availability_zone = internal

(String) This option specifies the name of the availability zone for the internal services. Services like nova-scheduler, nova-network, nova-conductor are internal services. These services will appear in their own internal availability_zone.

Possible values:

  • Any string representing an availability zone name
  • ‘internal’ is the default value
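Putting these options together, a nova.conf fragment might read as follows (az1 is an example zone name, not a default):

[DEFAULT]
# Compute nodes without an explicit zone fall into 'nova';
# boot requests without a zone go to the example zone 'az1'
default_availability_zone = nova
default_schedule_zone = az1
internal_service_availability_zone = internal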
Description of Barbican configuration options
Configuration option = Default value Description
[barbican]  
auth_endpoint = http://localhost:5000/v3 (String) Use this endpoint to connect to Keystone
barbican_api_version = None (String) Version of the Barbican API, for example: “v1”
barbican_endpoint = None (String) Use this endpoint to connect to Barbican, for example: “http://localhost:9311/
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
catalog_info = key-manager:barbican:public (String) DEPRECATED: Info to match when looking for barbican in the service catalog. Format is colon-separated values of the form <service_type>:<service_name>:<endpoint_type>. This option has been moved to the Castellan library.
certfile = None (String) PEM encoded client certificate cert file
endpoint_template = None (String) DEPRECATED: Override service catalog lookup with a template for the barbican endpoint, e.g. http://localhost:9311/v1/%(project_id)s. This option has been moved to the Castellan library.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
number_of_retries = 60 (Integer) Number of times to retry poll for key creation completion
os_region_name = None (String) DEPRECATED: Region name of this node. This option has been moved to the Castellan library.
retry_delay = 1 (Integer) Number of seconds to wait before retrying poll for key creation completion
timeout = None (Integer) Timeout value for http requests
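As an illustrative sketch, a [barbican] section might be configured as follows (host names are placeholders for your deployment):

[barbican]
# Keystone and Barbican endpoints (example values)
auth_endpoint = http://controller:5000/v3
barbican_endpoint = http://controller:9311/
barbican_api_version = v1
number_of_retries = 60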
Description of cell configuration options
Configuration option = Default value Description
[cells]  
call_timeout = 60

(Integer) Call timeout

Cell messaging module waits for response(s) to be put into the eventlet queue. This option defines the seconds waited for response from a call to a cell.

Possible values:

  • Time in seconds.
capabilities = hypervisor=xenserver;kvm, os=linux;windows

(List) Cell capabilities

List of arbitrary key=value pairs defining capabilities of the current cell to be sent to the parent cells. These capabilities are intended to be used in cells scheduler filters/weighers.

Possible values:

  • A list of key=value pairs, for example: hypervisor=xenserver;kvm,os=linux;windows
cell_type = compute

(String) Type of cell

When the cells feature is enabled, the hosts in the OpenStack Compute cloud are partitioned into groups. Cells are configured as a tree. The top-level cell’s cell_type must be set to api. All other cells are defined as compute cells by default.

Related options:

  • compute_api_class: This option must be set to cells api driver for the top-level cell (nova.compute.cells_api.ComputeCellsAPI)
  • quota_driver: Disable quota checking for the child cells. (nova.quota.NoopQuotaDriver)
cells_config = None

(String) Optional cells configuration

Configuration file from which to read cells configuration. If given, overrides reading cells from the database.

Cells store all inter-cell communication data, including user names and passwords, in the database. Because the cells data is not updated very frequently, use this option to specify a JSON file to store cells data. With this configuration, the database is no longer consulted when reloading the cells data. The file must have columns present in the Cell model (excluding common database fields and the id column). You must specify the queue connection information through a transport_url field, instead of username, password, and so on.

The transport_url has the following form: rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST

Possible values:

The scheme can be either qpid or rabbit; the following sample shows this optional configuration:

{
    "parent": {
        "name": "parent",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": true
    },
    "cell1": {
        "name": "cell1",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit1.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    },
    "cell2": {
        "name": "cell2",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit2.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    }
}
db_check_interval = 60

(Integer) DB check interval

The cell state manager updates the cell status for all cells from the DB only after this interval has passed. Otherwise, cached statuses are used. If this value is 0 or negative, all cell statuses are updated from the DB whenever a state is needed.

Possible values:

  • Interval time, in seconds.
driver = nova.cells.rpc_driver.CellsRPCDriver

(String) DEPRECATED: Cells communication driver

Driver for cell<->cell communication via RPC. This is used to setup the RPC consumers as well as to send a message to another cell. ‘nova.cells.rpc_driver.CellsRPCDriver’ starts up 2 separate servers for handling inter-cell communication via RPC. The only available driver is the RPC driver.

enable = False

(Boolean) Enable cell functionality

When this functionality is enabled, it lets you scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker.

Related options:

  • name: A unique cell name must be given when this functionality is enabled.
  • cell_type: Cell type should be defined for all cells.
instance_update_num_instances = 1

(Integer) Instance update num instances

On every run of the periodic task, nova cells manager will attempt to sync instance_updated_at_threshold number of instances. When the manager gets the list of instances, it shuffles them so that multiple nova-cells services do not attempt to sync the same instances in lockstep.

Possible values:

  • Positive integer number

Related options:

  • This value is used with the instance_updated_at_threshold value in a periodic task run.
instance_update_sync_database_limit = 100

(Integer) Instance update sync database limit

Number of instances to pull from the database at one time for a sync. If there are more instances to update the results will be paged through.

Possible values:

  • Number of instances.
instance_updated_at_threshold = 3600

(Integer) Instance updated at threshold

Number of seconds after an instance was updated or deleted to continue to update cells. This option lets the cells manager attempt to sync only instances that have been updated recently. For example, a threshold of 3600 means that only instances modified in the last hour are updated.

Possible values:

  • Threshold in seconds

Related options:

  • This value is used with the instance_update_num_instances value in a periodic task run.
max_hop_count = 10

(Integer) Maximum hop count

When processing a targeted message, if the local cell is not the target, a route is defined between neighbouring cells and the message is processed along the whole routing path. This option defines the maximum hop count allowed to reach the target.

Possible values:

  • Positive integer value
mute_child_interval = 300

(Integer) Mute child interval

Number of seconds after which a child cell that has not sent capability and capacity updates is treated as a mute cell. A mute child cell is weighted so that it is strongly recommended to be skipped.

Possible values:

  • Time in seconds.
mute_weight_multiplier = -10000.0

(Floating point) Mute weight multiplier

Multiplier used to weigh mute children. Mute children cells are recommended to be skipped so their weight is multiplied by this negative value.

Possible values:

  • Negative numeric value
name = nova

(String) Name of the current cell

This value must be unique for each cell. The name of a cell is used as its ID; leaving this option unset or setting the same name for two or more cells may cause unexpected behaviour.

Related options:

  • enabled: This option is meaningful only when cells service is enabled
offset_weight_multiplier = 1.0

(Floating point) Offset weight multiplier

Multiplier used to weigh offset weigher. Cells with higher weight_offsets in the DB will be preferred. The weight_offset is a property of a cell stored in the database. It can be used by a deployer to have scheduling decisions favor or disfavor cells based on the setting.

Possible values:

  • Numeric multiplier
reserve_percent = 10.0

(Floating point) Reserve percentage

Percentage of cell capacity to hold in reserve, so the minimum amount of free resource is considered to be: min_free = total * (reserve_percent / 100.0). This option affects both memory and disk utilization. The primary purpose of this reserve is to ensure some space is available for users who want to resize their instance to be larger. Note that currently once the capacity expands into this reserve space this option is ignored.

rpc_driver_queue_base = cells.intercell

(String) RPC driver queue base

When sending a message to another cell by JSON-ifying the message and making an RPC cast to ‘process_message’, a base queue is used. This option defines the base queue name to be used when communicating between cells. Various topics by message type will be appended to this.

Possible values:

  • The base queue name to be used when communicating between cells.
topic = cells

(String) Topic

This is the message queue topic that cells nodes listen on. It is used when the cells service is started up to configure the queue, and whenever an RPC call to the scheduler is made.

Possible values:

  • cells: This is the recommended and the default value.
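Tying the options above together, a top-level API cell could be sketched like this (the cell name and file path are example values):

[cells]
# Enable cells and declare this deployment the top-level API cell
enable = True
name = api-cell
cell_type = api
# Optionally read cell definitions from a JSON file instead of the database
cells_config = /etc/nova/cells.json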
Description of cloudpipe configuration options
Configuration option = Default value Description
[cloudpipe]  
boot_script_template = $pybasedir/nova/cloudpipe/bootscript.template

(String) Template for cloudpipe instance boot script.

Possible values:

  • Any valid path to a cloudpipe instance boot script template

Related options:

The following options are required to configure cloudpipe-managed OpenVPN server.

  • dmz_net
  • dmz_mask
  • cnt_vpn_clients
dmz_mask = 255.255.255.0

(IP) Netmask to push into OpenVPN config.

Possible values:

  • Any valid IPv4/IPV6 netmask

Related options:

  • dmz_net - dmz_net and dmz_mask is pushed into bootscript.template to configure cloudpipe-managed OpenVPN server
  • boot_script_template
dmz_net = 10.0.0.0

(IP) Network to push into OpenVPN config.

Note: The above-mentioned OpenVPN config can be found at /etc/openvpn/server.conf.

Possible values:

  • Any valid IPv4/IPV6 address

Related options:

  • boot_script_template - dmz_net is pushed into bootscript.template to configure cloudpipe-managed OpenVPN server
vpn_flavor = m1.tiny

(String) Flavor for VPN instances.

Possible values:

  • Any valid flavor name
vpn_image_id = 0

(String) Image ID used when starting up a cloudpipe VPN client.

An empty instance is created and configured with OpenVPN using boot_script_template. This instance would be snapshotted and stored in glance. ID of the stored image is used in ‘vpn_image_id’ to create cloudpipe VPN client.

Possible values:

  • Any valid ID of a VPN image
vpn_key_suffix = -vpn

(String) Suffix to add to project name for VPN key and secgroups

Possible values:

  • Any string value representing the VPN key suffix
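For example, a cloudpipe-managed OpenVPN server could be configured with a fragment like the following (the network values mirror the defaults; the flavor is an example):

[cloudpipe]
# Network and netmask pushed into the OpenVPN config
dmz_net = 10.0.0.0
dmz_mask = 255.255.255.0
# Flavor and snapshot image used for VPN instances
vpn_flavor = m1.tiny
vpn_image_id = 0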
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
bindir = /usr/local/bin

(String) The directory where the Nova binaries are installed.

This option is only relevant if the networking capabilities from Nova are used (see services below). Nova’s networking capabilities are targeted to be fully replaced by Neutron in the future. It is very unlikely that you need to change this option from its default value.

Possible values:

  • The full path to a directory.
compute_topic = compute

(String) This is the message queue topic that the compute service ‘listens’ on. It is used when the compute service is started up to configure the queue, and whenever an RPC call to the compute service is made.

Possible values:

  • Any string, but there is almost never any reason to change this value from its default of ‘compute’.

Services that use this:

  • nova-compute

Related options:

  • None
console_topic = console

(String) Represents the message queue topic name used by nova-console service when communicating via the AMQP server. The Nova API uses a message queue to communicate with nova-console to retrieve a console URL for that host.

Possible values

  • ‘console’ (default) or any string representing topic exchange name.
consoleauth_topic = consoleauth

(String) This option allows you to change the message topic used by nova-consoleauth service when communicating via the AMQP server. Nova Console Authentication server authenticates nova consoles. Users can then access their instances through VNC clients. The Nova API service uses a message queue to communicate with nova-consoleauth to get a VNC console.

Possible Values:

  • ‘consoleauth’ (default) or any string representing the topic exchange name.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
fatal_exception_format_errors = False

(Boolean) DEPRECATED: When set to true, this option enables validation of exception message format.

This option is used to detect errors in NovaException class when it formats error messages. If True, raise an exception; if False, use the unformatted message. This is only used for internal testing.

host = localhost

(String) Hostname, FQDN or IP address of this host. Must be valid within AMQP key.

Possible values:

  • String with hostname, FQDN or IP address. Default is hostname of this host.
my_ip = 10.0.0.1

(String) The IP address which the host is using to connect to the management network.

Possible values:

  • String with valid IP address. Default is IPv4 address of this host.

Related options:

  • metadata_host
  • my_block_storage_ip
  • routing_source_ip
  • vpn_ip
notify_api_faults = False (Boolean) If enabled, send api.fault notifications on caught exceptions in the API service.
notify_on_state_change = None

(String) If set, send compute.instance.update notifications on instance state changes.

Please refer to https://wiki.openstack.org/wiki/SystemUsageData for additional information on notifications.

Possible values:

  • None - no notifications
  • “vm_state” - notifications on VM state changes
  • “vm_and_task_state” - notifications on VM and task state changes
pybasedir = /usr/lib/python/site-packages/nova

(String) The directory where the Nova python modules are installed.

This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value.

Possible values:

  • The full path to a directory.

Related options:

  • state_path
report_interval = 10 (Integer) Seconds between nodes reporting state to datastore
rootwrap_config = /etc/nova/rootwrap.conf

(String) Path to the rootwrap configuration file.

The goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry.

service_down_time = 60 (Integer) Maximum time since last check-in for up service
state_path = $pybasedir

(String) The top-level directory for maintaining Nova’s state.

This directory is used to store Nova’s internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option instances_path gets overwritten, this directory can grow very large.

Possible values:

  • The full path to a directory. Defaults to value provided in pybasedir.
tempdir = None (String) Explicitly specify the temporary working directory.
use_rootwrap_daemon = False (Boolean) Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes.
[workarounds]  
disable_libvirt_livesnapshot = True

(Boolean) Disable live snapshots when using the libvirt driver.

Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem.

When using libvirt 1.2.2, live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshots, in favor of cold snapshots, while this is resolved. A cold snapshot causes an instance outage while the guest goes through the snapshotting process.

For more information, refer to the bug report:

Possible values:

  • True: Live snapshot is disabled when using libvirt
  • False: Live snapshots are always used when snapshotting (as long as there is a new enough libvirt and the backend storage supports it)
disable_rootwrap = False

(Boolean) Use sudo instead of rootwrap.

Allow fallback to sudo for performance reasons.

For more information, refer to the bug report:

Possible values:

  • True: Use sudo instead of rootwrap
  • False: Use rootwrap as usual

Interdependencies to other options:

  • Any options that affect ‘rootwrap’ will be ignored.
handle_virt_lifecycle_events = True

(Boolean) Enable handling of events emitted from compute drivers.

Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored.

This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shut down automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, it is advisable to disable this feature.

Care should be taken when this feature is disabled and ‘sync_power_state_interval’ is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.

For more information, refer to the bug report:

Interdependencies to other options:

  • If sync_power_state_interval is negative and this feature is disabled, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually.
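As a sketch, the common and workaround options above might be combined in nova.conf like this (the state path is an example; whether to disable live snapshots depends on your libvirt version):

[DEFAULT]
# Keep Nova's state under a dedicated directory and use the rootwrap daemon
state_path = /var/lib/nova
rootwrap_config = /etc/nova/rootwrap.conf
use_rootwrap_daemon = True

[workarounds]
# Fall back to cold snapshots while live snapshots are unreliable
disable_libvirt_livesnapshot = True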
Description of Compute configuration options
Configuration option = Default value Description
[DEFAULT]  
compute_available_monitors = None (Multi-valued) DEPRECATED: Monitor classes available to the compute which may be specified more than once. This option is DEPRECATED and no longer used. Use setuptools entry points to list available monitor plugins. stevedore and setuptools entry points now allow a set of plugins to be specified without this config option.
compute_driver = None

(String) Defines which driver to use for controlling virtualization.

Possible values:

  • libvirt.LibvirtDriver
  • xenapi.XenAPIDriver
  • fake.FakeDriver
  • ironic.IronicDriver
  • vmwareapi.VMwareVCDriver
  • hyperv.HyperVDriver
compute_manager = nova.compute.manager.ComputeManager (String) DEPRECATED: Full class name for the Manager for compute
compute_monitors =

(List) A list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the “cpu.” namespace is assumed for backwards-compatibility.

Possible values:

  • An empty list disables the feature (default).
  • An example value enabling both the CPU and NUMA memory bandwidth monitors that use the virt driver variant: [“cpu.virt_driver”, “numa_mem_bw.virt_driver”]
compute_stats_class = nova.compute.stats.Stats

(String) DEPRECATED: Abstracts out managing compute host stats to pluggable class. This class manages and updates stats for the local compute host after an instance is changed. These configurable compute stats may be useful for a particular scheduler implementation.

Possible values

  • A string representing fully qualified class name.
console_host = socket.gethostname()

(String) Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host.

Possible values:

  • Current hostname (default) or any string representing hostname.
console_manager = nova.console.manager.ConsoleProxyManager (String) DEPRECATED: Full class name for the Manager for console proxy
default_flavor = m1.small (String) DEPRECATED: Default flavor to use for the EC2 API only. The Nova API does not support a default flavor. The EC2 API is deprecated
default_notification_level = INFO (String) Default notification level for outgoing notifications.
enable_instance_password = True (Boolean) Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, the password returned will not be correct, so set this option to False in that case.
heal_instance_info_cache_interval = 60 (Integer) Number of seconds between instance network information cache updates
image_cache_manager_interval = 2400 (Integer) Number of seconds to wait between runs of the image cache manager. Set to -1 to disable. Setting this to 0 will run at the default rate.
image_cache_subdirectory_name = _base (String) Where cached images are stored under $instances_path. This is NOT the full path - just a folder name. For per-compute-host cached images, set to _base_$my_ip
instance_build_timeout = 0

(Integer) Maximum time in seconds that an instance can take to build.

If this timer expires, instance status will be changed to ERROR. Enabling this option ensures an instance will not be stuck in the BUILD state indefinitely.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.
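For example, a deployment might fail builds that take longer than ten minutes (the value here is illustrative, not a recommendation):

```ini
[DEFAULT]
# Mark an instance as ERROR if it is still in BUILD after 600 seconds
instance_build_timeout = 600
```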
instance_delete_interval = 300 (Integer) Interval in seconds for retrying failed instance file deletes. Set to -1 to disable. Setting this to 0 will run at the default rate.
instance_usage_audit = False (Boolean) This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service.
instance_usage_audit_period = month

(String) Time period to generate instance usages for. It is possible to define optional offset to given period by appending @ character followed by a number defining offset.

Possible values:

  • period, for example: hour, day, month or year
  • period with offset, for example: month@15 will result in monthly audits starting on the 15th day of the month.
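An illustrative nova.conf fragment combining this option with the audit toggle described above:

```ini
[DEFAULT]
# Emit periodic compute.instance.exists notifications for Telemetry
instance_usage_audit = True
# Generate monthly usage audits starting on the 15th of each month
instance_usage_audit_period = month@15
```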

instances_path = $state_path/instances

(String) Specifies where instances are stored on the hypervisor’s disk. It can point to locally attached storage or a directory on NFS.

Possible values:

  • $state_path/instances where state_path is a config option that specifies the top-level directory for maintaining nova's state (default).
  • Any string representing a directory path.
max_concurrent_builds = 10

(Integer) Limits the maximum number of instance builds to run concurrently by nova-compute. The compute service can attempt to build an infinite number of instances, if asked to do so. This limit is enforced to avoid building an unlimited number of instances concurrently on a compute node. This value can be set per compute node.

Possible Values:

  • 0 : treated as unlimited.
  • Any positive integer representing maximum concurrent builds.
maximum_instance_delete_attempts = 5 (Integer) The number of times to attempt to reap an instance’s files.
reboot_timeout = 0

(Integer) Time interval after which an instance is hard rebooted automatically.

When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds.

Possible values:

  • 0: Disables the option (default).
  • Any positive integer in seconds: Enables the option.
reclaim_instance_interval = 0 (Integer) Interval in seconds for reclaiming deleted instances. It takes effect only when value is greater than 0.
rescue_timeout = 0

(Integer) Interval to wait before un-rescuing an instance stuck in RESCUE.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.
resize_confirm_window = 0

(Integer) Automatically confirm resizes after N seconds.

Resize functionality will save the existing server before resizing. After the resize completes, the user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirming the resize removes the original server and changes the server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server stays in the resized state longer than that time.

Possible values:

  • 0: Disables the option (default)
  • Any positive integer in seconds: Enables the option.
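As an illustrative sketch, auto-confirming resizes after one hour:

```ini
[DEFAULT]
# Automatically confirm a resize if the server stays in the
# "resized" state for more than 3600 seconds
resize_confirm_window = 3600
```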
resume_guests_state_on_host_boot = False (Boolean) This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts.
running_deleted_instance_action = reap

(String) The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. This option determines the action to be taken when such instances are identified.

Possible values:

  • reap: Powers down the instances and deletes them (default)
  • log: Logs a warning message about deletion of the resource
  • shutdown: Powers down instances and marks them as non-bootable, which can later be used for debugging/analysis
  • noop: Takes no action

Related options:

  • running_deleted_instance_poll_interval
  • running_deleted_instance_timeout
running_deleted_instance_poll_interval = 1800

(Integer) Time interval in seconds to wait between runs of the clean up action. If set to 0, the above check is disabled. If "running_deleted_instance_action" is set to "log" or "reap", a value greater than 0 must be set.

Possible values:

  • Any positive integer in seconds enables the option.
  • 0: Disables the option.
  • 1800: Default value.

Related options:

  • running_deleted_instance_action
running_deleted_instance_timeout = 0

(Integer) Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup.

Possible values:

  • Any positive integer in seconds (default is 0).

Related options:

  • “running_deleted_instance_action”
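Putting the three related options together, an illustrative fragment (values are examples, not recommendations):

```ini
[DEFAULT]
# Reap instances that are deleted in the database but still running
running_deleted_instance_action = reap
# Check for such instances every 30 minutes (must be > 0 for reap/log)
running_deleted_instance_poll_interval = 1800
# Wait 60 seconds after DB deletion before an instance is eligible
running_deleted_instance_timeout = 60
```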
shelved_offload_time = 0 (Integer) Time in seconds before a shelved instance is eligible for removal from a host. -1: never offload; 0: offload immediately when shelved.
shelved_poll_interval = 3600 (Integer) Interval in seconds for polling shelved instances to offload. Set to -1 to disable. Setting this to 0 will run at the default rate.
shutdown_timeout = 60

(Integer) Total time to wait in seconds for an instance to perform a clean shutdown.

It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue, shelve, and rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds.

The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly.

Possible values:

  • Any positive integer in seconds (default value is 60).
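A minimal sketch of raising the clean-shutdown window (remember that the per-image os_shutdown_timeout metadata property, if set, overrides this):

```ini
[DEFAULT]
# Give guests up to 120 seconds to shut down cleanly before power-off
shutdown_timeout = 120
```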
sync_power_state_interval = 600 (Integer) Interval to sync power states between the database and the hypervisor. Set to -1 to disable. Setting this to 0 will run at the default rate.
sync_power_state_pool_size = 1000

(Integer) Number of greenthreads available for use to sync power states.

This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic.

Possible values:

  • Any positive integer representing greenthreads count.
update_resources_interval = 0 (Integer) Interval in seconds for updating compute resources. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds.
vif_plugging_is_fatal = True

(Boolean) Determine if instance should boot or fail on VIF plugging timeout.

Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval.

This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready.

Possible values:

  • True: Instances should fail after VIF plugging timeout
  • False: Instances should continue booting after VIF plugging timeout
vif_plugging_timeout = 300

(Integer) Timeout for Neutron VIF plugging event message arrival.

Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see ‘vif_plugging_is_fatal’).

Interdependencies to other options:

  • vif_plugging_is_fatal - If vif_plugging_timeout is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all.
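The two VIF plugging options above are typically tuned together; an illustrative fragment:

```ini
[DEFAULT]
# Fail the boot if Neutron does not confirm port setup in time
vif_plugging_is_fatal = True
# Wait up to 300 seconds for the Neutron VIF plugging event
vif_plugging_timeout = 300
```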
Description of conductor configuration options
Configuration option = Default value Description
[DEFAULT]  
migrate_max_retries = -1

(Integer) Number of times to retry live-migration before failing.

Possible values:

  • If == -1, try until out of hosts (default)
  • If == 0, only try once, no retries
  • Integer greater than 0
[conductor]  
manager = nova.conductor.manager.ConductorManager

(String) DEPRECATED: Full class name for the Manager for conductor.

Removal in 14.0

topic = conductor (String) Topic exchange name on which conductor nodes listen.
use_local = False

(Boolean) DEPRECATED: Perform nova-conductor operations locally. This legacy mode was introduced to bridge a gap during the transition to the conductor service. It no longer represents a reasonable alternative for deployers.

Removal may be as early as 14.0.

workers = None (Integer) Number of workers for OpenStack Conductor service. The default will be the number of CPUs available.
Description of config drive configuration options
Configuration option = Default value Description
[DEFAULT]  
config_drive_format = iso9660

(String) Configuration drive format

Configuration drive format that will contain metadata attached to the instance when it boots.

Possible values:

  • iso9660: A file system image standard that is widely supported across operating systems. NOTE: Mind the libvirt bug (https://bugs.launchpad.net/nova/+bug/1246201) - If your hypervisor driver is libvirt, and you want live migrate to work without shared storage, then use VFAT.
  • vfat: For legacy reasons, you can configure the configuration drive to use VFAT format instead of ISO 9660.

Related options:

  • This option is meaningful when one of the following alternatives occurs: 1. the force_config_drive option is set to 'true'; 2. the REST API call to create the instance contains an enable flag for the config drive option; 3. the image used to create the instance requires a config drive, as defined by the img_config_drive property for that image.
  • A compute node running Hyper-V hypervisor can be configured to attach configuration drive as a CD drive. To attach the configuration drive as a CD drive, set config_drive_cdrom option at hyperv section, to true.
config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01

(String) When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don’t appear in this option. As of the Liberty release, the available versions are:

  • 1.0
  • 2007-01-19
  • 2007-03-01
  • 2007-08-29
  • 2007-10-10
  • 2007-12-15
  • 2008-02-01
  • 2008-09-01
  • 2009-04-04

The option is in the format of a single string, with each version separated by a space.

Possible values:

  • Any string that represents zero or more versions, separated by spaces.
force_config_drive = False

(Boolean) Force injection to take place on a config drive

When this option is set to true, configuration drive functionality is forced on by default; otherwise, users can still enable configuration drives via the REST API or image metadata properties.

Possible values:

  • True: Force use of configuration drive regardless of the user's input in the REST API call.
  • False: Do not force use of configuration drive. Config drives can still be enabled via the REST API or image metadata properties.

Related options:

  • Use the ‘mkisofs_cmd’ flag to set the path where you install the genisoimage program. If genisoimage is in same path as the nova-compute service, you do not need to set this flag.
  • To use configuration drive with Hyper-V, you must set the ‘mkisofs_cmd’ value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation.
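A sketch of a deployment that always provides a config drive (the mkisofs_cmd path shown is a placeholder; adjust it to where genisoimage is actually installed):

```ini
[DEFAULT]
# Always attach a configuration drive to new instances
force_config_drive = True
config_drive_format = iso9660
# Placeholder path to the genisoimage binary
mkisofs_cmd = /usr/bin/genisoimage
```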
mkisofs_cmd = genisoimage

(String) Name or path of the tool used for ISO image creation

Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value.

To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation.

Possible values:

  • Name of the ISO image creator program, in case it is in the same directory as the nova-compute service
  • Path to ISO image creator program

Related options:

  • This option is meaningful when config drives are enabled.
  • To use configuration drive with Hyper-V, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation.
[hyperv]  
config_drive_cdrom = False

(Boolean) Configuration drive cdrom

OpenStack can be configured to write instance metadata to a configuration drive, which is then attached to the instance before it boots. The configuration drive can be attached as a disk drive (default) or as a CD drive.

Possible values:

  • True: Attach the configuration drive image as a CD drive.
  • False: Attach the configuration drive image as a disk drive (Default).

Related options:

  • This option is meaningful with the force_config_drive option set to 'True' or when the REST API call to create an instance has the '--config-drive=True' flag.
  • config_drive_format option must be set to ‘iso9660’ in order to use CD drive as the configuration drive image.
  • To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value to the full path to an qemu-img command installation.
  • You can configure the Compute service to always create a configuration drive by setting the force_config_drive option to ‘True’.
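For Hyper-V, attaching the config drive as a CD requires the ISO 9660 format; an illustrative combination of the related options above:

```ini
[DEFAULT]
force_config_drive = True
# CD attachment requires the iso9660 format
config_drive_format = iso9660

[hyperv]
# Attach the configuration drive image as a CD drive
config_drive_cdrom = True
```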
config_drive_inject_password = False

(Boolean) Configuration drive inject password

Enables setting the admin password in the configuration drive image.

Related options:

  • This option is meaningful when used with other options that enable configuration drive usage with Hyper-V, such as force_config_drive.
  • Currently, the only accepted config_drive_format is ‘iso9660’.
Description of console configuration options
Configuration option = Default value Description
[DEFAULT]  
console_allowed_origins =

(List) Adds a list of allowed origins to the console websocket proxy to allow connections from origin hostnames other than the host. The websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies which values, other than the host, are allowed in the origin header.

Possible values:

  • An empty list (default) or list of allowed origin hostnames.
console_public_hostname = localhost

(String) Publicly visible name for this console host.

Possible values:

  • Current hostname (default) or any string representing hostname.
console_token_ttl = 600 (Integer) This option indicates the lifetime of a console auth token. A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted.
consoleauth_manager = nova.consoleauth.manager.ConsoleAuthManager (String) DEPRECATED: Manager for console auth
[mks]  
enabled = False (Boolean) Enables graphical console access for virtual machines.
mksproxy_base_url = http://127.0.0.1:6090/

(String) Location of MKS web console proxy

The URL in the response points to a WebMKS proxy which starts proxying between the client and the corresponding vCenter server where the instance runs. In order to use web based console access, the WebMKS proxy should be installed and configured.

Possible values:

  • Must be a valid URL of the form: http://host:port/
Description of crypt configuration options
Configuration option = Default value Description
[crypto]  
ca_file = cacert.pem

(String) Filename of root CA (Certificate Authority). This is a container format and includes root certificates.

Possible values:

  • Any file name containing root CA, cacert.pem is default

Related options:

  • ca_path
ca_path = $state_path/CA

(String) Directory path where root CA is located.

Related options:

  • ca_file
crl_file = crl.pem

(String) Filename of root Certificate Revocation List (CRL). This is a list of certificates that have been revoked, and therefore, entities presenting those (revoked) certificates should no longer be trusted.

Related options:

  • ca_path
key_file = private/cakey.pem

(String) Filename of a private key.

Related options:

  • keys_path
keys_path = $state_path/keys

(String) Directory path where keys are located.

Related options:

  • key_file
project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s (String) Subject for certificate for projects, %s for project, timestamp
use_project_ca = False (Boolean) Option to enable/disable use of CA for each project.
user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s (String) Subject for certificate for users, %s for project, user, timestamp
[ssl]  
ca_file = None (String) CA certificate file to use to verify connecting clients.
cert_file = None (String) Certificate file to use when starting the server securely.
ciphers = None (String) Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format.
key_file = None (String) Private key file to use when starting the server securely.
version = None (String) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
Description of logging configuration options
Configuration option = Default value Description
[guestfs]  
debug = False

(Boolean) Enable/disables guestfs logging.

This configures guestfs to produce debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. In order to use this feature, the "libguestfs" package must be installed.

Related options: Since libguestfs accesses and modifies VMs managed by libvirt, the options below should be set to give access to those VMs:

  • libvirt.inject_key
  • libvirt.inject_partition
  • libvirt.inject_password

[remote_debug]  
host = None

(String) Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host.

Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.

Possible Values:

  • IP address of a remote host as a command line parameter to a nova service. For Example:

/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address where the debugger is running>

port = None

(Port number) Debug port to connect to. This command line parameter allows you to specify the port to use to connect to a nova service via a debugger running on a different host.

Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk.

Possible Values:

  • Port number you want to use as a command line parameter to a nova service. For Example:

/usr/local/bin/nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address where the debugger is running> --remote_debug-port <port the debugger is listening on>
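The same settings can also be expressed as a config file fragment; the host and port values below are placeholders:

```ini
[remote_debug]
# Placeholder host and port where the remote debugger is listening
host = 192.0.2.10
port = 5678
```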

Description of ephemeral storage encryption configuration options
Configuration option = Default value Description
[ephemeral_storage_encryption]  
cipher = aes-xts-plain64

(String) Cipher-mode string to be used

The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support.

Possible values:

  • aes-xts-plain64 (Default), see /proc/crypto for available options.
enabled = False (Boolean) Enables/disables LVM ephemeral storage encryption.
key_size = 512

(Integer) Encryption key length in bits

The bit length of the encryption key to be used to encrypt ephemeral storage (in XTS mode only half of the bits are used for encryption key).

Description of fping configuration options
Configuration option = Default value Description
[DEFAULT]  
fping_path = /usr/sbin/fping (String) The full path to the fping binary.
Description of glance configuration options
Configuration option = Default value Description
[DEFAULT]  
osapi_glance_link_prefix = None

(String) This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged.

Possible values:

  • Any string, including an empty string (the default).
[glance]  
allowed_direct_url_schemes = (List) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
api_insecure = False (Boolean) Allow performing insecure SSL (https) requests to glance.
api_servers = None (List) A list of the glance api servers endpoints available to nova. These should be fully qualified urls of the form “scheme://hostname:port[/path]” (i.e. “http://10.0.1.0:9292” or “https://my.glance.server/image”)
debug = False (Boolean) Enable or disable debug logging with glanceclient.
num_retries = 0 (Integer) Number of retries when uploading / downloading an image to / from glance.
use_glance_v1 = False

(Boolean) DEPRECATED: This flag allows reverting to glance v1 if for some reason glance v2 doesn’t work in your environment. This will only exist in Newton, and a fully working Glance v2 will be a hard requirement in Ocata.

Possible values:

  • True or False

Services that use this:

  • nova-api
  • nova-compute
  • nova-conductor

Related options:

  • None

Glance v1 support will be removed in Ocata.
verify_glance_signatures = False (Boolean) Require Nova to perform signature verification on each image downloaded from Glance.
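A hedged example of pointing nova at two glance API endpoints (the URLs are placeholders; replace them with your deployment's glance servers):

```ini
[glance]
# Placeholder endpoints for the glance API servers
api_servers = http://glance1.example.com:9292,http://glance2.example.com:9292
# Retry failed image uploads/downloads a few times before giving up
num_retries = 3
```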
[image_file_url]  
filesystems = (List) DEPRECATED: List of file systems that are configured in this file in the image_file_url:<list entry name> sections The feature to download images from glance via filesystem is not used and will be removed in the future.
Description of HyperV configuration options
Configuration option = Default value Description
[hyperv]  
dynamic_memory_ratio = 1.0

(Floating point) Dynamic memory ratio

Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup.

Possible values:

  • 1.0: Disables dynamic memory allocation (Default).
  • Float values greater than 1.0: Enables allocation of total implied RAM divided by this value for startup.
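For instance, to start guests with half of their assigned RAM and let Hyper-V balloon memory up as needed (illustrative value):

```ini
[hyperv]
# A ratio of 2.0 means a 1024MB instance starts with 512MB allocated
dynamic_memory_ratio = 2.0
```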
enable_instance_metrics_collection = False

(Boolean) Enable instance metrics collection

Enables metrics collection for an instance by using Hyper-V's metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer.

enable_remotefx = False

(Boolean) Enable RemoteFX feature

This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer, and the RDS-Virtualization feature has to be enabled.

Instances with RemoteFX can be requested with the following flavor extra specs:

os:resolution. Guest VM screen resolution size. Acceptable values:

1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160

3840x2160 is only available on Windows / Hyper-V Server 2016.

os:monitors. Guest VM number of monitors. Acceptable values:

  • [1, 4] - Windows / Hyper-V Server 2012 R2
  • [1, 8] - Windows / Hyper-V Server 2016

os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values:

64, 128, 256, 512, 1024
instances_path_share =

(String) Instances path share

The name of a Windows share mapped to the “instances_path” dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same “instances_path” used locally.

Possible values:

  • “”: An administrative share will be used (Default).
  • Name of a Windows share.

Related options:

  • “instances_path”: The directory which will be used if this option here is left blank.
limit_cpu_features = False

(Boolean) Limit CPU features

This flag is needed to support live migration to hosts with different CPU features and checked during instance creation in order to limit the CPU features used by the instance.

mounted_disk_query_retry_count = 10

(Integer) Mounted disk query retry count

The number of times to retry checking for a disk mounted via iSCSI. During long stress runs the WMI query that is looking for the iSCSI device number can incorrectly return no data. If the query is retried the appropriate data can then be obtained. The query runs until the device can be found or the retry count is reached.

Possible values:

  • Positive integer values. Values greater than 1 are recommended (Default: 10).

Related options:

  • Time interval between disk mount retries is declared with “mounted_disk_query_retry_interval” option.
mounted_disk_query_retry_interval = 5

(Integer) Mounted disk query retry interval

Interval between checks for a mounted iSCSI disk, in seconds.

Possible values:

  • Time in seconds (Default: 5).

Related options:

  • This option is meaningful when the mounted_disk_query_retry_count is greater than 1.
  • The retry loop runs with mounted_disk_query_retry_count and mounted_disk_query_retry_interval configuration options.
power_state_check_timeframe = 60

(Integer) Power state check timeframe

The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe.

Possible values:

  • Timeframe in seconds (Default: 60).
power_state_event_polling_interval = 2

(Integer) Power state event polling interval

Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value.

Possible values:

  • Time in seconds (Default: 2).
qemu_img_cmd = qemu-img.exe

(String) qemu-img command

qemu-img is required for some of the image related operations like converting between different image types. You can get it from here: (http://qemu.weilnetz.de/) or you can install the Cloudbase OpenStack Hyper-V Compute Driver (https://cloudbase.it/openstack-hyperv-driver/) which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option to the default value.

Possible values:

  • Name of the qemu-img executable, in case it is in the same directory as the nova-compute service or its path is in the PATH environment variable (Default).
  • Path of qemu-img command (DRIVELETTER:PATHTOQEMU-IMGCOMMAND).

Related options:

  • If the config_drive_cdrom option is False, qemu-img will be used to convert the ISO to a VHD, otherwise the configuration drive will remain an ISO. To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation.
vswitch_name = None

(String) External virtual switch name

The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private).

Possible values:

  • If not provided, the first of a list of available vswitches is used. This list is queried using WQL.
  • Virtual switch name.
wait_soft_reboot_seconds = 60

(Integer) Wait soft reboot seconds

Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window.

Possible values:

  • Time in seconds (Default: 60).
Description of hypervisor configuration options
Configuration option = Default value Description
[DEFAULT]  
default_ephemeral_format = None

(String) The default format an ephemeral_volume will be formatted with on creation.

Possible values:

  • ext2
  • ext3
  • ext4
  • xfs
  • ntfs (only for Windows guests)
force_raw_images = True

(Boolean) Force conversion of backing images to raw format.

Possible values:

  • True: Backing image files will be converted to raw image format
  • False: Backing image files will not be converted

Interdependencies to other options:

  • compute_driver: Only the libvirt driver uses this option.
pointer_model = usbtablet

(String) Generic property to specify the pointer type.

Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement.

If set, the ‘hw_pointer_model’ image property takes precedence over this configuration option.

Possible values:

  • None: Uses default behavior provided by drivers (mouse on PS2 for libvirt x86)
  • ps2mouse: Uses relative movement. Mouse connected by PS2
  • usbtablet: Uses absolute movement. Tablet connect by USB

Interdependencies to other options:

  • usbtablet must be configured with VNC enabled or SPICE enabled and SPICE agent disabled. When used with libvirt the instance mode should be configured as HVM.
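A minimal sketch of enabling an absolute-movement pointer for VNC sessions, per the interdependency above (the [vnc] enabled option is referenced later in this reference; the combination shown is an assumption for illustration):

```ini
[DEFAULT]
# usbtablet requires VNC enabled, or SPICE enabled with the SPICE agent disabled.
pointer_model = usbtablet

[vnc]
enabled = true
```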
preallocate_images = none

(String) The image preallocation mode to use. Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn’t available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation.

Possible values:

  • “none” => no storage provisioning is done up front
  • “space” => storage is fully allocated at instance start
timeout_nbd = 10 (Integer) Amount of time, in seconds, to wait for NBD device start up.
use_cow_images = True

(Boolean) Enable use of copy-on-write (cow) images.

QEMU/KVM allows the use of qcow2 as backing files. If this is disabled, backing files will not be used.

vcpu_pin_set = None

(String) Defines which physical CPUs (pCPUs) can be used by instance virtual CPUs (vCPUs).

Possible values:

  • A comma-separated list of physical CPU numbers that virtual CPUs can be allocated to by default. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range. For example:
vcpu_pin_set = "4-12,^8,15"
virt_mkfs = [] (Multi-valued) Name of the mkfs commands for ephemeral device. The format is <os_type>=<mkfs command>
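An illustrative pair of virt_mkfs entries showing the <os_type>=<mkfs command> format described above. The command strings and the %(fs_label)s / %(target)s substitutions are assumptions for this sketch, not values taken from this reference:

```ini
[DEFAULT]
# One entry per guest OS type; the multi-valued option may be repeated.
virt_mkfs = default=mkfs.ext4 -F -L %(fs_label)s %(target)s
virt_mkfs = windows=mkfs.ntfs --force --fast %(target)s
```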
Description of bare metal configuration options
Configuration option = Default value Description
[ironic]  
admin_password = None (String) DEPRECATED: Ironic keystone admin password. Use password instead.
admin_tenant_name = None (String) DEPRECATED: Ironic keystone tenant name. Use project_name instead.
admin_url = None (String) DEPRECATED: Keystone public API endpoint. Use auth_url instead.
admin_username = None (String) DEPRECATED: Ironic keystone admin name. Use username instead.
api_endpoint = http://ironic.example.org:6385/ (String) URL override for the Ironic API endpoint.
api_max_retries = 60

(Integer) The number of times to retry when a request conflicts. If set to 0, only try once, no retries.

Related options:

  • api_retry_interval
api_retry_interval = 2

(Integer) The number of seconds to wait before retrying the request.

Related options:

  • api_max_retries
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) If True, skip verification of the server certificate for HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
timeout = None (Integer) Timeout value for http requests
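Since the admin_* options above are deprecated, a minimal [ironic] section using their documented replacements might look like the following sketch. The username, password, project_name, and auth_url values are placeholders, and those option names are taken from the deprecation notes above rather than listed in this table:

```ini
[ironic]
auth_type = password
auth_url = http://controller:5000/v3
username = ironic
password = IRONIC_PASS
project_name = service
api_endpoint = http://ironic.example.org:6385/
api_max_retries = 60
api_retry_interval = 2
```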
Description of IPv6 configuration options
Configuration option = Default value Description
[DEFAULT]  
fixed_range_v6 = fd00::/48

(String) This option determines the fixed IPv6 address block when creating a network.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

Any valid IPv6 CIDR. The default value is “fd00::/48”.

Related options:

use_neutron
gateway_v6 = None

(String) This is the default IPv6 gateway. It is used only in the testing suite.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

Any valid IP address.

Related options:

use_neutron, gateway
ipv6_backend = rfc2462

(String) Abstracts out IPv6 address generation to pluggable backends.

nova-network can be put into dual-stack mode, so that it uses both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances acquire IPv6 global unicast addresses with the help of the stateless address autoconfiguration (SLAAC) mechanism.

Related options:

  • use_neutron: this option only works with nova-network.
  • use_ipv6: this option only works if ipv6 is enabled for nova-network.
use_ipv6 = False

(Boolean) Assign IPv6 and IPv4 addresses when creating instances.

Related options:

  • use_neutron: this only works with nova-network.
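A minimal dual-stack sketch for nova-network, pulling together the IPv6 options above (these options have no effect when use_neutron is True):

```ini
[DEFAULT]
use_neutron = False
use_ipv6 = True
fixed_range_v6 = fd00::/48
# rfc2462 selects stateless address autoconfiguration.
ipv6_backend = rfc2462
```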
Description of key manager configuration options
Configuration option = Default value Description
[key_manager]  
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager (String) The full class name of the key manager API class
fixed_key = None

(String) Fixed key returned by key manager, specified in hex.

Possible values:

  • Empty string or a key in hex value
Description of LDAP configuration options
Configuration option = Default value Description
[DEFAULT]  
ldap_dns_base_dn = ou=hosts,dc=example,dc=org (String) Base DN for DNS entries in LDAP
ldap_dns_password = password (String) Password for LDAP DNS
ldap_dns_servers = ['dns.example.org'] (Multi-valued) DNS Servers for LDAP DNS driver
ldap_dns_soa_expiry = 86400 (String) Expiry interval (in seconds) for LDAP DNS driver Start of Authority
ldap_dns_soa_hostmaster = hostmaster@example.org (String) Hostmaster for LDAP DNS driver Start of Authority
ldap_dns_soa_minimum = 7200 (String) Minimum interval (in seconds) for LDAP DNS driver Start of Authority
ldap_dns_soa_refresh = 1800 (String) Refresh interval (in seconds) for LDAP DNS driver Start of Authority
ldap_dns_soa_retry = 3600 (String) Retry interval (in seconds) for LDAP DNS driver Start of Authority
ldap_dns_url = ldap://ldap.example.com:389 (String) URL for LDAP server which will store DNS entries
ldap_dns_user = uid=admin,ou=people,dc=example,dc=org (String) User for LDAP DNS
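Pulling the options above together, a sketch of an LDAP DNS driver configuration (server names and credentials are placeholders):

```ini
[DEFAULT]
ldap_dns_url = ldap://ldap.example.com:389
ldap_dns_user = uid=admin,ou=people,dc=example,dc=org
ldap_dns_password = LDAP_DNS_PASS
ldap_dns_base_dn = ou=hosts,dc=example,dc=org
ldap_dns_soa_hostmaster = hostmaster@example.org
# SOA timers, in seconds
ldap_dns_soa_refresh = 1800
ldap_dns_soa_retry = 3600
ldap_dns_soa_expiry = 86400
ldap_dns_soa_minimum = 7200
```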
Description of Libvirt configuration options
Configuration option = Default value Description
[DEFAULT]  
remove_unused_base_images = True (Boolean) Should unused base images be removed?
remove_unused_original_minimum_age_seconds = 86400 (Integer) Unused unresized base images younger than this will not be removed
[libvirt]  
checksum_base_images = False (Boolean) DEPRECATED: Write a checksum for files in _base to disk. The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level.
checksum_interval_seconds = 3600 (Integer) DEPRECATED: How frequently to checksum base images. The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level.
connection_uri =

(String) Overrides the default libvirt URI of the chosen virtualization type.

If set, Nova will use this URI to connect to libvirt.

Possible values:

  • A URI like qemu:///system or xen+ssh://oirase/ for example. This is only necessary if the URI differs from the commonly known URIs for the chosen virtualization type.

Related options:

  • virt_type: Influences what is used as default value here.
cpu_mode = None

(String) Sets the CPU mode an instance should have.

If virt_type=”kvm|qemu”, it will default to “host-model”, otherwise it will default to “none”.

Possible values:

  • host-model: Clones the host CPU feature flags.
  • host-passthrough: Use the host CPU model exactly;
  • custom: Use a named CPU model;
  • none: Do not set any CPU model.

Related options:

  • cpu_model: If custom is used for cpu_mode, set this config option too, otherwise this would result in an error and the instance won’t be launched.
cpu_model = None

(String) Set the name of the libvirt CPU model the instance should use.

Possible values:

  • The names listed in /usr/share/libvirt/cpu_map.xml

Related options:

  • cpu_mode: Don’t set this when cpu_mode is NOT set to custom. This would result in an error and the instance won’t be launched.
  • virt_type: Only the virtualization types kvm and qemu use this.
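Since custom mode requires cpu_model to be set (see the related options above), the two options are typically configured together; for example (the model name here is an assumption and must appear in /usr/share/libvirt/cpu_map.xml on the host):

```ini
[libvirt]
virt_type = kvm
cpu_mode = custom
# Must be a model name from /usr/share/libvirt/cpu_map.xml
cpu_model = Haswell-noTSX
```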
disk_cachemodes = (List) Specific cache modes to use for different disk types, for example: file=directsync,block=none
disk_prefix = None

(String) Override the default disk prefix for the devices attached to an instance.

If set, this is used to identify a free disk device name for a bus.

Possible values:

  • Any prefix which will result in a valid disk device name like ‘sda’ or ‘hda’ for example. This is only necessary if the device names differ from the commonly known device name prefixes for a virtualization type such as: sd, xvd, uvd, vd.

Related options:

  • virt_type: Influences which device type is used, which determines the default disk prefix.
enabled_perf_events =

(List) A list of performance events to monitor. These events are passed into the libvirt domain XML when creating new instances, and event statistics data can then be collected from libvirt. The minimum libvirt version is 2.0.0. For more information about performance monitoring events, refer to https://libvirt.org/formatdomain.html#elementsPerf .

Possible values:

  • A list of event names. For example: enabled_perf_events = cmt, mbml, mbmt. The supported events are listed in https://libvirt.org/html/libvirt-libvirt-domain.html (search for VIR_PERF_PARAM_*).
  • Services that use this: nova-compute
  • Related options: None
gid_maps = (List) List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed.
hw_disk_discard = None (String) Discard option for nova managed disks. Requires libvirt 1.0.6, and QEMU 1.5 (raw format) or QEMU 1.6 (qcow2 format).
hw_machine_type = None (List) For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the “virsh capabilities” command. The format of the value for this config option is host-arch=machine-type. For example: x86_64=machinetype1,armv7l=machinetype2
image_info_filename_pattern = $instances_path/$image_cache_subdirectory_name/%(image)s.info (String) DEPRECATED: Allows image information files to be stored in non-standard locations Image info files are no longer used by the image cache
images_rbd_ceph_conf = (String) Path to the ceph configuration file to use
images_rbd_pool = rbd (String) The RADOS pool in which rbd volumes are stored
images_type = default (String) VM Images format. If default is specified, then use_cow_images flag is used instead of this one.
images_volume_group = None (String) LVM Volume Group that is used for VM images, when you specify images_type=lvm.
inject_key = False

(Boolean) Allow the injection of an SSH key at boot time.

There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call, will be injected as the SSH key for the root user and appended to the authorized_keys of that user. The SELinux context will be set if necessary. Be aware that the injection is not possible when the instance gets launched from a volume.

This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service.

Related options:

  • inject_partition: That option will decide about the discovery and usage of the file system. It also can disable the injection at all.
inject_partition = -2

(Integer) Determines how the file system for data injection is chosen.

libguestfs is used as the first option to inject data. If it is not available on the host, the image is mounted locally on the host as a fallback. If libguestfs is not able to determine the root partition (because there is more or less than one root partition) or cannot mount the file system, this results in an error and the instance won’t boot.

Possible values:

  • -2 => disable the injection of data.
  • -1 => find the root partition with the file system to mount with libguestfs
  • 0 => The image is not partitioned
  • >0 => The number of the partition to use for the injection

Related options:

  • inject_key: SSH key injection only works when inject_partition is set to a value greater than or equal to -1.
  • inject_password: Admin password injection only works when inject_partition is set to a value greater than or equal to -1.
  • guestfs You can enable the debug log level of libguestfs with this config option. A more verbose output will help in debugging issues.
  • virt_type: If you use lxc as virt_type it will be treated as a single partition image
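A sketch enabling SSH key injection with automatic root-partition discovery; per the related options above, injection requires inject_partition to be -1 or greater:

```ini
[libvirt]
inject_key = True
# -1: let libguestfs find the root partition; -2 (default) disables injection.
inject_partition = -1
```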
inject_password = False

(Boolean) Allow the injection of an admin password for instance only at create and rebuild process.

There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call will be injected as password for the root user. If no root user is available, the instance won’t be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume.

Possible values:

  • True: Allows the injection.
  • False (default): Disallows the injection. Any admin password provided via the REST API will be silently ignored.

Related options:

  • inject_partition: That option will decide about the discovery and usage of the file system. It also can disable the injection at all.
iscsi_iface = None (String) The iSCSI transport iface to use to connect to target in case offload support is desired. Default format is of the form <transport_name>.<hwaddress> where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and <hwaddress> is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name.
iser_use_multipath = False (Boolean) Use multipath connection of the iSER volume
mem_stats_period_seconds = 10 (Integer) Period, in seconds, of the memory usage statistics collection. A zero or negative value disables memory usage statistics.
realtime_scheduler_priority = 1 (Integer) In a realtime host context vCPUs for guest will run in that scheduling priority. Priority depends on the host kernel (usually 1-99)
remove_unused_resized_minimum_age_seconds = 3600 (Integer) Unused resized base images younger than this will not be removed
rescue_image_id = None

(String) The ID of the image to boot from to rescue data from a corrupted instance.

If the rescue REST API operation doesn’t provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used.

Possible values:

  • An ID of an image or nothing. If it points to an Amazon Machine Image (AMI), consider setting the config options rescue_kernel_id and rescue_ramdisk_id too. If nothing is set, the image of the instance is used.

Related options:

  • rescue_kernel_id: If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon‘s AMI/AKI/ARI image format is used for the rescue image.
  • rescue_ramdisk_id: If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon‘s AMI/AKI/ARI image format is used for the rescue image.
rescue_kernel_id = None

(String) The ID of the kernel (AKI) image to use with the rescue image.

If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon‘s AMI/AKI/ARI image format is used for the rescue image.

Possible values:

  • An ID of an kernel image or nothing. If nothing is specified, the kernel disk from the instance is used if it was launched with one.

Related options:

  • rescue_image_id: If that option points to an image in Amazon‘s AMI/AKI/ARI image format, it’s useful to use rescue_kernel_id too.
rescue_ramdisk_id = None

(String) The ID of the RAM disk (ARI) image to use with the rescue image.

If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon‘s AMI/AKI/ARI image format is used for the rescue image.

Possible values:

  • An ID of a RAM disk image or nothing. If nothing is specified, the RAM disk from the instance is used if it was launched with one.

Related options:

  • rescue_image_id: If that option points to an image in Amazon‘s AMI/AKI/ARI image format, it’s useful to use rescue_ramdisk_id too.
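For an AMI-format rescue image, the three rescue options above are typically set together (the IDs here are placeholders for real image UUIDs):

```ini
[libvirt]
rescue_image_id = RESCUE_AMI_ID
# Only needed when the rescue image is in AMI/AKI/ARI format:
rescue_kernel_id = RESCUE_AKI_ID
rescue_ramdisk_id = RESCUE_ARI_ID
```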
rng_dev_path = None (String) A path to a device that will be used as source of entropy on the host. Permitted options are: /dev/random or /dev/hwrng
snapshot_compression = False (Boolean) Compress snapshot images when possible. This currently applies exclusively to qcow2 images
snapshot_image_format = None (String) Snapshot image format. Defaults to same as source image
snapshots_directory = $instances_path/snapshots (String) Location where libvirt driver will store snapshots before uploading them to image service
sparse_logical_volumes = False (Boolean) Create sparse logical volumes (with virtualsize) if this flag is set to True.
sysinfo_serial = auto (String) The data source used to populate the host “serial” UUID exposed to guest in the virtual BIOS.
uid_maps = (List) List of uid targets and ranges. Syntax is guest-uid:host-uid:count. Maximum of 5 allowed.
use_usb_tablet = True

(Boolean) DEPRECATED: Enable a mouse cursor within graphical VNC or SPICE sessions.

This will only be taken into account if the VM is fully virtualized and VNC and/or SPICE is enabled. If the node doesn’t support a graphical framebuffer, then it is valid to set this to False.

Related options:

  • [vnc]enabled: If VNC is enabled, use_usb_tablet will have an effect.
  • [spice]enabled + [spice].agent_enabled: If SPICE is enabled and the spice agent is disabled, the config value of use_usb_tablet will have an effect. This option is being replaced by the ‘pointer_model’ option.
use_virtio_for_bridges = True (Boolean) Use virtio for bridge interfaces with KVM/QEMU
virt_type = kvm

(String) Describes the virtualization type (or so called domain type) libvirt should use.

The choice of this type must match the underlying virtualization strategy you have chosen for this host.

Possible values:

  • See the predefined set of case-sensitive values.

Related options:

  • connection_uri: depends on this
  • disk_prefix: depends on this
  • cpu_mode: depends on this
  • cpu_model: depends on this
volume_clear = zero (String) Method used to wipe old volumes.
volume_clear_size = 0 (Integer) Size in MiB to wipe at start of old volumes. 0 => all
volume_use_multipath = False (Boolean) Use multipath connection of the iSCSI or FC volume
vzstorage_cache_path = None

(String) Path to the SSD cache file.

You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. A local cache on a client’s SSD drive can increase overall cluster performance by 10 times or more. WARNING! Many SSD models are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage, as they may lead to data corruption and inconsistencies. Consult the manual for SSD models that are known to be safe, or verify a model using the vstorage-hwflush-check(1) utility.

This option defines the path which should include “%(cluster_name)s” template to separate caches from multiple shares.

  • Services that use this:
nova-compute
  • Related options:
vzstorage_mount_opts may include more detailed cache options.
vzstorage_log_path = /var/log/pstorage/%(cluster_name)s/nova.log.gz

(String) Path to vzstorage client log.

This option defines the log of cluster operations, it should include “%(cluster_name)s” template to separate logs from multiple shares.

  • Services that use this:
nova-compute
  • Related options:
vzstorage_mount_opts may include more detailed logging options.
vzstorage_mount_group = qemu

(String) Mount owner group name.

This option defines the owner group of Vzstorage cluster mountpoint.

  • Services that use this:
nova-compute
  • Related options:
vzstorage_mount_* group of parameters
vzstorage_mount_opts =

(List) Extra mount options for pstorage-mount

For a full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html The format is a Python string representation of an argument list, like: “[‘-v’, ‘-R’, ‘500’]” It shouldn’t include -c, -l, -C, -u, -g and -m, as those have explicit vzstorage_* options.

  • Services that use this:
nova-compute
  • Related options:
All other vzstorage_* options
vzstorage_mount_perms = 0770

(String) Mount access mode.

This option defines the access bits of Vzstorage cluster mountpoint, in the format similar to one of chmod(1) utility, like this: 0770. It consists of one to four digits ranging from 0 to 7, with missing lead digits assumed to be 0’s.

  • Services that use this:
nova-compute
  • Related options:
vzstorage_mount_* group of parameters
vzstorage_mount_point_base = $state_path/mnt

(String) Directory where the Virtuozzo Storage clusters are mounted on the compute node.

This option defines non-standard mountpoint for Vzstorage cluster.

  • Services that use this:
nova-compute
  • Related options:
vzstorage_mount_* group of parameters
vzstorage_mount_user = stack

(String) Mount owner user name.

This option defines the owner user of Vzstorage cluster mountpoint.

  • Services that use this:
nova-compute
  • Related options:
vzstorage_mount_* group of parameters
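Pulling the vzstorage_mount_* group together, a sketch of a Virtuozzo Storage mount configuration. The cache path is an assumption for illustration; as required above, it must contain the %(cluster_name)s template:

```ini
[libvirt]
vzstorage_mount_point_base = $state_path/mnt
vzstorage_mount_user = stack
vzstorage_mount_group = qemu
vzstorage_mount_perms = 0770
vzstorage_cache_path = /mnt/ssd/vstorage-cache/%(cluster_name)s.cache
```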
wait_soft_reboot_seconds = 120 (Integer) Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window.
Description of live migration configuration options
Configuration option = Default value Description
[DEFAULT]  
live_migration_retry_count = 30

(Integer) Maximum number of 1-second retries in live_migration. It specifies the number of retries against iptables when it complains, which happens when a user continuously sends live-migration requests to the same host, leading to concurrent requests to iptables.

Possible values:

  • Any positive integer representing retry count.
max_concurrent_live_migrations = 1

(Integer) Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment.

Possible values:

  • 0 : treated as unlimited.
  • Negative value defaults to 0.
  • Any positive integer representing maximum number of live migrations to run concurrently.
[libvirt]  
live_migration_bandwidth = 0 (Integer) Maximum bandwidth(in MiB/s) to be used during migration. If set to 0, will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details
live_migration_completion_timeout = 800 (Integer) Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts. Mutable: this option can be changed without restarting.
live_migration_downtime = 500 (Integer) Maximum permitted downtime, in milliseconds, for live migration switchover. Will be rounded up to a minimum of 100ms. Use a large value if guest liveness is unimportant.
live_migration_downtime_delay = 75 (Integer) Time to wait, in seconds, between each step increase of the migration downtime. Minimum delay is 10 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device
live_migration_downtime_steps = 10 (Integer) Number of incremental steps to reach max downtime value. Will be rounded up to a minimum of 3 steps
live_migration_inbound_addr = None (String) Live migration target ip or hostname (if this option is set to None, which is the default, the hostname of the migration target compute node will be used)
live_migration_permit_auto_converge = False

(Boolean) This option allows nova to start live migration with auto converge on. Auto converge throttles down CPU if a progress of on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use. Auto converge requires libvirt>=1.2.3 and QEMU>=1.6.0.

Related options:

  • live_migration_permit_post_copy
live_migration_permit_post_copy = False

(Boolean) This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0.

When permitted, post-copy mode will be automatically activated if a live-migration memory copy iteration does not make at least a 10% improvement over the last iteration.

The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete.

When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide.

Related options:

  • live_migration_permit_auto_converge
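A sketch permitting both throttling strategies. Per the descriptions above, when both are allowed post-copy is used and auto converge is only a fallback when post-copy is unavailable; note the libvirt/QEMU version requirements:

```ini
[libvirt]
# Requires libvirt>=1.2.3 and QEMU>=1.6.0
live_migration_permit_auto_converge = True
# Requires libvirt>=1.3.3 and QEMU>=2.5.0
live_migration_permit_post_copy = True
```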
live_migration_progress_timeout = 150 (Integer) Time to wait, in seconds, for migration to make forward progress in transferring data before aborting the operation. Set to 0 to disable timeouts. Mutable: this option can be changed without restarting.
live_migration_tunnelled = False (Boolean) Whether to use tunnelled migration, where migration data is transported over the libvirtd connection. If True, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example the availability of native encryption support in the hypervisor.
live_migration_uri = None (String) Override the default libvirt live migration target URI (which is dependent on virt_type) (any included “%s” is replaced with the migration target hostname)
Description of metadata configuration options
Configuration option = Default value Description
[DEFAULT]  
metadata_cache_expiration = 15 (Integer) This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect.
metadata_host = $my_ip

(String) This option determines the IP address for the network metadata API server.

Possible values:

  • Any valid IP address. The default is the address of the Nova API server.

Related options:

  • metadata_port
metadata_listen = 0.0.0.0 (String) The IP address on which the metadata API will listen.
metadata_listen_port = 8775 (Port number) The port on which the metadata API will listen.
metadata_manager = nova.api.manager.MetadataManager (String) DEPRECATED: OpenStack metadata service manager
metadata_port = 8775

(Port number) This option determines the port used for the metadata API server.

Related options:

  • metadata_host
metadata_workers = None (Integer) Number of workers for metadata service. The default will be the number of CPUs available.
vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData

(String) DEPRECATED: When returning instance metadata, this is the class that is used for getting vendor metadata when that class isn’t specified in the individual request. The value should be the full dot-separated path to the class to use.

Possible values:

  • Any valid dot-separated class path that can be imported.
vendordata_dynamic_connect_timeout = 5

(Integer) Maximum wait time for an external REST service to connect.

Possible values:

  • Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small.

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_read_timeout
vendordata_dynamic_read_timeout = 5

(Integer) Maximum wait time for an external REST service to return data once connected.

Possible values:

  • Any integer. Note that instance start is blocked during this wait time, so this value should be kept small.

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
vendordata_dynamic_ssl_certfile =

(String) Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services SSL certificates against.

Possible values:

  • An empty string, or a path to a valid certificate file

Related options:

  • vendordata_providers
  • vendordata_dynamic_targets
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout
vendordata_dynamic_targets =

(List) A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>.

The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference.

vendordata_jsonfile_path = None

(String) Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary.

Possible values:

  • Any string representing the path to the data file, or an empty string (default).
vendordata_providers =

(List) A list of vendordata providers.

vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment. There are currently two supported providers: StaticJSON and DynamicJSON.

StaticJSON reads a JSON file configured by the flag vendordata_jsonfile_path and places the JSON from that file into vendor_data.json and vendor_data2.json.

DynamicJSON is configured via the vendordata_dynamic_targets flag, which is documented separately. For each of the endpoints specified in that flag, a section is added to the vendor_data2.json.

For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference.

Possible values:

  • A list of vendordata providers, with StaticJSON and DynamicJSON being current options.

Related options:

  • vendordata_dynamic_targets
  • vendordata_dynamic_ssl_certfile
  • vendordata_dynamic_connect_timeout
  • vendordata_dynamic_read_timeout
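A sketch combining both providers described above. The file path and target URL are placeholders; dynamic targets use the <name>@<url> format documented for vendordata_dynamic_targets:

```ini
[DEFAULT]
vendordata_providers = StaticJSON, DynamicJSON
# StaticJSON: file contents end up in vendor_data.json and vendor_data2.json
vendordata_jsonfile_path = /etc/nova/vendor_data.json
# DynamicJSON: each <name>@<url> target adds a section to vendor_data2.json
vendordata_dynamic_targets = testing@http://127.0.0.1:8125/vendordata
vendordata_dynamic_connect_timeout = 5
vendordata_dynamic_read_timeout = 5
```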
Description of network configuration options
Configuration option = Default value Description
[DEFAULT]  
allow_same_net_traffic = True

(Boolean) Determine whether to allow network traffic from same network.

When set to true, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows all instances from all projects unfiltered communication. With VLAN networking, this allows access between instances within the same project.

This option only applies when using the nova-network service. When using other networking services, such as Neutron, security groups or other approaches should be used.

Possible values:

  • True: Network traffic is allowed to pass between all instances on the same network, regardless of their tenant and security policies
  • False: Network traffic is not allowed to pass between instances unless it is unblocked by a security group

Interdependencies to other options:

  • use_neutron: This must be set to False to enable nova-network networking
  • firewall_driver: This must be set to nova.virt.libvirt.firewall.IptablesFirewallDriver to ensure the libvirt firewall driver is enabled.
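As a sketch, a nova-network deployment that permits unfiltered same-subnet traffic would combine these options as follows (values illustrative):

```ini
[DEFAULT]
# nova-network requires Neutron to be disabled
use_neutron = False
# The libvirt iptables firewall driver must be in effect
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# Do not filter traffic between hosts on the same subnet
allow_same_net_traffic = True
```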
auto_assign_floating_ip = False

(Boolean) Automatically assign a floating IP to each VM.

When set to True, a floating IP is automatically allocated and associated with the VM upon creation.

cnt_vpn_clients = 0

(Integer) This option represents the number of IP addresses to reserve at the top of the address range for VPN clients. It is ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

Any integer, 0 or greater. The default is 0.

Related options:

use_neutron, network_manager
create_unique_mac_address_attempts = 5

(Integer) This option determines how many times nova-network will attempt to create a unique MAC address before giving up and raising a VirtualInterfaceMacAddressException error.

Possible values:

Any positive integer. The default is 5.

Related options:

use_neutron
default_access_ip_network_name = None

(String) Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen.

Possible values:

  • None (default)
  • Any string representing network name.
default_floating_pool = nova

(String) Default pool for floating IPs.

This option specifies the default floating IP pool for allocating floating IPs.

While allocating a floating IP, users can optionally pass in the name of the pool they want to allocate it from; otherwise it is pulled from the default pool.

If this option is not set, then ‘nova’ is used as the default floating pool.

Possible values:

  • Any string representing a floating IP pool name
defer_iptables_apply = False (Boolean) Whether to batch up the application of IPTables rules during a host restart and apply all at the end of the init phase.
dhcp_domain = novalocal

(String) This option allows you to specify the domain for the DHCP server.

Possible values:

Any string that is a valid domain name.

Related options:

use_neutron
dhcp_lease_time = 86400

(Integer) The lifetime of a DHCP lease, in seconds. The default is 86400 (one day).

Possible values:

Any positive integer value.
dhcpbridge = $bindir/nova-dhcpbridge

(String) The location of the binary nova-dhcpbridge. By default it is the binary named ‘nova-dhcpbridge’ that is installed with all the other nova binaries.

Possible values:

Any string representing the full path to the binary for dhcpbridge
dhcpbridge_flagfile = ['/etc/nova/nova-dhcpbridge.conf']

(Multi-valued) This option is a list of full paths to one or more configuration files for dhcpbridge. In most cases the default path of ‘/etc/nova/nova-dhcpbridge.conf’ should be sufficient, but if you have special needs for configuring dhcpbridge, you can change or add to this list.

Possible values:

A list of strings, where each string is the full path to a dhcpbridge configuration file.
dns_server = []

(Multi-valued) Despite the singular form of the name of this option, it is actually a list of zero or more server addresses that dnsmasq will use for DNS nameservers. If this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use the servers specified in this option. If the option use_network_dns_servers is True, the dns1 and dns2 servers from the network will be appended to this list, and will be used as DNS servers, too.

Possible values:

A list of strings, where each string is either an IP address or a FQDN.

Related options:

use_network_dns_servers
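Because dns_server is multi-valued, it is repeated once per server rather than written as a comma-separated list. A sketch, with illustrative addresses:

```ini
[DEFAULT]
# Each dns_server line adds one nameserver for dnsmasq
dns_server = 8.8.8.8
dns_server = 8.8.4.4
# Also append the network's own dns1/dns2 entries
use_network_dns_servers = True
```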
dns_update_periodic_interval = -1

(Integer) This option determines the time, in seconds, to wait between refreshing DNS entries for the network.

Possible values:

Either -1 (default), or any positive integer. A negative value will disable the updates.

Related options:

use_neutron
dnsmasq_config_file =

(String) The path to the custom dnsmasq configuration file, if any.

Possible values:

The full path to the configuration file, or an empty string if there is no custom dnsmasq configuration file.
ebtables_exec_attempts = 3

(Integer) This option determines the number of times to retry ebtables commands before giving up. The minimum number of retries is 1.

Possible values:

  • Any positive integer

Related options:

  • ebtables_retry_interval
ebtables_retry_interval = 1.0

(Floating point) This option determines the time, in seconds, that the system will sleep in between ebtables retries. Note that each successive retry waits a multiple of this value, so for example, if this is set to the default of 1.0 seconds, and ebtables_exec_attempts is 4, after the first failure, the system will sleep for 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0 seconds, and after the third failure it will sleep 3 * 1.0 seconds.

Possible values:

  • Any non-negative float or integer. Setting this to zero will result in no waiting between attempts.

Related options:

  • ebtables_exec_attempts
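These two options work as a pair. A sketch of the worked example above, with the back-off arithmetic spelled out in comments:

```ini
[DEFAULT]
# 4 attempts; after failures 1-3 the system sleeps 1 * 1.0 s, 2 * 1.0 s,
# and 3 * 1.0 s respectively, i.e. up to 6.0 s of waiting in total
ebtables_exec_attempts = 4
ebtables_retry_interval = 1.0
```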
firewall_driver = None

(String) Firewall driver to use with nova-network service.

This option only applies when using the nova-network service. When using other networking services, such as Neutron, this should be set to nova.virt.firewall.NoopFirewallDriver.

If unset (the default), this will default to the hypervisor-specified default driver.

Possible values:

  • nova.virt.firewall.IptablesFirewallDriver
  • nova.virt.firewall.NoopFirewallDriver
  • nova.virt.libvirt.firewall.IptablesFirewallDriver
  • [...]

Interdependencies to other options:

  • use_neutron: This must be set to False to enable nova-network networking
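Conversely, a deployment that uses Neutron for networking would disable nova's firewall entirely; a minimal sketch:

```ini
[DEFAULT]
use_neutron = True
# Neutron security groups handle filtering, so nova's firewall is a no-op
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```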
fixed_ip_disassociate_timeout = 600

(Integer) This is the number of seconds to wait before disassociating a deallocated fixed IP address. This is only used with the nova-network service, and has no effect when using neutron for networking.

Possible values:

Any integer, zero or greater. The default is 600 (10 minutes).

Related options:

use_neutron
flat_injected = False (Boolean) This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware and xenapi virt drivers to control whether network information is injected into a VM.
flat_interface = None

(String) This option is the name of the virtual interface of the VM on which the bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt for the bridge interface name.

Possible values:

Any valid virtual interface name, such as ‘eth0’
flat_network_bridge = None

(String) This option determines the bridge used for simple network interfaces when no bridge is specified in the VM creation request.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

Any string representing a valid network bridge, such as ‘br100’

Related options:

use_neutron
flat_network_dns = 8.8.4.4

(String) This is the address of the DNS server for a simple network. If this option is not specified, the default of ‘8.8.4.4’ is used.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

Any valid IP address.

Related options:

use_neutron
floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver

(String) Full class name for the DNS Manager for floating IPs.

This option specifies the class of the driver that provides functionality to manage DNS entries associated with floating IPs.

When a user adds a DNS entry for a specified domain to a floating IP, nova will add a DNS entry using the specified floating DNS driver. When a floating IP is deallocated, its DNS entry will automatically be deleted.

Possible values:

  • Full Python path to the class to be used
force_dhcp_release = True

(Boolean) When this option is True, a call is made to release the DHCP lease for the instance when that instance is terminated.

Related options:

use_neutron
force_snat_range = []

(Multi-valued) This is a list of zero or more IP ranges that traffic from the routing_source_ip will be SNATted to. If the list is empty, then no SNAT rules are created.

Possible values:

A list of strings, each of which should be a valid CIDR.

Related options:

routing_source_ip
forward_bridge_interface = ['all']

(Multi-valued) One or more interfaces that bridges can forward traffic to. If any of the items in this list is the special keyword ‘all’, then all traffic will be forwarded.

Possible values:

A list of zero or more interface names, or the word ‘all’.
gateway = None

(String) This is the default IPv4 gateway. It is used only in the testing suite.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

Any valid IP address.

Related options:

use_neutron, gateway_v6
injected_network_template = $pybasedir/nova/virt/interfaces.template (String) Template file for injected network
instance_dns_domain = (String) If specified, Nova checks if the availability_zone of every instance matches what the database says the availability_zone should be for the specified dns_domain.
instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver

(String) Full class name for the DNS Manager for instance IPs.

This option specifies the class of the driver that provides functionality to manage DNS entries for instances.

On instance creation, nova will add DNS entries for the instance name and id, using the specified instance DNS driver and domain. On instance deletion, nova will remove the DNS entries.

Possible values:

  • Full Python path to the class to be used
iptables_bottom_regex =

(String) This expression, if defined, will select any matching iptables rules and place them at the bottom when applying metadata changes to the rules.

Possible values:

  • Any string representing a valid regular expression, or an empty string

Related options:

  • iptables_top_regex
iptables_drop_action = DROP

(String) By default, packets that do not pass the firewall are DROPped. In many cases, though, an operator may find it more useful to change this from DROP to REJECT, so that the user issuing those packets may have a better idea as to what’s going on, or LOGDROP in order to record the blocked traffic before DROPping.

Possible values:

  • A string representing an iptables chain. The default is DROP.
iptables_top_regex =

(String) This expression, if defined, will select any matching iptables rules and place them at the top when applying metadata changes to the rules.

Possible values:

  • Any string representing a valid regular expression, or an empty string

Related options:

  • iptables_bottom_regex
l3_lib = nova.network.l3.LinuxNetL3

(String) This option allows you to specify the L3 management library to be used.

Possible values:

Any dot-separated string that represents the import path to an L3 networking library.

Related options:

use_neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver

(String) This is the class used as the ethernet device driver for linuxnet bridge operations. The default value should be all you need for most cases, but if you wish to use a customized class, set this option to the full dot-separated import path for that class.

Possible values:

Any string representing a dot-separated class path that Nova can import.
linuxnet_ovs_integration_bridge = br-int

(String) The name of the Open vSwitch bridge that is used with linuxnet when connecting with Open vSwitch.

Possible values:

Any string representing a valid bridge name.
multi_host = False (Boolean) Default value for multi_host in networks. Also, if set, some rpc network calls will be sent directly to host.
network_allocate_retries = 0

(Integer) The number of times to retry network allocation. Retries are attempted if plugging the virtual interface fails.

Possible values:

  • Zero (no retries, the default) or any positive integer representing the retry count.
network_driver = nova.network.linux_net (String) Driver to use for network creation
network_manager = nova.network.manager.VlanManager (String) Full class name for the Manager for network
network_size = 256

(Integer) This option determines the number of addresses in each private subnet.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

Any positive integer that is less than or equal to the available network size. Note that if you are creating multiple networks, they must all fit in the available IP address space. The default is 256.

Related options:

use_neutron, num_networks
network_topic = network (String) The topic network nodes listen on
networks_path = $state_path/networks

(String) The location where the network configuration files will be kept. The default is the ‘networks’ directory off of the location where nova’s Python module is installed.

Possible values:

A string containing the full path to the desired configuration directory
num_networks = 1

(Integer) This option represents the number of networks to create if not explicitly specified when the network is created. The only time this is used is if a CIDR is specified, but an explicit network_size is not. In that case, the subnets are created by dividing the IP address space of the CIDR by num_networks. The resulting subnet sizes cannot be larger than the configuration option network_size; in that event, they are reduced to network_size, and a warning is logged.

Please note that this option is only used when using nova-network instead of Neutron in your deployment.

Possible values:

Any positive integer is technically valid, although there are practical limits based upon available IP address space and virtual interfaces. The default is 1.

Related options:

use_neutron, network_size
ovs_vsctl_timeout = 120

(Integer) This option represents the period of time, in seconds, that the ovs_vsctl calls will wait for a response from the database before timing out. A setting of 0 means that the utility should wait forever for a response.

Possible values:

  • Any positive integer if a limited timeout is desired, or zero if the calls should wait forever for a response.
public_interface = eth0

(String) This is the name of the network interface for public IP addresses. The default is ‘eth0’.

Possible values:

Any string representing a network interface name
routing_source_ip = $my_ip

(String) This is the public IP address of the network host. It is used when creating a SNAT rule.

Possible values:

Any valid IP address

Related options:

force_snat_range
send_arp_for_ha = False

(Boolean) When True, when a device starts up, and upon binding floating IP addresses, arp messages will be sent to ensure that the arp caches on the compute hosts are up-to-date.

Related options:

send_arp_for_ha_count
send_arp_for_ha_count = 3

(Integer) When arp messages are configured to be sent, they will be sent with the count set to the value of this option. Of course, if this is set to zero, no arp messages will be sent.

Possible values:

Any integer greater than or equal to 0

Related options:

send_arp_for_ha
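The two options act as a pair; a minimal sketch that sends gratuitous ARP messages when floating IPs are bound:

```ini
[DEFAULT]
# Send ARP messages on device start-up and floating IP association
send_arp_for_ha = True
# Each ARP message is sent with a count of 3 (the default)
send_arp_for_ha_count = 3
```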
share_dhcp_address = False

(Boolean) DEPRECATED: THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK.

If True in multi_host mode, all compute hosts share the same dhcp address. The same IP address used for DHCP will be added on each nova-network node which is only visible to the VMs on the same host.

The use of this configuration has been deprecated and may be removed in any release after Mitaka. It is recommended that instead of relying on this option, an explicit value should be passed to ‘create_networks()’ as a keyword argument with the name ‘share_address’.

teardown_unused_network_gateway = False

(Boolean) Determines whether unused gateway devices, both VLAN and bridge, are deleted if the network is in nova-network VLAN mode and is multi-hosted.

Related options:

use_neutron, vpn_ip, fake_network
update_dns_entries = False

(Boolean) When this option is True, whenever a DNS entry must be updated, a fanout cast message is sent to all network hosts to update their DNS entries in multi-host mode.

Related options:

use_neutron
use_network_dns_servers = False

(Boolean) When this option is set to True, the dns1 and dns2 servers for the network specified by the user on boot will be used for DNS, as well as any specified in the dns_server option.

Related options:

dns_server
use_neutron = False (Boolean) Whether to use Neutron or Nova Network as the back end for networking. Defaults to False (indicating Nova network). Set to True to use Neutron.
use_neutron_default_nets = False

(Boolean) When True, the TenantNetworkController will query the Neutron API to get the default networks to use.

Related options:

  • neutron_default_tenant_id
use_single_default_gateway = False (Boolean) When set to True, only the first NIC of a VM will get its default gateway from the DHCP server.
vlan_interface = None

(String) This option is the name of the virtual interface of the VM on which the VLAN bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt and xenapi for the bridge interface name.

Please note that this setting will be ignored in nova-network if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

Any valid virtual interface name, such as ‘eth0’
vlan_start = 100

(Integer) This is the VLAN number used for private networks. Note that when creating the networks, if the specified number has already been assigned, nova-network will increment this number until it finds an available VLAN.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

Any integer between 1 and 4094. Values outside of that range will raise a ValueError exception. Default = 100.

Related options:

network_manager, use_neutron
[libvirt]  
remote_filesystem_transport = ssh (String) Use ssh or rsync transport for creating, copying, removing files on the remote host.
[os_vif_linux_bridge]  
flat_interface = None (String) FlatDhcp will bridge into this interface if set
forward_bridge_interface = ['all'] (Multi-valued) An interface that bridges can forward to. If this is set to all then all traffic will be forwarded. Can be specified multiple times.
iptables_bottom_regex = (String) Regular expression to match the iptables rule that should always be on the bottom.
iptables_drop_action = DROP (String) The iptables chain to jump to when a packet is to be dropped.
iptables_top_regex = (String) Regular expression to match the iptables rule that should always be on the top.
network_device_mtu = 1500 (Integer) MTU setting for network interface.
use_ipv6 = False (Boolean) Use IPv6
vlan_interface = None (String) VLANs will bridge into this interface if set
[os_vif_ovs]  
network_device_mtu = 1500 (Integer) MTU setting for network interface.
ovs_vsctl_timeout = 120 (Integer) Amount of time, in seconds, that ovs_vsctl should wait for a response from the database. 0 is to wait forever.
[vif_plug_linux_bridge_privileged]  
capabilities = [] (Unknown) List of Linux capabilities retained by the privsep daemon.
group = None (String) Group that the privsep daemon should run as.
helper_command = None (String) Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments.
user = None (String) User that the privsep daemon should run as.
[vif_plug_ovs_privileged]  
capabilities = [] (Unknown) List of Linux capabilities retained by the privsep daemon.
group = None (String) Group that the privsep daemon should run as.
helper_command = None (String) Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments.
user = None (String) User that the privsep daemon should run as.
[vmware]  
vlan_interface = vmnic0

(String) This option specifies the physical ethernet adapter name for VLAN networking.

Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic.

Possible values:

  • Any valid string representing VLAN interface name
Description of neutron configuration options
Configuration option = Default value Description
[DEFAULT]  
neutron_default_tenant_id = default

(String) Tenant ID (also referred to in some places as the ‘project ID’) used for getting the default network from the Neutron API.

Related options:

  • use_neutron_default_nets
[neutron]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
extension_sync_interval = 600 (Integer) Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds the next time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting value to 0 will refresh the extensions with no wait.
insecure = False (Boolean) If True, skip certificate verification for HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
metadata_proxy_shared_secret =

(String) This option holds the shared secret string used to validate proxied Neutron metadata requests. In order to be used, the ‘X-Metadata-Provider-Signature’ header must be supplied in the request.

Related options:

  • service_metadata_proxy
ovs_bridge = br-int

(String) Specifies the name of an integration bridge interface used by OpenvSwitch. This option is used only if Neutron does not specify the OVS bridge name.

Possible values:

  • Any string representing OVS bridge name.
region_name = RegionOne

(String) Region name for connecting to Neutron in admin context.

This option is used in multi-region setups. If there are two Neutron servers running in two regions in two different machines, then two services need to be created in Keystone with two different regions and associate corresponding endpoints to those services. When requests are made to Keystone, the Keystone service uses the region_name to determine the region the request is coming from.

service_metadata_proxy = False

(Boolean) When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the ‘X-Instance-ID’ header.

Related options:

  • metadata_proxy_shared_secret
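For instance, enabling the metadata proxy requires both options; the secret shown is a placeholder and must match the value configured for Neutron's metadata agent:

```ini
[neutron]
# Let Neutron proxy metadata requests and resolve instance IDs
service_metadata_proxy = True
# Placeholder; must match the secret configured on the Neutron side
metadata_proxy_shared_secret = SHARED_SECRET_PLACEHOLDER
```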
timeout = None (Integer) Timeout value for http requests
url = http://127.0.0.1:9696

(URI) This option specifies the URL for connecting to Neutron.

Possible values:

  • Any valid URL that points to the Neutron API service is appropriate here. This typically matches the URL returned for the ‘network’ service type from the Keystone service catalog.
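Pulling the [neutron] connection options together, a minimal sketch (the controller hostname and timeout value are illustrative):

```ini
[neutron]
# Neutron API endpoint, typically the 'network' entry in the Keystone catalog
url = http://controller:9696
region_name = RegionOne
# Fail HTTP requests to Neutron after 30 seconds instead of waiting forever
timeout = 30
```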
Description of osbrick configuration options
Configuration option = Default value Description
[privsep_osbrick]  
capabilities = [] (Unknown) List of Linux capabilities retained by the privsep daemon.
group = None (String) Group that the privsep daemon should run as.
helper_command = None (String) Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments.
user = None (String) User that the privsep daemon should run as.
Description of PCI configuration options
Configuration option = Default value Description
[DEFAULT]  
pci_alias = []

(Multi-valued) An alias for a PCI passthrough device requirement.

This allows users to specify the alias in the extra_spec for a flavor, without needing to repeat all the PCI property requirements.

Possible values:

  • A list of JSON values which describe the aliases. For example:

pci_alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI" }

defines an alias for the Intel QuickAssist card (multi valued). Valid key values are:

  • "name": Name of the PCI alias.
  • "product_id": Product ID of the device in hexadecimal.
  • "vendor_id": Vendor ID of the device in hexadecimal.
  • "device_type": Type of PCI device. Valid values are: "type-PCI", "type-PF" and "type-VF".
pci_passthrough_whitelist = []

(Multi-valued) White list of PCI devices available to VMs.

Possible values:

  • A JSON dictionary which describes a whitelisted PCI device. It should take the following format:

["vendor_id": "<id>",] ["product_id": "<id>",] ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" | "devname": "<name>",] {"<tag>": "<tag_value>",}

Where '[' indicates zero or one occurrences, '{' indicates zero or multiple occurrences, and '|' mutually exclusive options. Note that any missing fields are automatically wildcarded.

Valid key values are:

  • "vendor_id": Vendor ID of the device in hexadecimal.
  • "product_id": Product ID of the device in hexadecimal.
  • "address": PCI address of the device.
  • "devname": Device name of the device (e.g. interface name). Not all PCI devices have a name.
  • "<tag>": Additional <tag> and <tag_value> used for matching PCI devices. Supported <tag>: "physical_network".

Valid examples are:

pci_passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet"}
pci_passthrough_whitelist = {"address":"*:0a:00.*"}
pci_passthrough_whitelist = {"address":"*:0a:00.*", "physical_network":"physnet1"}
pci_passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071"}
pci_passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"}

The following is invalid, as it specifies mutually exclusive options:

pci_passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"}

  • A JSON list of JSON dictionaries corresponding to the above format. For example:

pci_passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}]
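Putting the two options together: a whitelist entry must expose a device that an alias can then request by name. A sketch reusing the QuickAssist values from the alias example above:

```ini
[DEFAULT]
# Expose matching devices on this compute node to passthrough
pci_passthrough_whitelist = { "vendor_id": "8086", "product_id": "0443" }
# Let flavors request such a device by alias name
pci_alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI" }
```

A flavor would then request one such device via its extra specs, using the pci_passthrough:alias key with a value such as QuickAssist:1.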
Description of periodic configuration options
Configuration option = Default value Description
[DEFAULT]  
periodic_enable = True (Boolean) Enable periodic tasks
periodic_fuzzy_delay = 60 (Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
Description of policy configuration options
Configuration option = Default value Description
[DEFAULT]  
allow_instance_snapshots = True (Boolean) Operators can turn off the ability for a user to take snapshots of their instances by setting this option to False. When disabled, any attempt to take a snapshot will result in an HTTP 400 response (“Bad Request”).
allow_resize_to_same_host = False (Boolean) Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options.
max_age = 0

(Integer) The number of seconds between subsequent usage refreshes. This defaults to 0 (off) to avoid additional load, but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out-of-sync usage issues. Note that quotas are not updated on a periodic task; they will update on a new reservation if max_age has passed since the last reservation.

Possible values:

  • 0 (default) or any positive integer representing number of seconds.
max_local_block_devices = 3

(Integer) Maximum number of devices that will result in a local image being created on the hypervisor node.

A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local disks (so the root local disk that is the result of --image being used, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and instances booted from volumes - it just means that all requests that attempt to create a local disk will fail.

Possible values:

  • 0: Creating a local disk is not allowed.
  • Negative number: Allows an unlimited number of local disks.
  • Positive number: Allows at most this many local disks. (Default value is 3).
osapi_compute_unique_server_name_scope =

(String) Sets the scope of the check for unique instance names.

The default doesn’t check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an ‘InstanceExists’ error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don’t have to distinguish among instances with the same name by their IDs.

Possible values:

  • ‘’: An empty value means that no uniqueness check is done and duplicate names are possible.
  • “project”: The instance name check is done only for instances within the same project.
  • “global”: The instance name check is done for all instances regardless of the project.
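For example, to reject duplicate instance names within a project (the value is taken from the list above):

```ini
[DEFAULT]
# Launching a second instance with the same name in the same project
# fails with an InstanceExists error
osapi_compute_unique_server_name_scope = project
```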
osapi_max_limit = 1000 (Integer) As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option.
password_length = 12 (Integer) Length of generated instance admin passwords.
reservation_expire = 86400

(Integer) The number of seconds until a reservation expires. It represents the time period for invalidating quota reservations.

Possible values:

  • 86400 (default) or any positive integer representing number of seconds.
resize_fs_using_block_device = False (Boolean) If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).
until_refresh = 0

(Integer) The count of reservations until usage is refreshed. This defaults to 0 (off) to avoid additional load but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out of sync usage issues.

Possible values:

  • 0 (default) or any positive integer.
Description of Quobyte USP volume driver configuration options
Configuration option = Default value Description
[libvirt]  
quobyte_client_cfg = None (String) Path to a Quobyte Client configuration file.
quobyte_mount_point_base = $state_path/mnt (String) Directory where the Quobyte volume is mounted on the compute node
Description of quota configuration options
Configuration option = Default value Description
[DEFAULT]  
bandwidth_poll_interval = 600 (Integer) Interval to poll network bandwidth usage info. Not supported on all hypervisors. Set to -1 to disable. Setting this to 0 will run at the default rate.
enable_network_quota = False

(Boolean) DEPRECATED: This option is used to enable or disable quota checking for tenant networks.

Related options:

  • quota_networks: CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated.
quota_cores = 20

(Integer) The number of instance cores or VCPUs allowed per project.

Possible values:

  • 20 (default) or any positive integer.
  • -1: treated as unlimited.
quota_driver = nova.quota.DbQuotaDriver

(String) DEPRECATED: Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks.

Possible values:

  • nova.quota.DbQuotaDriver (default) or any string representing a fully qualified class name.
quota_fixed_ips = -1

(Integer) The number of fixed IPs allowed per project (this should be at least the number of instances allowed). Unlike floating IPs, fixed IPs are allocated dynamically by the network component when instances boot up.

Possible values:

  • -1 (default): treated as unlimited.
  • Any positive integer.
quota_floating_ips = 10

(Integer) The number of floating IPs allowed per project. Floating IPs are not allocated to instances by default. Users need to select them from the pool configured by the OpenStack administrator to attach to their instances.

Possible values:

  • 10 (default) or any positive integer.
  • -1: treated as unlimited.
quota_injected_file_content_bytes = 10240

(Integer) The number of bytes allowed per injected file.

Possible values:

  • 10240 (default) or any positive integer representing a number of bytes.
  • -1: treated as unlimited.
quota_injected_file_path_length = 255

(Integer) The maximum allowed injected file path length.

Possible values:

  • 255 (default) or any positive integer.
  • -1: treated as unlimited.
quota_injected_files = 5

(Integer) The number of injected files allowed. It allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files won’t work. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp.

Possible values:

  • 5 (default) or any positive integer.
  • -1: treated as unlimited.
quota_instances = 10

(Integer) The number of instances allowed per project.

Possible values:

  • 10 (default) or any positive integer.
  • -1: treated as unlimited.
quota_key_pairs = 100

(Integer) The maximum number of key pairs allowed per user. Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project.

Possible values:

  • 100 (default) or any positive integer.
  • -1: treated as unlimited.
quota_metadata_items = 128

(Integer) The number of metadata items allowed per instance. Users can associate metadata with an instance during instance creation, in the form of key-value pairs.

Possible values:

  • 128 (default) or any positive integer.
  • -1: treated as unlimited.
quota_networks = 3

(Integer) DEPRECATED: This option controls the number of private networks that can be created per project (or per tenant).

Related options:

  • enable_network_quota: CRUD operations on tenant networks are only available when using nova-network and nova-network is itself deprecated.
quota_ram = 51200

(Integer) The number of megabytes of instance RAM allowed per project.

Possible values:

  • 51200 (default) or any positive integer.
  • -1: treated as unlimited.
quota_security_group_rules = 20

(Integer) The number of security rules per security group. The associated rules in each security group control the traffic to instances in the group.

Possible values:

  • 20 (default) or any positive integer.
  • -1: treated as unlimited.
quota_security_groups = 10

(Integer) The number of security groups per project.

Possible values:

  • 10 (default) or any positive integer.
  • -1: treated as unlimited.
quota_server_group_members = 10

(Integer) The maximum number of servers allowed per server group.

Possible values:

  • 10 (default) or any positive integer.
  • -1: treated as unlimited.
quota_server_groups = 10

(Integer) The maximum number of server groups allowed per project. Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota.

Possible values:

  • 10 (default) or any positive integer.
  • -1: treated as unlimited.
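Quota options live in the [DEFAULT] section of the Compute configuration file (typically nova.conf). As a minimal sketch, a deployment that wants larger per-project limits might override a few of the options above (all values below are illustrative, not recommendations):

```ini
[DEFAULT]
# Illustrative per-project quota overrides; defaults from the
# table above apply to any option that is omitted.
quota_instances = 20
quota_cores = 40
quota_ram = 102400
# -1 is treated as unlimited.
quota_floating_ips = -1
```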
[cells]  
bandwidth_update_interval = 600

(Integer) Bandwidth update interval

Seconds between bandwidth usage cache updates for cells.

Possible values:

  • Time in seconds.
Description of RDP configuration options
Configuration option = Default value Description
[rdp]  
enabled = False

(Boolean) Enable Remote Desktop Protocol (RDP) related features.

Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V.

Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform.

Related options:

  • compute_driver: Must be hyperv.
html5_proxy_base_url = http://127.0.0.1:6083/

(String) The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance.

An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single-node environment, for example DevStack.

An RDP HTML5 proxy allows a user to access, through a web browser, the text or graphical console of any Windows server or workstation using RDP. RDP HTML5 console proxy services include FreeRDP and wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect

Possible values:

  • <scheme>://<ip-address>:<port-number>/

The scheme must be identical to the scheme configured for the RDP HTML5 console proxy service.

The IP address must be identical to the address on which the RDP HTML5 console proxy service is listening.

The port must be identical to the port on which the RDP HTML5 console proxy service is listening.

Related options:

  • rdp.enabled: Must be set to True for html5_proxy_base_url to be effective.
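Putting the two options above together, a Hyper-V compute node might be configured as follows (a sketch; the proxy hostname is an example value, not a real deployment address):

```ini
[rdp]
# Enable RDP console access; only meaningful with compute_driver = hyperv.
enabled = True
# Public URL of the RDP HTML5 console proxy (example hostname).
html5_proxy_base_url = http://controller.example.com:6083/
```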
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of S3 configuration options
Configuration option = Default value Description
[DEFAULT]  
image_decryption_dir = /tmp (String) DEPRECATED: Parent directory for tempdir used for image decryption. EC2 API related options are not supported.
s3_access_key = notchecked (String) DEPRECATED: Access key to use S3 server for images. EC2 API related options are not supported.
s3_affix_tenant = False (Boolean) DEPRECATED: Whether to affix the tenant id to the access key when downloading from S3. EC2 API related options are not supported.
s3_host = $my_ip (String) DEPRECATED: Hostname or IP for OpenStack to use when accessing the S3 API. EC2 API related options are not supported.
s3_port = 3333 (Port number) DEPRECATED: Port used when accessing the S3 API. It should be in the range of 1 - 65535. EC2 API related options are not supported.
s3_secret_key = notchecked (String) DEPRECATED: Secret key to use for S3 server for images. EC2 API related options are not supported.
s3_use_ssl = False (Boolean) DEPRECATED: Whether to use SSL when talking to S3. EC2 API related options are not supported.
Description of serial console configuration options
Configuration option = Default value Description
[serial_console]  
base_url = ws://127.0.0.1:6083/

(String) The URL an end user would use to connect to the nova-serialproxy service.

The nova-serialproxy service is called with this token-enriched URL and establishes the connection to the proper instance.

Possible values:

  • <scheme>://<IP-address>:<port-number>

Services which consume this:

  • nova-compute

Interdependencies to other options:

  • The IP address must be identical to the address to which the nova-serialproxy service is listening (see option serialproxy_host in this section).
  • The port must be the same as in the option serialproxy_port of this section.
  • If you choose to use a secured websocket connection, then start this option with wss:// instead of the unsecured ws://. The options cert and key in the [DEFAULT] section have to be set for that.
enabled = False

(Boolean) Enable the serial console feature.

In order to use this feature, the service nova-serialproxy needs to run. This service is typically executed on the controller node.

Possible values:

  • True: Enables the feature
  • False: Disables the feature

Services which consume this:

  • nova-compute

Interdependencies to other options:

  • None
port_range = 10000:20000

(String) A range of TCP ports a guest can use for its backend.

Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, this instance won’t get launched.

Possible values:

  • Each string which passes the regex \d+:\d+, for example 10000:20000. Be sure that the first port number is lower than the second port number.

Services which consume this:

  • nova-compute

Interdependencies to other options:

  • None
proxyclient_address = 127.0.0.1

(String) The IP address to which proxy clients (like nova-serialproxy) should connect to get the serial console of an instance.

This is typically the IP address of the host of a nova-compute service.

Possible values:

  • An IP address

Services which consume this:

  • nova-compute

Interdependencies to other options:

  • None
serialproxy_host = 0.0.0.0

(String) The IP address which is used by the nova-serialproxy service to listen for incoming requests.

The nova-serialproxy service listens on this IP address for incoming connection requests to instances which expose serial console.

Possible values:

  • An IP address

Services which consume this:

  • nova-serialproxy

Interdependencies to other options:

  • Ensure that this is the same IP address which is defined in the option base_url of this section or use 0.0.0.0 to listen on all addresses.
serialproxy_port = 6083

(Port number) The port number which is used by the nova-serialproxy service to listen for incoming requests.

The nova-serialproxy service listens on this port number for incoming connection requests to instances which expose serial console.

Possible values:

  • A port number

Services which consume this:

  • nova-serialproxy

Interdependencies to other options:

  • Ensure that this is the same port number which is defined in the option base_url of this section.
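Combining the options above, a working serial console setup spans the compute node and the node running nova-serialproxy. A minimal sketch (hostname and IP addresses are illustrative):

```ini
[serial_console]
# Enable the feature; nova-serialproxy must be running.
enabled = True
# Public URL end users connect to (example controller hostname).
base_url = ws://controller.example.com:6083/
# TCP ports the guests on this compute node may use for their backends.
port_range = 10000:20000
# Address of this compute host, used by proxy clients to reach it.
proxyclient_address = 192.0.2.10
# Where nova-serialproxy listens (set on the proxy node).
serialproxy_host = 0.0.0.0
serialproxy_port = 6083
```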
Description of SPICE configuration options
Configuration option = Default value Description
[spice]  
agent_enabled = True (Boolean) Enable the spice guest agent support.
enabled = False (Boolean) Enable spice related features.
html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html (String) Location of spice HTML5 console proxy, in the form “http://127.0.0.1:6082/spice_auto.html”
html5proxy_host = 0.0.0.0 (String) Host on which to listen for incoming requests
html5proxy_port = 6082 (Port number) Port on which to listen for incoming requests
keymap = en-us (String) Keymap for spice
server_listen = 127.0.0.1 (String) IP address on which instance spice server should listen
server_proxyclient_address = 127.0.0.1 (String) The address to which proxy clients (like nova-spicehtml5proxy) should connect
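A minimal SPICE configuration for a compute node could look like the following sketch (the proxy hostname and compute address are example values):

```ini
[spice]
enabled = True
agent_enabled = True
# Public proxy location clients connect to (example hostname).
html5proxy_base_url = http://controller.example.com:6082/spice_auto.html
# On the compute node: where the instance SPICE server listens and
# the address proxy clients use to reach this host.
server_listen = 127.0.0.1
server_proxyclient_address = 192.0.2.10
```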
Description of testing configuration options
Configuration option = Default value Description
[DEFAULT]  
fake_network = False (Boolean) This option is used mainly in testing to avoid calls to the underlying network utilities.
monkey_patch = False

(Boolean) Determine if monkey patching should be applied.

Related options:

  • monkey_patch_modules: This must have values set for this option to have any effect
monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator

(List) List of modules/decorators to monkey patch.

This option allows you to patch a decorator for all functions in specified modules.

Possible values:

  • nova.compute.api:nova.notifications.notify_decorator
  • nova.api.ec2.cloud:nova.notifications.notify_decorator
  • [...]

Related options:

  • monkey_patch: This must be set to True for this option to have any effect
Description of trusted computing configuration options
Configuration option = Default value Description
[trusted_computing]  
attestation_api_url = /OpenAttestationWebServices/V1.0

(String) The URL on the attestation server to use. See the attestation_server help text for more information about host verification.

This value must be just the path portion of the full URL, as it will be joined to the host specified in the attestation_server option.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.

Related options:

  • attestation_server
  • attestation_server_ca_file
  • attestation_port
  • attestation_auth_blob
  • attestation_auth_timeout
  • attestation_insecure_ssl
attestation_auth_blob = None

(String) Attestation servers require a specific blob that is used to authenticate. The content and format of the blob are determined by the particular attestation server being used. There is no default value; you must supply the value as specified by your attestation service. See the attestation_server help text for more information about host verification.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.

Related options:

  • attestation_server
  • attestation_server_ca_file
  • attestation_port
  • attestation_api_url
  • attestation_auth_timeout
  • attestation_insecure_ssl
attestation_auth_timeout = 60

(Integer) This value controls how long a successful attestation is cached. Once this period has elapsed, a new attestation request will be made. See the attestation_server help text for more information about host verification.

The value is in seconds. Valid values must be positive integers for any caching; setting this to zero or a negative value will result in calls to the attestation_server for every request, which may impact performance.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.

Related options:

  • attestation_server
  • attestation_server_ca_file
  • attestation_port
  • attestation_api_url
  • attestation_auth_blob
  • attestation_insecure_ssl
attestation_insecure_ssl = False

(Boolean) When set to True, the SSL certificate verification is skipped for the attestation service. See the attestation_server help text for more information about host verification.

Valid values are True or False. The default is False.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.

Related options:

  • attestation_server
  • attestation_server_ca_file
  • attestation_port
  • attestation_api_url
  • attestation_auth_blob
  • attestation_auth_timeout
attestation_port = 8443

(String) The port to use when connecting to the attestation server. See the attestation_server help text for more information about host verification.

Valid values are strings, not integers, but must be digits only.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.

Related options:

  • attestation_server
  • attestation_server_ca_file
  • attestation_api_url
  • attestation_auth_blob
  • attestation_auth_timeout
  • attestation_insecure_ssl
attestation_server = None

(String) The host to use as the attestation server.

Cloud computing pools can involve thousands of compute nodes located at different geographical locations, making it difficult for cloud providers to identify a node’s trustworthiness. When using the Trusted filter, users can request that their VMs only be placed on nodes that have been verified by the attestation server specified in this option.

The value is a string, and can be either an IP address or FQDN.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.

Related options:

  • attestation_server_ca_file
  • attestation_port
  • attestation_api_url
  • attestation_auth_blob
  • attestation_auth_timeout
  • attestation_insecure_ssl
attestation_server_ca_file = None

(String) The absolute path to the certificate to use for authentication when connecting to the attestation server. See the attestation_server help text for more information about host verification.

The value is a string, and must point to a file that is readable by the scheduler.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.

Related options:

  • attestation_server
  • attestation_port
  • attestation_api_url
  • attestation_auth_blob
  • attestation_auth_timeout
  • attestation_insecure_ssl
Description of upgrade levels configuration options
Configuration option = Default value Description
[cells]  
scheduler = nova.cells.scheduler.CellsScheduler

(String) Cells scheduler

The class of the driver used by the cells scheduler. This should be the full Python path to the class to be used. If nothing is specified in this option, the CellsScheduler is used.

[upgrade_levels]  
baseapi = None (String) Set a version cap for messages sent to the base api in any service
cells = None

(String) Cells version

Cells client-side RPC API version. Use this option to set a version cap for messages sent to local cells services.

Possible values:

  • None: This is the default value.
  • grizzly: message version 1.6.
  • havana: message version 1.24.
  • icehouse: message version 1.27.
  • juno: message version 1.29.
  • kilo: message version 1.34.
  • liberty: message version 1.37.

Services which consume this:

  • nova-cells

Related options:

  • None
cert = None

(String) Specifies the maximum version for messages sent from cert services. This should be the minimum value that is supported by all of the deployed cert services.

Possible values:

Any valid OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’. Alternatively, it can be any string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’.

Services which consume this:

  • nova-cert

Related options:

  • None
compute = None (String) Set a version cap for messages sent to compute services. Set this option to “auto” if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment. Otherwise, you can set this to a specific version to pin this service to messages at a particular level. All services of a single type (i.e. compute) should be configured to use the same version, and it should be set to the minimum commonly-supported version of all those services in the deployment.
conductor = None (String) Set a version cap for messages sent to conductor services
console = None (String) Set a version cap for messages sent to console services
consoleauth = None (String) Set a version cap for messages sent to consoleauth services
intercell = None

(String) Intercell version

Intercell RPC API is the client side of the Cell<->Cell RPC API. Use this option to set a version cap for messages sent between cells services.

Possible values:

  • None: This is the default value.
  • grizzly: message version 1.0.

Services which consume this:

  • nova-cells

Related options:

  • None
network = None (String) Set a version cap for messages sent to network services
scheduler = None

(String) Sets a version cap (limit) for messages sent to scheduler services. In the situation where there were multiple scheduler services running, and they were not being upgraded together, you would set this to the lowest deployed version to guarantee that other services never send messages that any of your running schedulers cannot understand.

This is rarely needed in practice as most deployments run a single scheduler. It exists mainly for design compatibility with the other services, such as compute, which are routinely upgraded in a rolling fashion.

Services which consume this:

  • nova-compute
  • nova-conductor

Related options:

  • None
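As a sketch of how the version caps above are used during a rolling upgrade, the [upgrade_levels] section might contain (values are illustrative):

```ini
[upgrade_levels]
# Let the compute RPC module determine the version automatically
# from the service versions deployed.
compute = auto
# Alternatively, pin a service to a release name or an explicit
# version cap, e.g.:
# conductor = mitaka
# scheduler = 2.0
```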
Description of VMware configuration options
Configuration option = Default value Description
[vmware]  
api_retry_count = 10 (Integer) Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc.
ca_file = None (String) Specifies the CA bundle file to be used in verifying the vCenter server certificate.
cache_prefix = None

(String) This option adds a prefix to the folder where cached images are stored.

This is not the full path, just a folder prefix. This should only be used when a datastore cache is shared between compute nodes.

Note: This should only be used when the compute nodes are running on the same host or they have a shared file system.

Possible values:

  • Any string representing the cache prefix to the folder
cluster_name = None (String) Name of a VMware Cluster ComputeResource.
console_delay_seconds = None (Integer) Set this value if affected by an increased network latency causing repeated characters when typing in a remote console.
datastore_regex = None

(String) Regular expression pattern to match the name of datastore.

The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex=”nas.*” selects all the datastores that have a name starting with “nas”.

NOTE: If no regex is given, the datastore with the most free space is picked.

Possible values:

  • Any regular expression that matches the names of the datastores to be used
host_ip = None (String) Hostname or IP address for connection to VMware vCenter host.
host_password = None (String) Password for connection to VMware vCenter host.
host_port = 443 (Port number) Port for connection to VMware vCenter host.
host_username = None (String) Username for connection to VMware vCenter host.
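A minimal vCenter connection sketch using the options above (the address, credentials, and cluster name are all example values):

```ini
[vmware]
# Example vCenter endpoint and credentials.
host_ip = 192.0.2.20
host_port = 443
host_username = administrator@vsphere.local
host_password = VMWARE_PASS
cluster_name = cluster1
# Restrict Compute to datastores whose names start with "nas".
datastore_regex = nas.*
```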
insecure = False

(Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification.

Related options:

  • ca_file: This option is ignored if “ca_file” is set.
integration_bridge = None

(String) This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set.

Possible values:

  • Any valid string representing the name of the integration bridge
maximum_objects = 100

(Integer) This option specifies the limit on the maximum number of objects to return in a single result.

A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests.

pbm_default_policy = None

(String) This option specifies the default policy to be used.

If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used.

Possible values:

  • Any valid storage policy such as VSAN default storage policy

Related options:

  • pbm_enabled
pbm_enabled = False

(Boolean) This option enables or disables storage policy based placement of instances.

Related options:

  • pbm_default_policy
pbm_wsdl_location = None

(String) This option specifies the PBM service WSDL file location URL.

Not setting this will disable storage policy based placement of instances.

Possible values:

  • Any valid file path or URL pointing to the PBM service WSDL file

serial_port_proxy_uri = None

(String) Identifies a proxy service that provides network access to the serial_port_service_uri.

Possible values:

  • Any valid URI

This option is ignored if serial_port_service_uri is not specified.

Related options:

  • serial_port_service_uri
serial_port_service_uri = None

(String) Identifies the remote system where the serial port traffic will be sent.

This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be a virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs.

Possible values:

  • Any valid URI
task_poll_interval = 0.5 (Floating point) Time interval in seconds to poll remote tasks invoked on VMware VC server.
use_linked_clone = True

(Boolean) This option enables/disables the use of linked clone.

The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the OpenStack Image service.

If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM.

wsdl_location = None

(String) This option specifies the VIM Service WSDL location.

If vSphere API version 5.1 or later is being used, this option can be ignored. If the version is less than 5.1, WSDL files must be hosted locally and their location must be specified by this option.

Optional override to the default location, for bug workarounds.

Possible values:

  • Any valid file path or URL pointing to the VIM Service WSDL file

Description of VNC configuration options
Configuration option = Default value Description
[DEFAULT]  
daemon = False (Boolean) Run as a background process.
key = None (String) SSL key file (if separate from cert).
record = None (String) Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done.
source_is_ipv6 = False (Boolean) Set to True if source host is addressed with IPv6.
ssl_only = False (Boolean) Disallow non-encrypted connections.
web = /usr/share/spice-html5 (String) Path to directory with content which will be served by a web server.
[vmware]  
vnc_port = 5900

(Port number) This option specifies the VNC starting port.

Every VM created by an ESX host has an option of enabling a VNC client for remote connection. This option sets the default starting port for those VNC clients.

Possible values:

  • Any valid port number within 5900 to (5900 + vnc_port_total)

Related options: The options below should be set to enable the VNC client.

  • vnc.enabled = True
  • vnc_port_total
vnc_port_total = 10000 (Integer) Total number of VNC ports.
[vnc]  
enabled = True

(Boolean) Enable VNC related features.

Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest.

keymap = en-us

(String) Keymap for VNC.

The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default.

Possible values:

  • A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an ‘IETF language tag’ (for example ‘en-us’). If you use QEMU as hypervisor, you should find the list of supported keyboard layouts at /usr/share/qemu/keymaps.
novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html

(URI) Public address of noVNC VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions.

Related options:

  • novncproxy_host
  • novncproxy_port
novncproxy_host = 0.0.0.0

(String) IP address that the noVNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the private address to which the noVNC console proxy service should bind.

Related options:

  • novncproxy_port
  • novncproxy_base_url
novncproxy_port = 6080

(Port number) Port that the noVNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client.

This option sets the private port to which the noVNC console proxy service should bind.

Related options:

  • novncproxy_host
  • novncproxy_base_url
vncserver_listen = 127.0.0.1 (String) The IP address or hostname on which an instance should listen to for incoming VNC connection requests on this node.
vncserver_proxyclient_address = 127.0.0.1

(String) Private, internal IP address or hostname of VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients.

This option sets the private address to which proxy clients, such as nova-xvpvncproxy, should connect.

xvpvncproxy_base_url = http://127.0.0.1:6081/console

(URI) Public URL address of XVP VNC console proxy.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the public base URL to which client systems will connect. XVP clients can use this address to connect to the XVP instance and, by extension, the VNC sessions.

Related options:

  • xvpvncproxy_host
  • xvpvncproxy_port
xvpvncproxy_host = 0.0.0.0

(String) IP address or hostname that the XVP VNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the private address to which the XVP VNC console proxy service should bind.

Related options:

  • xvpvncproxy_port
  • xvpvncproxy_base_url
xvpvncproxy_port = 6081

(Port number) Port that the XVP VNC console proxy should bind to.

The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based.

This option sets the private port to which the XVP VNC console proxy service should bind.

Related options:

  • xvpvncproxy_host
  • xvpvncproxy_base_url
Description of volumes configuration options
Configuration option = Default value Description
[DEFAULT]  
block_device_allocate_retries = 60

(Integer) Number of times to retry block device allocation on failures. Starting with Liberty, Cinder can use image volume cache. This may help with block device allocation performance. Look at the cinder image_volume_cache_enabled configuration option.

Possible values:

  • 60 (default)
  • If value is 0, then one attempt is made.
  • Any negative value is treated as 0.
  • For any value > 0, total attempts are (value + 1)
block_device_allocate_retries_interval = 3 (Integer) Waiting time interval (seconds) between block device allocation retries on failures
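
As a worked example of the retry semantics above, the following fragment makes nova attempt block device allocation up to four times in total (value + 1), waiting three seconds between attempts; the values are illustrative:

```ini
[DEFAULT]
# 3 retries => up to 4 total allocation attempts
block_device_allocate_retries = 3
# Seconds to wait between attempts
block_device_allocate_retries_interval = 3
```
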
my_block_storage_ip = $my_ip

(String) The IP address which is used to connect to the block storage network.

Possible values:

  • String with valid IP address. Default is IP address of this host.

Related options:

  • my_ip - if my_block_storage_ip is not set, then my_ip value is used.
volume_usage_poll_interval = 0 (Integer) Interval in seconds for gathering volume usages
[cinder]  
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
catalog_info = volumev2:cinderv2:publicURL

(String) Info to match when looking for cinder in the service catalog.

Possible values:

  • Format is separated values of the form: <service_type>:<service_name>:<endpoint_type>

Related options:

  • endpoint_template - Setting this option will override catalog_info
certfile = None (String) PEM encoded client certificate cert file
cross_az_attach = True

(Boolean) Allow attach between instance and volume in different availability zones.

If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not “volume” because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance. If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach.

endpoint_template = None

(String) If this option is set, it overrides the service catalog lookup with this template for the cinder endpoint.

Related options:

  • catalog_info - If endpoint_template is not set, catalog_info will be used.
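
The precedence between catalog_info and endpoint_template can be sketched as follows; the endpoint URL is a placeholder, and the %(project_id)s substitution is shown only as an assumed example:

```ini
[cinder]
# Normally the cinder endpoint is looked up in the service catalog:
catalog_info = volumev2:cinderv2:publicURL

# If set, endpoint_template overrides the catalog lookup entirely
# (placeholder URL; %(project_id)s is an assumed substitution):
# endpoint_template = http://cinder.example.com:8776/v2/%(project_id)s
```
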
http_retries = 3

(Integer) Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4.

Possible values:

  • Any integer value. 0 means connection is attempted only once
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
os_region_name = None

(String) Region name of this node. This is used when picking the URL in the service catalog.

Possible values:

  • Any string representing region name
timeout = None (Integer) Timeout value for http requests
[hyperv]  
force_volumeutils_v1 = False (Boolean) DEPRECATED: Force V1 volume utility class
volume_attach_retry_count = 10

(Integer) Volume attach retry count

The number of times to retry attaching a volume. This option is used to avoid incorrectly returning ‘no data’ when the system is under load. Volume attachment is retried until it succeeds or the given retry count is reached. To prepare the Hyper-V node to attach volumes provided by cinder, first make sure the Windows iSCSI initiator service is running and set to start automatically.

Possible values:

  • Positive integer values (Default: 10).

Related options:

  • Time interval between attachment attempts is declared with volume_attach_retry_interval option.
volume_attach_retry_interval = 5

(Integer) Volume attach retry interval

Interval between volume attachment attempts, in seconds.

Possible values:

  • Time in seconds (Default: 5).

Related options:

  • This option is meaningful when volume_attach_retry_count is greater than 1.
  • The retry loop runs with volume_attach_retry_count and volume_attach_retry_interval configuration options.
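
With the defaults above, a Hyper-V node keeps retrying for roughly volume_attach_retry_count × volume_attach_retry_interval = 10 × 5 = 50 seconds before giving up. A sketch with the defaults made explicit:

```ini
[hyperv]
# 10 attempts, 5 seconds apart => roughly 50 seconds before giving up
volume_attach_retry_count = 10
volume_attach_retry_interval = 5
```
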
[libvirt]  
glusterfs_mount_point_base = $state_path/mnt (String) Directory where the glusterfs volume is mounted on the compute node
nfs_mount_options = None (String) Mount options passed to the NFS client. See section of the nfs man page for details
nfs_mount_point_base = $state_path/mnt (String) Directory where the NFS volume is mounted on the compute node
num_aoe_discover_tries = 3 (Integer) Number of times to rediscover AoE target to find volume
num_iscsi_scan_tries = 5 (Integer) Number of times to rescan iSCSI target to find volume
num_iser_scan_tries = 5 (Integer) Number of times to rescan iSER target to find volume
qemu_allowed_storage_drivers = (List) Protocols listed here will be accessed directly from QEMU. Currently supported protocols: [gluster]
rbd_secret_uuid = None (String) The libvirt UUID of the secret for rbd_user volumes
rbd_user = None (String) The RADOS client name for accessing rbd volumes
scality_sofs_config = None (String) Path or URL to Scality SOFS configuration file
scality_sofs_mount_point = $state_path/scality (String) Base dir where Scality SOFS shall be mounted
smbfs_mount_options = (String) Mount options passed to the SMBFS client. See mount.cifs man page for details. Note that the libvirt-qemu uid and gid must be specified.
smbfs_mount_point_base = $state_path/mnt (String) Directory where the SMBFS shares are mounted on the compute node
[xenserver]  
block_device_creation_timeout = 10 (Integer) Time in secs to wait for a block device to be created
Description of VPN configuration options
Configuration option = Default value Description
[DEFAULT]  
dmz_cidr =

(List) This option is a list of zero or more IP address ranges in your network’s DMZ that should be accepted.

Possible values:

A list of strings, each of which should be a valid CIDR.
vpn_ip = $my_ip

(String) This is the public IP address for the cloudpipe VPN servers. It defaults to the IP address of the host.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’.

Possible values:

Any valid IP address. The default is $my_ip, the IP address of the host.

Related options:

network_manager, use_neutron, vpn_start
vpn_start = 1000

(Port number) This is the port number to use as the first VPN port for private networks.

Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’, or if you specify a value for the ‘vpn_start’ parameter when creating a network.

Possible values:

Any integer representing a valid port number. The default is 1000.

Related options:

use_neutron, vpn_ip, network_manager
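
A minimal sketch combining the VPN options above, assuming a nova-network deployment with VlanManager; the IP address is a placeholder:

```ini
[DEFAULT]
# These options only take effect with nova-network and VlanManager
network_manager = nova.network.manager.VlanManager
# Public address of the cloudpipe VPN servers (placeholder)
vpn_ip = 203.0.113.5
# First VPN port assigned to private networks
vpn_start = 1000
```
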
Description of WSGI configuration options
Configuration option = Default value Description
[wsgi]  
api_paste_config = api-paste.ini

(String) This option represents a file name for the paste.deploy config for nova-api.

Possible values:

  • A string representing the file name of the paste.deploy config.

client_socket_timeout = 900 (Integer) This option specifies the timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0.
default_pool_size = 1000 (Integer) This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option.
keep_alive = True

(Boolean) This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse.

Possible values:

  • True : reuse HTTP connection.
  • False : closes the client socket connection explicitly.

Related options:

  • tcp_keepidle
max_header_line = 16384

(Integer) This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).

Since TCP is a stream-based protocol, in order to reuse a connection, HTTP has to have a way to indicate the end of the previous response and the beginning of the next. Hence, in a keep_alive case, all messages must have a self-defined message length.

secure_proxy_ssl_header = None

(String) This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy.

Possible values:

  • None (default) - the request scheme is not influenced by any HTTP headers.
  • Valid HTTP header, like HTTP_X_FORWARDED_PROTO
ssl_ca_file = None

(String) This option allows setting path to the CA certificate file that should be used to verify connecting clients.

Possible values:

  • String representing path to the CA certificate file.

Related options:

  • enabled_ssl_apis
ssl_cert_file = None

(String) This option allows setting path to the SSL certificate of API server.

Possible values:

  • String representing path to the SSL certificate.

Related options:

  • enabled_ssl_apis
ssl_key_file = None

(String) This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect.

Possible values:

  • String representing path to the SSL private key.

Related options:

  • enabled_ssl_apis
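
The three SSL options above take effect together with the enabled_ssl_apis option they reference; an illustrative fragment, where the certificate paths are placeholders:

```ini
[DEFAULT]
# APIs to serve over SSL (referenced by the [wsgi] options below)
enabled_ssl_apis = osapi_compute

[wsgi]
ssl_cert_file = /etc/nova/ssl/nova-api.crt
ssl_key_file = /etc/nova/ssl/nova-api.key
# Optional: verify connecting clients against this CA
# ssl_ca_file = /etc/nova/ssl/ca.crt
```
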
tcp_keepidle = 600

(Integer) This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X.

Related options:

  • keep_alive
wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f

(String) It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.

This option is used for building custom request loglines.

Possible values:

  • ‘%(client_ip)s “%(request_line)s” status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f’ (default)
  • Any formatted string formed by specific values.
Description of Xen configuration options
Configuration option = Default value Description
[DEFAULT]  
console_driver = nova.console.xvp.XVPConsoleProxy

(String) Nova-console proxy is used to set up multi-tenant VM console access. This option allows pluggable driver program for the console session and represents driver to use for the console proxy.

Possible values:

  • ‘nova.console.xvp.XVPConsoleProxy’ (default) or a string representing the fully qualified class name of the console driver.
[libvirt]  
xen_hvmloader_path = /usr/lib/xen/boot/hvmloader (String) Location where the Xen hvmloader is kept
[xenserver]  
agent_path = usr/sbin/xe-update-networking

(String) Path to locate guest agent on the server.

Specifies the path in which the XenAPI guest agent should be located. If the agent is present, network configuration is not injected into the image.

Related options: For this option to have an effect:

  • flat_injected should be set to True
  • compute_driver should be set to xenapi.XenAPIDriver

agent_resetnetwork_timeout = 60

(Integer) Number of seconds to wait for agent’s reply to resetnetwork request.

This indicates the amount of time the xapi ‘agent’ plugin waits for the agent to respond to the ‘resetnetwork’ request specifically. The generic timeout for agent communication, agent_timeout, is ignored in this case.

agent_timeout = 30

(Integer) Number of seconds to wait for agent’s reply to a request.

Nova configures/performs certain administrative actions on a server with the help of an agent that’s installed on the server. The communication between Nova and the agent is achieved via sharing messages, called records, over xenstore, a shared storage across all the domains on a Xenserver host. Operations performed by the agent on behalf of nova are: ‘version’, ‘key_init’, ‘password’, ‘resetnetwork’, ‘inject_file’, and ‘agentupdate’.

To perform one of the above operations, the xapi ‘agent’ plugin writes the command and its associated parameters to a certain location known to the domain and awaits response. On being notified of the message, the agent performs appropriate actions on the server and writes the result back to xenstore. This result is then read by the xapi ‘agent’ plugin to determine the success/failure of the operation.

This config option determines how long the xapi ‘agent’ plugin shall wait to read the response off of xenstore for a given request/command. If the agent on the instance fails to write the result in this time period, the operation is considered to have timed out.

Related options:

  • agent_version_timeout
  • agent_resetnetwork_timeout

agent_version_timeout = 300

(Integer) Number of seconds to wait for agent’s reply to version request.

This indicates the amount of time the xapi ‘agent’ plugin waits for the agent to respond to the ‘version’ request specifically. The generic timeout for agent communication, agent_timeout, is ignored in this case.

During the build process the ‘version’ request is used to determine if the agent is available/operational to perform other requests such as ‘resetnetwork’, ‘password’, ‘key_init’ and ‘inject_file’. If the ‘version’ call fails, the other configuration is skipped. So, this configuration option can also be interpreted as time in which agent is expected to be fully operational.
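
The three agent timeouts apply to different requests; a sketch with the defaults made explicit:

```ini
[xenserver]
# Generic timeout for agent requests
agent_timeout = 30
# Overrides agent_timeout for the 'version' request made at build time
agent_version_timeout = 300
# Overrides agent_timeout for the 'resetnetwork' request
agent_resetnetwork_timeout = 60
```
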

cache_images = all

(String) Cache glance images locally.

The value for this option must be chosen from the choices listed here. Configuring a value other than these will default to ‘all’.

Note: There is nothing that deletes these images.

Possible values:

  • all: will cache all images.
  • some: will only cache images that have the image_property cache_in_nova=True.
  • none: turns off caching entirely.
check_host = True

(Boolean) Ensure compute service is running on host XenAPI connects to. This option must be set to false if the ‘independent_compute’ option is set to true.

Possible values:

  • Setting this option to true will make sure that compute service is running on the same host that is specified by connection_url.
  • Setting this option to false skips the check.

Related options:

  • independent_compute
connection_concurrent = 5 (Integer) Maximum number of concurrent XenAPI connections. Used only if compute_driver=xenapi.XenAPIDriver
connection_password = None (String) Password for connection to XenServer/Xen Cloud Platform
connection_url = None

(String) URL for connection to XenServer/Xen Cloud Platform. A special value of unix://local can be used to connect to the local unix socket.

Possible values:

  • Any string that represents a URL. The connection_url is generally the management network IP address of the XenServer.
  • This option must be set if you chose the XenServer driver.
connection_username = root (String) Username for connection to XenServer/Xen Cloud Platform
default_os_type = linux (String) Default OS type used when uploading an image to glance
disable_agent = False

(Boolean) Disables the use of XenAPI agent.

This configuration option determines whether the use of the agent should be enabled or not, regardless of what image properties are present. Image properties have an effect only when this is set to False. Read the description of the config option use_agent_default for more information.

Related options:

  • use_agent_default

image_compression_level = None

(Integer) Compression level for images.

By setting this option we can configure the gzip compression level. This option sets GZIP environment variable before spawning tar -cz to force the compression level. It defaults to none, which means the GZIP environment variable is not set and the default (usually -6) is used.

Possible values:

  • Range is 1-9, e.g., 9 for gzip -9, 9 being most compressed but most CPU intensive on dom0.
  • Any values out of this range will default to None.
image_upload_handler = nova.virt.xenapi.image.glance.GlanceStore (String) Dom0 plugin driver used to handle image uploads.
independent_compute = False

(Boolean) Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host.

Related options:

  • CONF.flat_injected (Must be False)
  • CONF.xenserver.check_host (Must be False)
  • CONF.default_ephemeral_format (Must be unset or ‘ext3’)
  • Joining host aggregates (will error if attempted)
  • Swap disks for Windows VMs (will error if attempted)
  • Nova-based auto_configure_disk (will error if attempted)
introduce_vdi_retry_wait = 20

(Integer) Number of seconds to wait for SR to settle if the VDI does not exist when first introduced.

Some SRs, particularly iSCSI connections, are slow to see the VDIs right after they are introduced. Setting this option to a time interval makes the SR wait for that period before raising a VDI-not-found exception.

ipxe_boot_menu_url = None

(String) URL to the iPXE boot menu.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

By default this option is not set. Enable this option to boot an iPXE ISO.

Related Options:

  • ipxe_network_name
  • ipxe_mkisofs_cmd
ipxe_mkisofs_cmd = mkisofs

(String) Name and optionally path of the tool used for ISO image creation.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

Note: By default mkisofs is not present in the Dom0, so the package can either be manually added to Dom0 or include the mkisofs binary in the image itself.

Related Options:

  • ipxe_network_name
  • ipxe_boot_menu_url
ipxe_network_name = None

(String) Name of network to use for booting iPXE ISOs.

An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image.

By default this option is not set. Enable this option to boot an iPXE ISO.

Related Options:

  • ipxe_boot_menu_url
  • ipxe_mkisofs_cmd
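
Booting an iPXE ISO therefore requires the related options to be set together; an illustrative fragment where the URL and network name are placeholders:

```ini
[xenserver]
# Placeholders: point these at your own boot menu and network
ipxe_boot_menu_url = http://boot.example.com/menu.ipxe
ipxe_network_name = ipxe-provisioning-net
# mkisofs must be made available in Dom0 or shipped in the image
ipxe_mkisofs_cmd = mkisofs
```
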
login_timeout = 10 (Integer) Timeout in seconds for XenAPI login.
max_kernel_ramdisk_size = 16777216

(Integer) Maximum size in bytes of kernel or ramdisk images.

Specifying the maximum size of kernel or ramdisk images avoids copying large files to dom0 and filling up /boot/guest.

num_vbd_unplug_retries = 10 (Integer) Maximum number of retries to unplug VBD. If set to 0, only one attempt is made, with no retries.
ovs_integration_bridge = xapi1

(String) The name of the integration Bridge that is used with xenapi when connecting with Open vSwitch.

Note: The value of this config option is dependent on the environment, therefore this configuration value must be set accordingly if you are using XenAPI.

Possible values:

  • Any string that represents a bridge name (default is xapi1).
remap_vbd_dev = False (Boolean) Used to enable the remapping of VBD dev. (Works around an issue in Ubuntu Maverick)
remap_vbd_dev_prefix = sd

(String) Specify prefix to remap VBD dev to (ex. /dev/xvdb -> /dev/sdb).

Related options:

  • If remap_vbd_dev is set to False this option has no impact.
running_timeout = 60 (Integer) Number of seconds to wait for instance to go to running state
sparse_copy = True (Boolean) Whether to use sparse_copy for copying data on a resize down. (False will use standard dd). This speeds up resizes down considerably since large runs of zeros won’t have to be rsynced.
sr_base_path = /var/run/sr-mount (String) Base path to the storage repository on the XenServer host.
sr_matching_filter = default-sr:true

(String) Filter for finding the SR to be used to install guest instances on.

Possible values:

  • To use the Local Storage in default XenServer/XCP installations set this flag to other-config:i18n-key=local-storage.
  • To select an SR with a different matching criteria, you could set it to other-config:my_favorite_sr=true.
  • To fall back on the Default SR, as displayed by XenCenter, set this flag to: default-sr:true.
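
For example, to install guests on the local storage of a default XenServer/XCP installation instead of the pool default SR:

```ini
[xenserver]
# Match the local-storage SR rather than the default SR
sr_matching_filter = other-config:i18n-key=local-storage
```
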
target_host = None

(String) The iSCSI Target Host.

This option represents the hostname or IP of the iSCSI Target. If the target host is not present in the connection information from the volume provider then the value from this option is taken.

Possible values:

  • Any string that represents hostname/ip of Target.
target_port = 3260

(String) The iSCSI Target Port.

This option represents the port of the iSCSI Target. If the target port is not present in the connection information from the volume provider then the value from this option is taken.

torrent_base_url = None (String) Base URL for torrent files; must contain a slash character (see RFC 1808, step 6)
torrent_download_stall_cutoff = 600 (Integer) Number of seconds a download can remain at the same progress percentage w/o being considered a stall
torrent_images = none

(String) Whether or not to download images via BitTorrent.

The value for this option must be chosen from the choices listed here. Configuring a value other than these will default to ‘none’.

Possible values:

  • all: will download all images.
  • some: will only download images that have the image_property bittorrent=true.
  • none: turns off downloading images via BitTorrent.
torrent_listen_port_end = 6891 (Port number) End of port range to listen on
torrent_listen_port_start = 6881 (Port number) Beginning of port range to listen on
torrent_max_last_accessed = 86400 (Integer) Cached torrent files not accessed within this number of seconds can be reaped
torrent_max_seeder_processes_per_host = 1 (Integer) Maximum number of seeder processes to run concurrently within a given dom0. (-1 = no limit)
torrent_seed_chance = 1.0 (Floating point) Probability that peer will become a seeder. (1.0 = 100%)
torrent_seed_duration = 3600 (Integer) Number of seconds after downloading an image via BitTorrent that it should be seeded for other peers.
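
An illustrative fragment enabling BitTorrent downloads only for suitably tagged images; the base URL is a placeholder:

```ini
[xenserver]
# Only images with the image property bittorrent=true use BitTorrent
torrent_images = some
# Placeholder; must end with a slash (see RFC 1808, step 6)
torrent_base_url = http://torrents.example.com/
torrent_listen_port_start = 6881
torrent_listen_port_end = 6891
```
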
use_agent_default = False

(Boolean) Whether or not to use the agent by default when its usage is enabled but not indicated by the image.

The use of XenAPI agent can be disabled altogether using the configuration option disable_agent. However, if it is not disabled, the use of an agent can still be controlled by the image in use through one of its properties, xenapi_use_agent. If this property is either not present or specified incorrectly on the image, the use of agent is determined by this configuration option.

Note that if this configuration is set to True when the agent is not present, the boot times will increase significantly.

Related options:

  • disable_agent

use_join_force = True

(Boolean) When adding a new host to a pool, this will append a --force flag to the command, forcing hosts to join the pool even if they have different CPUs.

Since XenServer version 5.6 it is possible to create a pool of hosts that have different CPU capabilities. To accommodate CPU differences, XenServer limited the features it uses to determine CPU compatibility to only those exposed by the CPU, and added support for CPU masking. Despite this effort to level out differences between CPUs, it is still possible that adding a new host will fail, so the option to force the join was introduced.

vhd_coalesce_max_attempts = 20

(Integer) Max number of times to poll for VHD to coalesce.

This option determines the maximum number of attempts that can be made for coalescing the VHD before giving up.

Related options:

  • vhd_coalesce_poll_interval
vhd_coalesce_poll_interval = 5.0

(Floating point) The interval, in seconds, used for polling for VHD coalescing.

This is the interval after which coalescing of the VHD is attempted, until the maximum number of attempts set by vhd_coalesce_max_attempts is reached.

Related options:

  • vhd_coalesce_max_attempts
vif_driver = nova.virt.xenapi.vif.XenAPIBridgeDriver (String) The XenAPI VIF driver using XenServer Network APIs.
[xvp]  
console_xvp_conf = /etc/xvp.conf (String) Generated XVP conf file
console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template (String) XVP conf template
console_xvp_log = /var/log/xvp.log (String) XVP log file
console_xvp_multiplex_port = 5900 (Port number) Port for XVP to multiplex VNC connections on
console_xvp_pid = /var/run/xvp.pid (String) XVP master process pid file

New, updated, and deprecated options in Newton for Compute

New options
Option = default value (Type) Help string
[DEFAULT] pointer_model = usbtablet (String) Generic property to specify the pointer type.
[DEFAULT] sync_power_state_pool_size = 1000 (Integer) Number of greenthreads available for use to sync power states.
[DEFAULT] vendordata_dynamic_connect_timeout = 5 (Integer) Maximum wait time for an external REST service to connect.
[DEFAULT] vendordata_dynamic_read_timeout = 5 (Integer) Maximum wait time for an external REST service to return data once connected.
[DEFAULT] vendordata_dynamic_ssl_certfile = (String) Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against.
[DEFAULT] vendordata_dynamic_targets = (List) A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>.
[DEFAULT] vendordata_providers = (List) A list of vendordata providers.
[barbican] auth_endpoint = http://localhost:5000/v3 (String) Use this endpoint to connect to Keystone
[barbican] barbican_api_version = None (String) Version of the Barbican API, for example: “v1”
[barbican] barbican_endpoint = None (String) Use this endpoint to connect to Barbican, for example: “http://localhost:9311/”
[barbican] number_of_retries = 60 (Integer) Number of times to retry poll for key creation completion
[barbican] retry_delay = 1 (Integer) Number of seconds to wait before retrying poll for key creation completion
[cloudpipe] boot_script_template = $pybasedir/nova/cloudpipe/bootscript.template (String) Template for cloudpipe instance boot script.
[cloudpipe] dmz_mask = 255.255.255.0 (Unknown) Netmask to push into OpenVPN config.
[cloudpipe] dmz_net = 10.0.0.0 (Unknown) Network to push into OpenVPN config.
[cloudpipe] vpn_flavor = m1.tiny (String) Flavor for VPN instances.
[cloudpipe] vpn_image_id = 0 (String) Image ID used when starting up a cloudpipe VPN client.
[cloudpipe] vpn_key_suffix = -vpn (String) Suffix to add to project name for VPN key and secgroups
[crypto] ca_file = cacert.pem (String) Filename of root CA (Certificate Authority). This is a container format and includes root certificates.
[crypto] ca_path = $state_path/CA (String) Directory path where root CA is located.
[crypto] crl_file = crl.pem (String) Filename of root Certificate Revocation List (CRL). This is a list of certificates that have been revoked, and therefore, entities presenting those (revoked) certificates should no longer be trusted.
[crypto] key_file = private/cakey.pem (String) Filename of a private key.
[crypto] keys_path = $state_path/keys (String) Directory path where keys are located.
[crypto] project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s (String) Subject for certificate for projects, %s for project, timestamp
[crypto] use_project_ca = False (Boolean) Option to enable/disable use of CA for each project.
[crypto] user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s (String) Subject for certificate for users, %s for project, user, timestamp
[glance] debug = False (Boolean) Enable or disable debug logging with glanceclient.
[glance] use_glance_v1 = False (Boolean) DEPRECATED: This flag allows reverting to glance v1 if for some reason glance v2 doesn’t work in your environment. This will only exist in Newton, and a fully working Glance v2 will be a hard requirement in Ocata.
[hyperv] enable_remotefx = False (Boolean) Enable RemoteFX feature
[ironic] auth_section = None (Unknown) Config Section from which to load plugin specific options
[ironic] auth_type = None (Unknown) Authentication type to load
[ironic] certfile = None (String) PEM encoded client certificate cert file
[ironic] insecure = False (Boolean) Verify HTTPS connections.
[ironic] keyfile = None (String) PEM encoded client certificate key file
[ironic] timeout = None (Integer) Timeout value for http requests
[key_manager] api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager (String) The full class name of the key manager API class
[key_manager] fixed_key = None (String) Fixed key returned by key manager, specified in hex.
[libvirt] enabled_perf_events = (List) A list of performance events to enable in the libvirt domain XML so that event statistics can be collected from libvirt.
[libvirt] vzstorage_cache_path = None (String) Path to the SSD cache file.
[libvirt] vzstorage_log_path = /var/log/pstorage/%(cluster_name)s/nova.log.gz (String) Path to vzstorage client log.
[libvirt] vzstorage_mount_group = qemu (String) Mount owner group name.
[libvirt] vzstorage_mount_opts = (List) Extra mount options for pstorage-mount
[libvirt] vzstorage_mount_perms = 0770 (String) Mount access mode.
[libvirt] vzstorage_mount_point_base = $state_path/mnt (String) Directory where the Virtuozzo Storage clusters are mounted on the compute node.
[libvirt] vzstorage_mount_user = stack (String) Mount owner user name.
[os_vif_linux_bridge] flat_interface = None (String) FlatDhcp will bridge into this interface if set
[os_vif_linux_bridge] forward_bridge_interface = ['all'] (Multi-valued) An interface that bridges can forward to. If this is set to all then all traffic will be forwarded. Can be specified multiple times.
[os_vif_linux_bridge] iptables_bottom_regex = (String) Regular expression to match the iptables rule that should always be on the bottom.
[os_vif_linux_bridge] iptables_drop_action = DROP (String) The iptables chain to jump to when a packet is to be dropped.
[os_vif_linux_bridge] iptables_top_regex = (String) Regular expression to match the iptables rule that should always be on the top.
[os_vif_linux_bridge] network_device_mtu = 1500 (Integer) MTU setting for network interface.
[os_vif_linux_bridge] use_ipv6 = False (Boolean) Use IPv6
[os_vif_linux_bridge] vlan_interface = None (String) VLANs will bridge into this interface if set
[os_vif_ovs] network_device_mtu = 1500 (Integer) MTU setting for network interface.
[os_vif_ovs] ovs_vsctl_timeout = 120 (Integer) Amount of time, in seconds, that ovs_vsctl should wait for a response from the database. 0 is to wait forever.
[remote_debug] host = None (String) Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host.
[remote_debug] port = None (Port number) Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on a different host.
[vif_plug_linux_bridge_privileged] capabilities = [] (Unknown) List of Linux capabilities retained by the privsep daemon.
[vif_plug_linux_bridge_privileged] group = None (String) Group that the privsep daemon should run as.
[vif_plug_linux_bridge_privileged] helper_command = None (String) Command to invoke to start the privsep daemon if not using the “fork” method.
[vif_plug_linux_bridge_privileged] user = None (String) User that the privsep daemon should run as.
[vif_plug_ovs_privileged] capabilities = [] (Unknown) List of Linux capabilities retained by the privsep daemon.
[vif_plug_ovs_privileged] group = None (String) Group that the privsep daemon should run as.
[vif_plug_ovs_privileged] helper_command = None (String) Command to invoke to start the privsep daemon if not using the “fork” method.
[vif_plug_ovs_privileged] user = None (String) User that the privsep daemon should run as.
[wsgi] api_paste_config = api-paste.ini (String) This option represents a file name for the paste.deploy config for nova-api.
[wsgi] client_socket_timeout = 900 (Integer) This option specifies the timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0.
[wsgi] default_pool_size = 1000 (Integer) This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option.
[wsgi] keep_alive = True (Boolean) This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse.
[wsgi] max_header_line = 16384 (Integer) This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
[wsgi] secure_proxy_ssl_header = None (String) This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy.
[wsgi] ssl_ca_file = None (String) This option allows setting path to the CA certificate file that should be used to verify connecting clients.
[wsgi] ssl_cert_file = None (String) This option allows setting path to the SSL certificate of API server.
[wsgi] ssl_key_file = None (String) This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect.
[wsgi] tcp_keepidle = 600 (Integer) This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X.
[wsgi] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f (String) It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
[xenserver] independent_compute = False (Boolean) Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host.
[xvp] console_xvp_conf = /etc/xvp.conf (String) Generated XVP conf file
[xvp] console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template (String) XVP conf template
[xvp] console_xvp_log = /var/log/xvp.log (String) XVP log file
[xvp] console_xvp_multiplex_port = 5900 (Port number) Port for XVP to multiplex VNC connections on
[xvp] console_xvp_pid = /var/run/xvp.pid (String) XVP master process pid file
New default values
Option Previous default value New default value
[ironic] api_endpoint None http://ironic.example.org:6385/
[neutron] region_name None RegionOne
Deprecated options
Deprecated option New Option
[DEFAULT] cert_manager None
[DEFAULT] cert_topic None
[DEFAULT] compute_available_monitors None
[DEFAULT] compute_manager None
[DEFAULT] compute_stats_class None
[DEFAULT] console_manager None
[DEFAULT] consoleauth_manager None
[DEFAULT] default_flavor None
[DEFAULT] driver None
[DEFAULT] enable_network_quota None
[DEFAULT] fatal_exception_format_errors None
[DEFAULT] image_decryption_dir None
[DEFAULT] manager None
[DEFAULT] metadata_manager None
[DEFAULT] quota_driver None
[DEFAULT] quota_networks None
[DEFAULT] s3_access_key None
[DEFAULT] s3_affix_tenant None
[DEFAULT] s3_host None
[DEFAULT] s3_port None
[DEFAULT] s3_secret_key None
[DEFAULT] s3_use_ssl None
[DEFAULT] scheduler_manager None
[DEFAULT] secure_proxy_ssl_header None
[DEFAULT] share_dhcp_address None
[DEFAULT] snapshot_name_template None
[DEFAULT] use_local None
[DEFAULT] vendordata_driver None
[barbican] catalog_info None
[barbican] endpoint_template None
[barbican] os_region_name None
[glance] admin_password None
[glance] filesystems None
[glance] use_glance_v1 None
[hyperv] force_volumeutils_v1 None
[ironic] admin_tenant_name None
[ironic] admin_url None
[ironic] admin_username None
[libvirt] checksum_base_images None
[libvirt] checksum_interval_seconds None
[libvirt] image_info_filename_pattern None
[libvirt] use_usb_tablet None
[matchmaker_redis] host None
[matchmaker_redis] password None
[matchmaker_redis] port None
[matchmaker_redis] sentinel_hosts None
[osapi_v21] extensions_blacklist None
[osapi_v21] extensions_whitelist None
[osapi_v21] project_id_regex None

A list of config options based on different topics can be found below:

Overview of nova.conf

The nova.conf configuration file uses the INI file format, as explained in Configuration file format.

You can use a particular configuration file by passing the --config-file parameter when you run one of the nova-* services. This parameter loads configuration option definitions from the specified file, which might be useful for debugging or performance tuning.

For a list of configuration options, see the tables in this guide.

To learn more about the nova.conf configuration file, review the general purpose configuration options documented in the table Description of common configuration options.

Important

Do not specify quotes around nova options.
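The warning above can be demonstrated with Python's standard configparser, which, like the parser OpenStack services use, treats quotation marks as part of the value (a minimal sketch; the option names are illustrative):

```python
# Sketch: how an INI parser reads nova-style options. Quotes are NOT
# stripped from values, which is why nova options must not be quoted.
import configparser

sample = """
[DEFAULT]
debug = true
quoted = "true"
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

print(parser["DEFAULT"]["debug"])   # true
print(parser["DEFAULT"]["quoted"])  # "true"  (quotes kept, so boolean parsing fails)
```

A service expecting a boolean would accept the first value but reject the quoted one.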

Sections

Configuration options are grouped by section. The Compute configuration file supports the following sections:

[DEFAULT]
Contains most configuration options. If the documentation for a configuration option does not specify its section, assume that it appears in this section.
[baremetal]
Configures the baremetal hypervisor driver.
[cells]
Configures cells functionality. For details, see the section called “Cells”.
[conductor]
Configures the nova-conductor service.
[database]
Configures the database that Compute uses.
[glance]
Configures how to access the Image service.
[hyperv]
Configures the Hyper-V hypervisor driver.
[image_file_url]
Configures additional filesystems to access the Image service.
[keymgr]
Configures the key manager.
[keystone_authtoken]
Configures authorization via Identity service.
[libvirt]
Configures the hypervisor drivers using the Libvirt library: KVM, LXC, Qemu, UML, Xen.
[matchmaker_redis]
Configures a Redis server.
[matchmaker_ring]
Configures a matchmaker ring.
[metrics]
Configures weights for the metrics weigher.
[neutron]
Configures Networking specific options.
[osapi_v3]
Configures the OpenStack Compute API v3.
[rdp]
Configures RDP proxying.
[serial_console]
Configures serial console.
[spice]
Configures virtual consoles using SPICE.
[ssl]
Configures certificate authority using SSL.
[trusted_computing]
Configures the trusted computing pools functionality and how to connect to a remote attestation service.
[upgrade_levels]
Configures version locking on the RPC (message queue) communications between the various Compute services to allow live upgrading an OpenStack installation.
[vmware]
Configures the VMware hypervisor driver.
[xenserver]
Configures the XenServer hypervisor driver.
[zookeeper]
Configures the ZooKeeper ServiceGroup driver.

Compute API configuration

The Compute API, run by the nova-api daemon, is the component of OpenStack Compute that receives and responds to user requests, whether they be direct API calls, or via the CLI tools or dashboard.

Configure Compute API password handling

The OpenStack Compute API enables users to specify an administrative password when they create or rebuild a server instance. If the user does not specify a password, a random password is generated and returned in the API response.

In practice, how the admin password is handled depends on the hypervisor in use and might require additional configuration of the instance. For example, you might have to install an agent to handle the password setting. If the hypervisor and instance configuration do not support setting a password at server create time, the password that is returned by the create API call is misleading because it was ignored.

To prevent this confusion, use the enable_instance_password configuration option to disable the return of the admin password for installations that do not support setting instance passwords.
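The effect of that option can be sketched as follows (a hypothetical helper, not nova's actual code; the dict keys mirror the API response):

```python
# Sketch: when enable_instance_password is False, the admin password is
# omitted from the server-create response (illustrative, not nova source).
def build_server_response(server, enable_instance_password=True):
    response = dict(server)
    if not enable_instance_password:
        # Drop adminPass so users are not misled by an ignored password.
        response.pop("adminPass", None)
    return response

resp = build_server_response({"id": "abc", "adminPass": "secret"},
                             enable_instance_password=False)
print(resp)  # {'id': 'abc'}
```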

Configuration options

The Compute API configuration options are documented in the tables below.

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
enable_new_services = True

(Boolean) Enable new services on this host automatically.

When a new service (for example “nova-compute”) starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new services in a disabled state and then enable them at a later point in time. This option sets this behavior for all services on a host.

Possible values:

  • True: Each new service is enabled as soon as it registers itself.
  • False: Services must be enabled via a REST API call or with the CLI with nova service-enable <hostname> <binary>, otherwise they are not ready to use.
enabled_apis = osapi_compute, metadata (List) A list of APIs to enable by default
enabled_ssl_apis = (List) A list of APIs with enabled SSL
instance_name_template = instance-%08x

(String) Template string to be used to generate instance names.

This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like instance-%(uuid)s. If you already have instances in your deployment when you change this, your deployment will break.

Possible values:

  • A string which either uses the instance database ID (like the default)
  • A string with a list of named database columns, for example %(id)d or %(uuid)s or %(hostname)s.

Related options:

  • not to be confused with: multi_instance_display_name_template
multi_instance_display_name_template = %(name)s-%(count)d

(String) When creating multiple instances with a single request using the os-multiple-create API extension, this template will be used to build the display name for each instance. The benefit is that the instances end up with different hostnames. Example display names when creating two VMs: name-1, name-2.

Possible values:

  • Valid keys for the template are: name, uuid, count.
non_inheritable_image_properties = cache_in_nova, bittorrent

(List) Image properties that should not be inherited from the instance when taking a snapshot.

This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots.

Possible values:

  • A list of image properties. Usually only the image properties that are needed by base images should be included here, since the snapshots that are created from the base images do not need them.
  • Default list: [‘cache_in_nova’, ‘bittorrent’]
null_kernel = nokernel (String) This option is used to decide when an image should have no external ramdisk or kernel. By default this is set to ‘nokernel’, so when an image is booted with the property ‘kernel_id’ with the value ‘nokernel’, Nova assumes the image doesn’t require an external kernel and ramdisk.
osapi_compute_link_prefix = None

(String) This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged.

Possible values:

  • Any string, including an empty string (the default).
osapi_compute_listen = 0.0.0.0 (String) The IP address on which the OpenStack API will listen.
osapi_compute_listen_port = 8774 (Port number) The port on which the OpenStack API will listen.
osapi_compute_workers = None (Integer) Number of workers for OpenStack API service. The default will be the number of CPUs available.
osapi_hide_server_address_states = building

(List) This option is a list of all instance states for which network address information should not be returned from the API.

Possible values:

A list of strings, where each string is a valid VM state, as defined in nova/compute/vm_states.py. As of the Newton release, they are:
  • “active”
  • “building”
  • “paused”
  • “suspended”
  • “stopped”
  • “rescued”
  • “resized”
  • “soft-delete”
  • “deleted”
  • “error”
  • “shelved”
  • “shelved_offloaded”
servicegroup_driver = db

(String) This option specifies the driver to be used for the servicegroup service.

ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver.

Possible Values:

  • db : Database ServiceGroup driver
  • mc : Memcache ServiceGroup driver

Related Options:

  • service_down_time (maximum time since last check-in for up service)
snapshot_name_template = snapshot-%s (String) DEPRECATED: Template string to be used to generate snapshot names This is not used anymore and will be removed in the O release.
use_forwarded_for = False

(Boolean) When True, the ‘X-Forwarded-For’ header is treated as the canonical remote address. When False (the default), the original remote address is used.

You should only enable this if you have a sanitizing proxy.

[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_versionedobjects]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
Description of API v2.1 configuration options
Configuration option = Default value Description
[osapi_v21]  
extensions_blacklist =

(List) DEPRECATED: This option is a list of all of the v2.1 API extensions to never load. However, it will be removed in the near future, after which all the functionality that was previously in extensions will be part of the standard API, and thus always accessible.

Possible values:

  • A list of strings, each being the alias of an extension that you do not wish to load.

Related options:

  • enabled
  • extensions_whitelist
extensions_whitelist =

(List) DEPRECATED: This is a list of extensions. If it is empty, then all extensions except those specified in the extensions_blacklist option will be loaded. If it is not empty, then only those extensions in this list will be loaded, provided that they are also not in the extensions_blacklist option. This option is deprecated and will be removed in the near future, after which all the functionality that was previously in extensions will be part of the standard API, and thus always accessible.

Possible values:

  • A list of strings, each being the alias of an extension that you wish to load, or an empty list, which indicates that all extensions are to be run.

Related options:

  • enabled
  • extensions_blacklist
project_id_regex = None

(String) DEPRECATED: This option is a string representing a regular expression (regex) that matches the project_id as contained in URLs. If not set, it will match normal UUIDs created by keystone.

Possible values:

  • A string representing any legal regular expression
Description of CA and SSL configuration options
Configuration option = Default value Description
[DEFAULT]  
cert = self.pem (String) Path to SSL certificate file.
cert_manager = nova.cert.manager.CertManager (String) DEPRECATED: Full class name for the Manager for cert
cert_topic = cert (String) DEPRECATED: Determines the RPC topic that the cert nodes listen on. For most deployments there is no need to ever change it. Since the nova-cert service is marked for deprecation, the feature to change the RPC topic that cert nodes listen on may be removed as early as the 15.0.0 release.

Configure resize

Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale according to user needs. For this feature to work properly, you might need to configure some underlying virt layers.

KVM

Resize on KVM is currently implemented by transferring the images between compute nodes over ssh. For KVM, you need hostnames to resolve properly and passwordless ssh access between your compute hosts. Direct access from one compute host to another is needed to copy the VM file across.

Cloud end users can find out how to resize a server by reading the OpenStack End User Guide.

XenServer

To get resize to work with XenServer (and XCP), you need to establish a root trust between all hypervisor nodes and provide an /image mount point to your hypervisor's dom0.

Database configuration

You can configure OpenStack Compute to use any SQLAlchemy-compatible database. The database name is nova. The nova-conductor service is the only service that writes to the database. The other Compute services access the database through the nova-conductor service.

To ensure that the database schema is current, run the following command:

# nova-manage db sync

If nova-conductor is not used, entries to the database are mostly written by the nova-scheduler service, although all services must be able to update entries in the database.

In either case, use the configuration option settings documented in Database configurations to configure the connection string for the nova database.
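As an illustration, a minimal [database] section for nova might look like the following (the host name controller and the password NOVA_DBPASS are placeholders):

```ini
[database]
# The SQLAlchemy connection string used to connect to the nova database
# (controller and NOVA_DBPASS are example values)
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```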

Fibre Channel support in Compute

Fibre Channel support in OpenStack Compute is remote block storage attached to compute nodes for VMs.

Fibre Channel is supported only by the KVM hypervisor.

Compute and Block Storage support Fibre Channel automatic zoning on Brocade and Cisco switches. On other hardware, Fibre Channel arrays must be pre-zoned or directly attached to the KVM hosts.

KVM host requirements

You must install these packages on the KVM host:

  • sysfsutils - Nova uses the systool application in this package.
  • sg3-utils or sg3_utils - Nova uses the sg_scan and sginfo applications.

Installing the multipath-tools or device-mapper-multipath package is optional.

iSCSI interface and offload support in Compute

Note

iSCSI interface and offload support has been available only since the Kilo release.

Compute supports open-iscsi iSCSI interfaces for offload cards. Offload hardware must be present and configured on every compute node where offload is desired. Once an open-iscsi interface is configured, the iface name (iface.iscsi_ifacename) should be passed to libvirt via the iscsi_iface parameter for use. All iSCSI sessions will be bound to this iSCSI interface.

Currently supported transports (iface.transport_name) are be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs. Configuration changes are required on the compute node only.

iSER is supported using the separate iSER LibvirtISERVolumeDriver and will be rejected if used via the iscsi_iface parameter.

iSCSI iface configuration
  • Note the distinction between the transport name (iface.transport_name) and iface name (iface.iscsi_ifacename). The actual iface name must be specified via the iscsi_iface parameter to libvirt for offload to work.

  • The default name for an iSCSI iface (open-iscsi parameter iface.iscsi_ifacename) is in the format transport_name.hwaddress when generated by iscsiadm.

  • iscsiadm can be used to view and generate current iface configuration. Every network interface that supports an open-iscsi transport can have one or more iscsi ifaces associated with it. If no ifaces have been configured for a network interface supported by an open-iscsi transport, this command will create a default iface configuration for that network interface. For example :

    # iscsiadm -m iface
    default tcp,<empty>,<empty>,<empty>,<empty>
    iser iser,<empty>,<empty>,<empty>,<empty>
    bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,<empty>,<empty>
    

    The output is in the format: iface_name transport_name,hwaddress,ipaddress, net_ifacename,initiatorname.

  • Individual iface configuration can be viewed via

    # iscsiadm -m iface -I IFACE_NAME
    # BEGIN RECORD 2.0-873
    iface.iscsi_ifacename = cxgb4i.00:07:43:28:b2:58
    iface.net_ifacename = <empty>
    iface.ipaddress = 102.50.50.80
    iface.hwaddress = 00:07:43:28:b2:58
    iface.transport_name = cxgb4i
    iface.initiatorname = <empty>
    # END RECORD
    

    Configuration can be updated as desired via

    # iscsiadm -m iface -I IFACE_NAME --op=update -n iface.SETTING -v VALUE
    
  • All iface configurations need a minimum of iface.iscsi_ifacename, iface.transport_name and iface.hwaddress to be correctly configured to work. Some transports may require iface.ipaddress and iface.net_ifacename as well to bind correctly.

    Detailed configuration instructions can be found at http://www.open-iscsi.org/docs/README.
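The iscsiadm -m iface output format described above can be split into its fields mechanically; a small sketch (the parsing helper is hypothetical, and the sample line is taken from the output shown earlier):

```python
# Sketch: split one line of `iscsiadm -m iface` output into its fields:
# iface_name, then transport_name,hwaddress,ipaddress,net_ifacename,initiatorname.
def parse_iface_line(line):
    iface_name, rest = line.split(None, 1)
    transport, hwaddress, ipaddress, net_ifacename, initiatorname = rest.split(",")
    return {
        "iface_name": iface_name,
        "transport_name": transport,
        "hwaddress": hwaddress,
        "ipaddress": ipaddress,
        "net_ifacename": net_ifacename,
        "initiatorname": initiatorname,
    }

rec = parse_iface_line("bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,<empty>,<empty>")
print(rec["transport_name"], rec["ipaddress"])  # bnx2i 5.10.10.20
```

Note how the default iface name is indeed transport_name.hwaddress, as stated above.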

Hypervisors

Hypervisor configuration basics

The node where the nova-compute service is installed and runs is the same node that hosts all of the virtual machines. This node is referred to as the compute node in this guide.

By default, the selected hypervisor is KVM. To change to another hypervisor, change the virt_type option in the [libvirt] section of nova.conf and restart the nova-compute service.

Here are the general nova.conf options that are used to configure the compute node’s hypervisor: Description of hypervisor configuration options

Specific options for particular hypervisors can be found in the following sections.

KVM

KVM is configured as the default hypervisor for Compute.

Note

This document contains several sections about hypervisor selection. If you are reading this document linearly, you do not want to load the KVM module before you install nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules, which sets the correct permissions on the /dev/kvm device node.

To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:

compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm

The KVM hypervisor supports the following virtual machine image formats:

  • Raw
  • QEMU Copy-on-write (qcow2)
  • QED Qemu Enhanced Disk
  • VMware virtual machine disk format (vmdk)

This section describes how to enable KVM on your system. For more information, see the following distribution-specific documentation:

Enable KVM

The following sections outline how to enable KVM based hardware virtualization on different architectures and platforms. To perform these steps, you must be logged in as the root user.

For x86 based systems
  1. To determine whether the svm or vmx CPU extensions are present, run this command:

    # grep -E 'svm|vmx' /proc/cpuinfo
    

    This command generates output if the CPU is capable of hardware-virtualization. Even if output is shown, you might still need to enable virtualization in the system BIOS for full support.

    If no output appears, consult your system documentation to ensure that your CPU and motherboard support hardware virtualization. Verify that any relevant hardware virtualization options are enabled in the system BIOS.

    The BIOS for each manufacturer is different. If you must enable virtualization in the BIOS, look for an option containing the words virtualization, VT, VMX, or SVM.

  2. To list the loaded kernel modules and verify that the kvm modules are loaded, run this command:

    # lsmod | grep kvm
    

    If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.

    If the output does not show that the kvm module is loaded, run this command to load it:

    # modprobe -a kvm
    

    Run the command for your CPU. For Intel, run this command:

    # modprobe -a kvm-intel
    

    For AMD, run this command:

    # modprobe -a kvm-amd
    

    Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.

    If the kernel modules do not load automatically, use the procedures listed in these subsections.

If the checks indicate that required hardware virtualization support or kernel modules are disabled or unavailable, you must either enable this support on the system or find a system with this support.

Note

Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the previous command did not produce output, reboot your machine, enter the system BIOS, and enable the VT option.

If KVM acceleration is not supported, configure Compute to use a different hypervisor, such as QEMU or Xen. See QEMU or XenServer (and other XAPI based Xen variants) for details.
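The svm/vmx check from step 1 can also be expressed in Python, which makes the logic explicit (a minimal sketch; the sample flags line is illustrative):

```python
# Sketch: mirrors `grep -E 'svm|vmx' /proc/cpuinfo` on a cpuinfo-style string.
import re

def has_hw_virt(cpuinfo_text):
    # svm = AMD-V, vmx = Intel VT-x
    return re.search(r"\b(svm|vmx)\b", cpuinfo_text) is not None

sample = "flags\t\t: fpu vme de pse tsc msr pae vmx ssse3"
print(has_hw_virt(sample))  # True
```

On a real host you would pass the contents of /proc/cpuinfo; remember that a match still does not guarantee that virtualization is enabled in the BIOS.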

These procedures help you load the kernel modules for Intel-based and AMD-based processors if they do not load automatically during KVM installation.

Intel-based processors

If your compute host is Intel-based, run these commands as root to load the kernel modules:

# modprobe kvm
# modprobe kvm-intel

Add these lines to the /etc/modules file so that these modules load on reboot:

kvm
kvm-intel

AMD-based processors

If your compute host is AMD-based, run these commands as root to load the kernel modules:

# modprobe kvm
# modprobe kvm-amd

Add these lines to /etc/modules file so that these modules load on reboot:

kvm
kvm-amd
For POWER based systems

KVM as a hypervisor is supported on the PowerNV platform of POWER systems.

  1. To determine if your POWER platform supports KVM based virtualization run the following command:

    # cat /proc/cpuinfo | grep PowerNV
    

    If the previous command generates the following output, then your CPU supports KVM based virtualization.

    platform: PowerNV
    

    If no output is displayed, then your POWER platform does not support KVM based hardware virtualization.

  2. To list the loaded kernel modules and verify that the kvm modules are loaded, run the following command:

    # lsmod | grep kvm
    

    If the output includes kvm_hv, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.

    If the output does not show that the kvm module is loaded, run the following command to load it:

    # modprobe -a kvm
    

    For PowerNV platform, run the following command:

    # modprobe -a kvm-hv
    

    Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.

Configure Compute backing storage

Backing Storage is the storage used to provide the expanded operating system image, and any ephemeral storage. Inside the virtual machine, this is normally presented as two virtual hard disks (for example, /dev/vda and /dev/vdb respectively). However, inside OpenStack, this can be derived from one of three methods: lvm, qcow or raw, chosen using the images_type option in nova.conf on the compute node.

QCOW is the default backing store. It uses a copy-on-write philosophy to delay allocation of storage until it is actually needed. This means that the space required for the backing of an image can be significantly less on the real disk than what seems available in the virtual machine operating system.

RAW creates files without any sort of file formatting, effectively creating files with the plain binary one would normally see on a real disk. This can increase performance, but means that the entire size of the virtual disk is reserved on the physical disk.

Local LVM volumes can also be used. Set images_volume_group = nova_local where nova_local is the name of the LVM group you have created.
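Putting the backing-store options together, an LVM-backed compute node might carry settings like these (assuming the options live in the [libvirt] section; nova_local is an example volume group name):

```ini
[libvirt]
# Use LVM logical volumes for instance disks instead of qcow2 files
images_type = lvm
# Name of the LVM volume group you created for instance storage (example)
images_volume_group = nova_local
```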

Specify the CPU model of KVM guests

The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include:

  • To maximize performance of virtual machines by exposing new host CPU features to the guest
  • To ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults

In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model names. These models are defined in the /usr/share/libvirt/cpu_map.xml file. Check this file to determine which models are supported by your local installation.
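Inspecting which model names a file in the cpu_map.xml layout defines can be done with a few lines of Python (a sketch; the XML snippet below is a simplified illustration of the real file's structure, not a copy of it):

```python
# Sketch: list CPU model names from a cpu_map.xml-style document.
import xml.etree.ElementTree as ET

sample = """
<cpus>
  <arch name="x86">
    <model name="Nehalem"/>
    <model name="Westmere"/>
  </arch>
</cpus>
"""

root = ET.fromstring(sample)
models = [m.get("name") for m in root.iter("model")]
print(models)  # ['Nehalem', 'Westmere']
```

On a real host you would parse /usr/share/libvirt/cpu_map.xml instead of the inline sample.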

Two Compute configuration options in the [libvirt] group of nova.conf define which type of CPU model is exposed to the hypervisor when using KVM: cpu_mode and cpu_model.

The cpu_mode option can take one of the following values: none, host-passthrough, host-model, and custom.

Host model (default for KVM & QEMU)

If your nova.conf file contains cpu_mode=host-model, libvirt identifies the CPU model in /usr/share/libvirt/cpu_map.xml file that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance and maintains good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs.

Host pass through

If your nova.conf file contains cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. The difference from host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the best performance, and can be important to some apps which check low level CPU details, but it comes at a cost with respect to migration. The guest can only be migrated to a matching host CPU.

Custom

If your nova.conf file contains cpu_mode=custom, you can explicitly specify one of the supported named models using the cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:

[libvirt]
cpu_mode = custom
cpu_model = Nehalem
None (default for all libvirt-driven hypervisors other than KVM & QEMU)

If your nova.conf file contains cpu_mode=none, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model.

Guest agent support

Use guest agents to enable optional access between compute nodes and guests through a socket, using the QMP protocol.

To enable this feature, you must set hw_qemu_guest_agent=yes as a metadata parameter on the image from which you wish to create guest-agent-capable instances. You can explicitly disable the feature by setting hw_qemu_guest_agent=no in the image metadata.
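For example, to set the property on an existing image with the glance client (IMAGE_UUID is a placeholder for your image's UUID):

```console
$ glance image-update IMAGE_UUID --property hw_qemu_guest_agent=yes
```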

KVM performance tweaks

The VHostNet kernel module improves network performance. To load the kernel module, run the following command as root:

# modprobe vhost_net
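To load the module automatically at boot on Debian-based systems (a common approach; adjust for your distribution's mechanism), run as root:

```console
# echo vhost_net >> /etc/modules
```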
Troubleshoot KVM

If launching a new virtual machine instance fails, the instance enters the ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:

libvirtError: internal error no supported architecture for os type 'hvm'

This message indicates that the KVM kernel modules were not loaded.
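You can verify which KVM modules are loaded, and load the appropriate one for your CPU, as root:

```console
# lsmod | grep kvm
# modprobe kvm_intel     # use kvm_amd on AMD hosts
```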

If you cannot start VMs after installation without rebooting, the permissions might not be set correctly. This can happen if you load the KVM module before you install nova-compute. To check whether the group is set to kvm, run:

# ls -l /dev/kvm

If it is not set to kvm, run:

# udevadm trigger
QEMU

From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment.

The typical use cases for QEMU are:

  • Running on older hardware that lacks virtualization support.
  • Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests.

To enable QEMU, add these settings to nova.conf:

compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = qemu

For some operations you may also have to install the guestmount utility:

On Ubuntu:

# apt-get install guestmount

On Red Hat Enterprise Linux, Fedora, or CentOS:

# yum install libguestfs-tools

On openSUSE:

# zypper install guestfs-tools

The QEMU hypervisor supports the following virtual machine image formats:

  • Raw
  • QEMU Copy-on-write (qcow2)
  • VMware virtual machine disk format (vmdk)
XenServer (and other XAPI based Xen variants)

This section describes XAPI managed hypervisors, and how to use them with OpenStack.

Terminology
Xen

A hypervisor that provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by XenProject.org, a cross-industry organization and a Linux Foundation Collaborative project.

Xen is a component of many different products and projects. The hypervisor itself is very similar across all these projects, but the way that it is managed can be different, which can cause confusion if you’re not clear which toolstack you are using. Make sure you know what toolstack you want before you get started. If you want to use Xen with libvirt in OpenStack Compute refer to Xen via libvirt.

XAPI

XAPI is one of the toolstacks that can control a Xen-based hypervisor. XAPI’s role is similar to libvirt’s in the KVM world. The API provided by XAPI is called XenAPI. To learn more about the provided interface, look at XenAPI Object Model Overview for definitions of XAPI specific terms such as SR, VDI, VIF and PIF.

OpenStack has a compute driver which talks to XAPI, so any XAPI managed server can be used with OpenStack.

XenAPI

XenAPI is the API provided by XAPI. This name is also used by the python library that is a client for XAPI. A set of packages to use XenAPI on existing distributions can be built using the xenserver/buildroot project.

XenServer

An Open Source virtualization platform that delivers all features needed for any server and datacenter implementation including the Xen hypervisor and XAPI for the management. For more information and product downloads, visit xenserver.org.

XCP

XCP is no longer supported. The XCP project recommends that all XCP users upgrade to the latest version of XenServer by visiting xenserver.org.

Privileged and unprivileged domains

A Xen host runs a number of virtual machines, VMs, or domains (the terms are synonymous on Xen). One of these is in charge of running the rest of the system, and is known as domain 0, or dom0. It is the first domain to boot after Xen, and owns the storage and networking hardware, the device drivers, and the primary control software. Any other VM is unprivileged, and is known as a domU or guest. All customer VMs are unprivileged, but you should note that on XenServer (and other XenAPI using hypervisors), the OpenStack Compute service (nova-compute) also runs in a domU. This gives a level of security isolation between the privileged system software and the OpenStack software (much of which is customer-facing). This architecture is described in more detail later.

Paravirtualized versus hardware virtualized domains

A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). This refers to the interaction between Xen, domain 0, and the guest VM’s kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and domain 0; this gives them better performance characteristics. HVM guests are not aware of their environment, and the hardware has to pretend that they are running on an unvirtualized machine. HVM guests do not need to modify the guest operating system, which is essential when running Windows.

In OpenStack, customer VMs may run in either PV or HVM mode. However, the OpenStack domU (that’s the one running nova-compute) must be running in PV mode.

XenAPI deployment architecture

A basic OpenStack deployment on a XAPI-managed server, assuming that the network provider is nova-network, looks like this:

_images/xenserver_architecture.png

Key things to note:

  • The hypervisor: Xen
  • Domain 0: runs XAPI and some small pieces from OpenStack, the XAPI plug-ins.
  • OpenStack VM: The Compute service runs in a paravirtualized virtual machine, on the host under management. Each host runs a local instance of Compute. It is also running an instance of nova-network.
  • OpenStack Compute uses the XenAPI Python library to talk to XAPI, and it uses the Management Network to reach from the OpenStack VM to Domain 0.

Some notes on the networking:

  • The above diagram assumes FlatDHCP networking.
  • There are three main OpenStack networks:
    • Management network: RabbitMQ, MySQL, inter-host communication, and compute-XAPI communication. Please note that the VM images are downloaded by the XenAPI plug-ins, so make sure that the OpenStack Image service is accessible through this network. It usually means binding those services to the management interface.
    • Tenant network: controlled by nova-network, this is used for tenant traffic.
    • Public network: floating IPs, public API endpoints.
  • The networks shown here must be connected to the corresponding physical networks within the data center. In the simplest case, three individual physical network cards could be used. It is also possible to use VLANs to separate these networks. Please note, that the selected configuration must be in line with the networking model selected for the cloud. (In case of VLAN networking, the physical channels have to be able to forward the tagged traffic.)
  • If you are using the Networking service, enable the Linux bridge module in Dom0, which the Compute service uses. nova-compute creates Linux bridges for security groups, and neutron-openvswitch-agent on the compute node applies security group rules on these Linux bridges. To enable this, remove /etc/modprobe.d/blacklist-bridge* in Dom0.
Install XenServer

Before you can run OpenStack with XenServer, you must install the hypervisor on an appropriate server.

Note

Xen is a type 1 hypervisor: When your server starts, Xen is the first software that runs. Consequently, you must install XenServer before you install the operating system where you want to run OpenStack code. You then install nova-compute into a dedicated virtual machine on the host.

Use the following link to download XenServer’s installation media:

When you install many servers, you might find it easier to perform PXE boot installations. You can also package any post-installation changes that you want to make to your XenServer by following the instructions for creating your own XenServer supplemental pack.

Important

Make sure you use the EXT type of storage repository (SR). Features that require access to VHD files (such as copy on write, snapshot and migration) do not work when you use the LVM SR. Storage repository (SR) is a XAPI-specific term relating to the physical storage where virtual disks are stored.

On the XenServer installation screen, choose the XenDesktop Optimized option. If you use an answer file, make sure you use srtype="ext" in the installation tag of the answer file.

Post-installation steps

The following steps need to be completed after the hypervisor’s installation:

  1. For resize and migrate functionality, enable password-less SSH authentication and set up the /images directory on dom0.
  2. Install the XAPI plug-ins.
  3. To support AMI type images, you must set up /boot/guest symlink/directory in dom0.
  4. Create a paravirtualized virtual machine that can run nova-compute.
  5. Install and configure nova-compute in the above virtual machine.
Install XAPI plug-ins

When you use a XAPI managed hypervisor, you can install a Python script (or any executable) on the host side, and execute it through XenAPI. These scripts are called plug-ins. The OpenStack related XAPI plug-ins live in OpenStack Compute’s code repository. These plug-ins have to be copied to dom0’s filesystem, to the appropriate directory, where XAPI can find them. It is important to ensure that the version of the plug-ins is in line with the OpenStack Compute installation you are using.

The plug-ins should typically be copied from the Nova installation running in the Compute service’s domU, but if you want to download the latest version, the following procedure can be used.

Manually installing the plug-ins

  1. Create temporary files/directories:

    $ NOVA_TARBALL=$(mktemp)
    $ NOVA_SOURCES=$(mktemp -d)
    
  2. Get the source from the openstack.org archives. The example assumes the master branch is used, and the XenServer host is accessible as xenserver. Match those parameters to your setup.

    $ NOVA_URL=https://tarballs.openstack.org/nova/nova-master.tar.gz
    $ wget -qO "$NOVA_TARBALL" "$NOVA_URL"
    $ tar xvf "$NOVA_TARBALL" -C "$NOVA_SOURCES"
    
  3. Copy the plug-ins to the hypervisor:

    $ PLUGINPATH=$(find $NOVA_SOURCES -path '*/xapi.d/plugins' -type d -print)
    $ tar -czf - -C "$PLUGINPATH" ./ |
    > ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins
    
  4. Remove temporary files/directories:

    $ rm "$NOVA_TARBALL"
    $ rm -rf "$NOVA_SOURCES"
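
The tar pipeline in step 3 streams the plug-in directory to the hypervisor without creating an intermediate archive file. The same technique can be sketched locally (the ssh hop is omitted here; all paths are temporary and illustrative):

```shell
# Stream a directory tree from SRC to DST via a tar pipe,
# mirroring the ssh-based copy used in step 3 (without the ssh hop).
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo 'example' > "$SRC/example_plugin.py"
# -czf - writes a gzipped archive to stdout; -xzf - reads it from stdin
tar -czf - -C "$SRC" ./ | tar -xzf - -C "$DST"
```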
    
Prepare for AMI type images

To support AMI type images in your OpenStack installation, you must create the /boot/guest directory on dom0. One of the OpenStack XAPI plugins will extract the kernel and ramdisk from AKI and ARI images and put them to that directory.

OpenStack maintains the contents of this directory and its size should not increase during normal operation. However, in case of power failures or accidental shutdowns, some files might be left over. To prevent these files from filling up dom0’s filesystem, set up this directory as a symlink that points to a subdirectory of the local SR.

Run these commands in dom0 to achieve this setup:

# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels"
# mkdir -p "$LOCALPATH"
# ln -s "$LOCALPATH" /boot/guest
Modify dom0 for resize/migration support

To resize servers with XenServer you must:

  • Establish a root trust between all hypervisor nodes of your deployment:

    To do so, generate an ssh key-pair with the ssh-keygen command. Ensure that the authorized_keys file on each of your dom0s (located at /root/.ssh/authorized_keys) contains the public key (found in /root/.ssh/id_rsa.pub) of every other dom0.
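    A minimal sketch of establishing this trust, run as root in each dom0 (the host name other-dom0 is illustrative):

```console
# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# ssh-copy-id root@other-dom0      # repeat for every other dom0 in the deployment
```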

  • Provide a /images mount point to the dom0 for your hypervisor:

    dom0 space is at a premium so creating a directory in dom0 is potentially dangerous and likely to fail especially when you resize large servers. The least you can do is to symlink /images to your local storage SR. The following instructions work for an English-based installation of XenServer and in the case of ext3-based SR (with which the resize functionality is known to work correctly).

    # LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
    # IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images"
    # mkdir -p "$IMG_DIR"
    # ln -s "$IMG_DIR" /images
    
XenAPI configuration reference

The following section discusses some commonly changed options when using the XenAPI driver. The table below provides a complete reference of all configuration options available for configuring XAPI with OpenStack.

The recommended way to use XAPI with OpenStack is through the XenAPI driver. To enable the XenAPI driver, add the following configuration options to /etc/nova/nova.conf and restart OpenStack Compute:

compute_driver = xenapi.XenAPIDriver
[xenserver]
connection_url = http://your_xenapi_management_ip_address
connection_username = root
connection_password = your_password

These connection details are used by OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer management console, to your XenServer node.

Note

The connection_url is generally the management network IP address of the XenServer.

Agent

The agent is a piece of software that runs on the instances, and communicates with OpenStack. In case of the XenAPI driver, the agent communicates with OpenStack through XenStore (see the Xen Project Wiki for more information on XenStore).

If you don’t have the guest agent on your VMs, it takes a long time for OpenStack Compute to detect that the VM has successfully started. Generally a large timeout is required for Windows instances, but you may want to adjust the agent_version_timeout option within the [xenserver] section.
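For example, to set the timeout to five minutes (the value shown is illustrative):

```ini
[xenserver]
# Seconds to wait for the guest agent to report its version
agent_version_timeout = 300
```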

VNC proxy address

Assuming you are talking to XAPI through a management network, and XenServer is at the address 10.10.1.34, specify the same address for the VNC proxy address: vncserver_proxyclient_address=10.10.1.34
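In nova.conf, this would look like the following (the address is the example value from above):

```ini
[DEFAULT]
# In newer releases this option may live in the [vnc] section instead
vncserver_proxyclient_address = 10.10.1.34
```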

Storage

You can specify which Storage Repository to use with nova by editing the following flag. To use the local storage set up by the default installer:

sr_matching_filter = "other-config:i18n-key=local-storage"

Another alternative is to use the “default” storage (for example if you have attached NFS or any other shared storage):

sr_matching_filter = "default-sr:true"
Image upload in tgz compressed format

To start uploading tgz compressed raw disk images to the Image service, configure xenapi_image_upload_handler by replacing GlanceStore with VdiThroughDevStore.

xenapi_image_upload_handler=nova.virt.xenapi.image.vdi_through_dev.VdiThroughDevStore

As opposed to:

xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
XenAPI configuration reference

To customize the XenAPI driver, use the configuration option settings documented in Description of Xen configuration options.

Xen via libvirt

OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be integrated with OpenStack Compute via the libvirt toolstack or via the XAPI toolstack. This section describes how to set up OpenStack Compute with Xen and libvirt. For information on how to set up Xen with XAPI refer to XenServer (and other XAPI based Xen variants).

Installing Xen with libvirt

At this stage we recommend using the baseline that we use for the Xen Project OpenStack CI Loop, which contains the most recent stability fixes to both Xen and libvirt.

Xen 4.5.1 (or newer) and libvirt 1.2.15 (or newer) contain the minimum required OpenStack improvements for Xen. Although libvirt 1.2.15 works with Xen, libvirt 1.3.2 or newer is recommended. The necessary Xen changes have also been backported to the Xen 4.4.3 stable branch. Please check whether the relevant versions of Xen and libvirt are available as installable packages for the Linux or FreeBSD distribution you intend to use as Dom 0.

The latest releases of Xen and libvirt packages that fulfil the above minimum requirements for the various openSUSE distributions can always be found and installed from the Open Build Service Virtualization project. To install these latest packages, add the Virtualization repository to your software management stack and get the newest packages from there. More information about the latest Xen and libvirt packages are available here and here.

Alternatively, it is possible to use the Ubuntu LTS 14.04 Xen Package 4.4.1-0ubuntu0.14.04.4 (Xen 4.4.1) and apply the patches outlined here. You can also use the Ubuntu LTS 14.04 libvirt package 1.2.2 libvirt_1.2.2-0ubuntu13.1.7 as baseline and update it to libvirt version 1.2.15, or 1.2.14 with the patches outlined here applied. Note that this will require rebuilding these packages partly from source.

For further information and latest developments, you may want to consult the Xen Project’s mailing lists for OpenStack related issues and questions.

Configuring Xen with libvirt

To enable Xen via libvirt, ensure the following options are set in /etc/nova/nova.conf on all hosts running the nova-compute service.

compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = xen
Additional configuration options

Use the following as a guideline for configuring Xen for use in OpenStack:

  1. Dom0 memory: Set it between 1GB and 4GB by adding the following parameter to the Xen Boot Options in the grub.conf file.

    dom0_mem=1024M
    

    Note

    The above memory limits are suggestions and should be based on the available compute host resources. For large hosts that will run many hundreds of instances, the suggested values may need to be higher.

    Note

    The location of the grub.conf file depends on the host Linux distribution that you are using. Please refer to the distro documentation for more details (see Dom 0 for more resources).

  2. Dom0 vcpus: Set the virtual CPUs to 4 and employ CPU pinning by adding the following parameters to the Xen Boot Options in the grub.conf file.

    dom0_max_vcpus=4 dom0_vcpus_pin
    

    Note

    Note that the above virtual CPU limits are suggestions and should be based on the available compute host resources. For large hosts that will run many hundreds of instances, the suggested values may need to be higher.
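    Combined, the dom0 settings from steps 1 and 2 might appear on a single Xen line in grub.conf (the kernel path is illustrative and varies by distribution):

```
kernel /boot/xen.gz dom0_mem=1024M dom0_max_vcpus=4 dom0_vcpus_pin
```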

  3. PV vs HVM guests: A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). The virtualization mode determines the interaction between Xen, Dom 0, and the guest VM’s kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and Dom 0. The choice of virtualization mode determines performance characteristics. For an overview of Xen virtualization modes, see Xen Guest Types.

    In OpenStack, customer VMs may run in either PV or HVM mode. The mode is a property of the operating system image used by the VM, and is changed by adjusting the image metadata stored in the Image service. The image metadata can be changed using the nova or glance commands.

    To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use one of the following two commands to set the vm_mode property to hvm:

    $ nova image-meta img-uuid set vm_mode=hvm
    
    $ glance image-update img-uuid --property vm_mode=hvm
    

    To choose PV mode, which is supported by NetBSD, FreeBSD, and Linux, use one of the following two commands:

    $ nova image-meta img-uuid set vm_mode=xen
    
    $ glance image-update img-uuid --property vm_mode=xen
    

    Note

    The default for virtualization mode in nova is PV mode.

  4. Image formats: Xen supports raw, qcow2 and vhd image formats. For more information on image formats, refer to the OpenStack Virtual Image Guide and the Storage Options Guide on the Xen Project Wiki.

  5. Image metadata: In addition to the vm_mode property discussed above, the hypervisor_type property is another important component of the image metadata, especially if your cloud contains mixed hypervisor compute nodes. Setting the hypervisor_type property allows the nova scheduler to select a compute node running the specified hypervisor when launching instances of the image. Image metadata such as vm_mode, hypervisor_type, architecture, and others can be set when importing the image to the Image service. The metadata can also be changed using the nova or glance commands:

    $ nova image-meta img-uuid set hypervisor_type=xen vm_mode=hvm
    
    $ glance image-update img-uuid --property hypervisor_type=xen --property vm_mode=hvm
    

    For more information on image metadata, refer to the OpenStack Virtual Image Guide.

  6. Libguestfs file injection: OpenStack compute nodes can use libguestfs to inject files into an instance’s image prior to launching the instance. libguestfs uses libvirt’s QEMU driver to start a qemu process, which is then used to inject files into the image. When using libguestfs for file injection, the compute node must have the libvirt qemu driver installed, in addition to the Xen driver. In RPM based distributions, the qemu driver is provided by the libvirt-daemon-qemu package. In Debian and Ubuntu, the qemu driver is provided by the libvirt-bin package.

To customize the libvirt driver, use the configuration option settings documented in Description of Xen configuration options.

Troubleshoot Xen with libvirt

Important log files: When an instance fails to start, or when you come across other issues, you should first consult the following log files:

If you need further help you can ask questions on the mailing lists xen-users@, wg-openstack@ or raise a bug against Xen.

Known issues
  • Networking: Xen via libvirt is currently only supported with nova-network. Fixes for a number of bugs are currently being worked on to make sure that Xen via libvirt will also work with OpenStack Networking (neutron).
  • Live migration: Live migration is supported in the libvirt libxl driver since version 1.2.5. However, there were a number of issues when used with OpenStack, in particular with libvirt migration protocol compatibility. It is worth mentioning that libvirt 1.3.0 addresses most of these issues. We do however recommend using libvirt 1.3.2, which is fully supported and tested as part of the Xen Project CI loop. It addresses live migration monitoring related issues and adds support for peer-to-peer migration mode, which nova relies on.
  • Live migration monitoring: On compute nodes running Kilo or later, live migration monitoring relies on libvirt APIs that are only implemented from libvirt version 1.3.1 onwards. With earlier libvirt versions, attempting a live migration causes the migration monitoring thread to crash and leaves the instance state as “MIGRATING”. If you experience such an issue and you are running a version released before libvirt 1.3.1, make sure you backport libvirt commits ad71665 and b7b4391 from upstream.
Additional information and resources

The following section contains links to other useful resources.

LXC (Linux containers)

LXC (also known as Linux containers) is a virtualization technology that works at the operating system level. This is different from hardware virtualization, the approach used by other hypervisors such as KVM, Xen, and VMware. LXC (as currently implemented using libvirt in the Compute service) is not a secure virtualization technology for multi-tenant environments (specifically, containers may affect resource quotas for other containers hosted on the same machine). Additional containment technologies, such as AppArmor, may be used to provide better isolation between containers, although this is not the case by default. For all these reasons, the choice of this virtualization technology is not recommended in production.

If your compute hosts do not have hardware support for virtualization, LXC will likely provide better performance than QEMU. In addition, if your guests must access specialized hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors.

Note

Some OpenStack Compute features might be missing when running with LXC as the hypervisor. See the hypervisor support matrix for details.

To enable LXC, ensure the following options are set in /etc/nova/nova.conf on all hosts running the nova-compute service.

compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = lxc

On Ubuntu, enable LXC support in OpenStack by installing the nova-compute-lxc package.

VMware vSphere
Introduction

OpenStack Compute supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS).

This section describes how to configure VMware-based virtual machine images for launch. The VMware driver supports vCenter version 5.5.0 and later.

The VMware vCenter driver enables the nova-compute service to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features.

The following sections describe how to configure the VMware vCenter driver.

High-level architecture

The following diagram shows a high-level view of the VMware driver architecture:

VMware driver architecture

_images/vmware-nova-driver-architecture.jpg

As the figure shows, the OpenStack Compute Scheduler sees three hypervisors that each correspond to a cluster in vCenter. nova-compute contains the VMware driver. You can run with multiple nova-compute services. It is recommended to run one nova-compute service per ESX cluster, ensuring that while Compute schedules at the granularity of the nova-compute service, it is in effect also able to schedule at the cluster level. In turn, the VMware driver inside nova-compute interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement.

The VMware vCenter driver also interacts with the Image service to copy VMDK images from the Image service back-end store. The dotted line in the figure represents VMDK images being copied from the OpenStack Image service to the vSphere data store. VMDK images are cached in the data store so the copy operation is only required the first time that the VMDK image is used.

After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in vCenter and can access vSphere advanced features. At the same time, the VM is visible in the OpenStack dashboard and you can manage it as you would any other OpenStack VM. You can perform advanced vSphere operations in vCenter while you configure OpenStack resources such as VMs through the OpenStack dashboard.

The figure does not show how networking fits into the architecture. Both nova-network and the OpenStack Networking Service are supported. For details, see Networking with VMware vSphere.

Configuration overview

To get started with the VMware vCenter driver, complete the following high-level steps:

  1. Configure vCenter. See Prerequisites and limitations.
  2. Configure the VMware vCenter driver in the nova.conf file. See VMware vCenter driver.
  3. Load desired VMDK images into the Image service. See Images with VMware vSphere.
  4. Configure networking with either nova-network or the Networking service. See Networking with VMware vSphere.
Prerequisites and limitations

Use the following list to prepare a vSphere environment that runs with the VMware vCenter driver:

Copying VMDK files
In vSphere 5.1, copying large image files (for example, 12 GB and greater) from the Image service can take a long time. To improve performance, VMware recommends that you upgrade to VMware vCenter Server 5.1 Update 1 or later. For more information, see the Release Notes.
DRS
For any cluster that contains multiple ESX hosts, enable DRS and enable fully automated placement.
Shared storage
Only shared storage is supported and data stores must be shared among all hosts in a cluster. It is recommended to remove data stores not intended for OpenStack from clusters being configured for OpenStack.
Clusters and data stores
Do not use OpenStack clusters and data stores for other purposes. If you do, OpenStack displays incorrect usage information.
Networking
The networking configuration depends on the desired networking model. See Networking with VMware vSphere.
Security groups

If you use the VMware driver with OpenStack Networking and the NSX plug-in, security groups are supported. If you use nova-network, security groups are not supported.

Note

The NSX plug-in is the only plug-in that is validated for vSphere.

VNC

The port range 5900 - 6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control.

Note

In addition to the default VNC port numbers (5900 to 6000) specified in the above document, the following ports are also used: 6101, 6102, and 6105.

You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB) which is then installed onto the running ESXi host or added to a custom image profile used to install ESXi hosts. For details about how to create a VIB for persisting the firewall configuration modifications, see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007381.

Note

The VIB can be downloaded from https://github.com/openstack-vmwareapi-team/Tools.

To use multiple vCenter installations with OpenStack, each vCenter must be assigned to a separate availability zone. This is required as the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations.

VMware vCenter service account

OpenStack integration requires a vCenter service account with the following minimum permissions. Apply the permissions to the Datacenter root object, and select the Propagate to Child Objects option.

vCenter permissions tree
All Privileges      
  Datastore    
    Allocate space  
    Browse datastore  
    Low level file operation  
    Remove file  
  Extension    
    Register extension  
  Folder    
    Create folder  
  Host    
    Configuration  
      Maintenance
      Network configuration
      Storage partition configuration
  Network    
    Assign network  
  Resource    
    Assign virtual machine to resource pool  
    Migrate powered off virtual machine  
    Migrate powered on virtual machine  
  Virtual Machine    
    Configuration  
      Add existing disk
      Add new disk
      Add or remove device
      Advanced
      CPU count
      Change resource
      Disk change tracking
      Host USB device
      Memory
      Modify device settings
      Raw device
      Remove disk
      Rename
      Swapfile placement
    Interaction  
      Configure CD media
      Power Off
      Power On
      Reset
      Suspend
    Inventory  
      Create from existing
      Create new
      Move
      Remove
      Unregister
    Provisioning  
      Clone virtual machine
      Customize
      Create template from virtual machine
    Snapshot management  
      Create snapshot
      Remove snapshot
  Sessions    
      Validate session
      View and stop sessions
  vApp    
    Export  
    Import  
VMware vCenter driver

Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute with vCenter. This recommended configuration enables access through vCenter to advanced vSphere features like vMotion, High Availability, and Dynamic Resource Scheduling (DRS).

VMwareVCDriver configuration options

Add the following VMware-specific configuration options to the nova.conf file:

[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = <vCenter hostname or IP address>
host_username = <vCenter username>
host_password = <vCenter password>
cluster_name = <vCenter cluster name>
datastore_regex = <optional datastore regex>

Note

  • Clusters: The vCenter driver can support only a single cluster. Clusters and data stores used by the vCenter driver should not contain any VMs other than those created by the driver.
  • Data stores: The datastore_regex setting specifies the data stores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with “nas”. If this line is omitted, Compute uses the first data store returned by the vSphere API. It is recommended not to use this field and instead remove data stores that are not intended for OpenStack.
  • Reserved host memory: The reserved_host_memory_mb option value is 512 MB by default. However, VMware recommends that you set this option to 0 MB because the vCenter driver reports the effective memory available to the virtual machines.
  • The vCenter driver generates the instance name from the instance ID; the instance name template is ignored.
  • The minimum supported vCenter version is 5.5.0. Starting in the OpenStack Ocata release, any version lower than 5.5.0 is logged as a warning. In the OpenStack Pike release, this minimum is enforced.
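The datastore_regex option described above is an ordinary regular expression. As an illustration outside of OpenStack, the following stand-in shows which hypothetical datastore names a pattern like nas.* would select, with grep playing the role of the driver's matcher (the datastore names are made up):

```shell
# Illustrative only: nas-prod-01, nas-prod-02, and local-ssd are made-up
# datastore names; grep stands in for the driver's regex matching.
printf '%s\n' nas-prod-01 nas-prod-02 local-ssd | grep -E '^nas.*'
# prints:
# nas-prod-01
# nas-prod-02
```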

A nova-compute service can control one or more clusters containing multiple ESXi hosts, making nova-compute a critical service from a high availability perspective. Because the host that runs nova-compute can fail while the vCenter and ESX still run, you must protect the nova-compute service against host failures.

Note

Many nova.conf options are relevant to libvirt but do not apply to this driver.

Images with VMware vSphere

The vCenter driver supports images in the VMDK format. Disks in this format can be obtained from VMware Fusion or from an ESX environment. It is also possible to convert other formats, such as qcow2, to the VMDK format using the qemu-img utility. After a VMDK disk is available, load it into the Image service. Then, you can use it with the VMware vCenter driver. The following sections provide additional details on the supported disks and the commands used for conversion and upload.

Supported image types

Upload images to the OpenStack Image service in VMDK format. The following VMDK disk types are supported:

  • VMFS Flat Disks (includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image service, it becomes a preallocated flat disk. This impacts the transfer time from the Image service to the data store when the full preallocated flat disk, rather than the thin disk, must be transferred.
  • Monolithic Sparse disks. Sparse disks get imported from the Image service into ESXi as thin provisioned disks. Monolithic Sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats using the qemu-img utility.
  • Stream-optimized disks. Stream-optimized disks are compressed sparse disks. They can be obtained from VMware vCenter/ESXi when exporting a VM to an OVF/OVA template.

The following table shows the vmware_disktype property that applies to each of the supported VMDK disk types:

OpenStack Image service disk type settings
vmware_disktype property VMDK disk type
sparse Monolithic Sparse
thin VMFS flat, thin provisioned
preallocated (default) VMFS flat, thick/zeroedthick/eagerzeroedthick
streamOptimized Compressed Sparse

The vmware_disktype property is set when an image is loaded into the Image service. For example, the following command creates a Monolithic Sparse image by setting vmware_disktype to sparse:

$ openstack image create \
  --disk-format vmdk \
  --container-format bare \
  --property vmware_disktype="sparse" \
  --property vmware_ostype="ubuntu64Guest" \
  ubuntu-sparse < ubuntuLTS-sparse.vmdk

Note

Specifying thin does not provide any advantage over preallocated with the current version of the driver. Future versions might restore the thin properties of the disk after it is downloaded to a vSphere data store.

The following table shows the vmware_ostype property that applies to each of the supported guest OS:

OpenStack Image service OS type settings
vmware_ostype property Retail Name
asianux3_64Guest Asianux Server 3 (64 bit)
asianux3Guest Asianux Server 3
asianux4_64Guest Asianux Server 4 (64 bit)
asianux4Guest Asianux Server 4
darwin64Guest Darwin 64 bit
darwinGuest Darwin
debian4_64Guest Debian GNU/Linux 4 (64 bit)
debian4Guest Debian GNU/Linux 4
debian5_64Guest Debian GNU/Linux 5 (64 bit)
debian5Guest Debian GNU/Linux 5
dosGuest MS-DOS
freebsd64Guest FreeBSD x64
freebsdGuest FreeBSD
mandrivaGuest Mandriva Linux
netware4Guest Novell NetWare 4
netware5Guest Novell NetWare 5.1
netware6Guest Novell NetWare 6.x
nld9Guest Novell Linux Desktop 9
oesGuest Open Enterprise Server
openServer5Guest SCO OpenServer 5
openServer6Guest SCO OpenServer 6
opensuse64Guest openSUSE (64 bit)
opensuseGuest openSUSE
os2Guest OS/2
other24xLinux64Guest Linux 2.4x Kernel (64 bit) (experimental)
other24xLinuxGuest Linux 2.4x Kernel
other26xLinux64Guest Linux 2.6x Kernel (64 bit) (experimental)
other26xLinuxGuest Linux 2.6x Kernel (experimental)
otherGuest Other Operating System
otherGuest64 Other Operating System (64 bit) (experimental)
otherLinux64Guest Linux (64 bit) (experimental)
otherLinuxGuest Other Linux
redhatGuest Red Hat Linux 2.1
rhel2Guest Red Hat Enterprise Linux 2
rhel3_64Guest Red Hat Enterprise Linux 3 (64 bit)
rhel3Guest Red Hat Enterprise Linux 3
rhel4_64Guest Red Hat Enterprise Linux 4 (64 bit)
rhel4Guest Red Hat Enterprise Linux 4
rhel5_64Guest Red Hat Enterprise Linux 5 (64 bit) (experimental)
rhel5Guest Red Hat Enterprise Linux 5
rhel6_64Guest Red Hat Enterprise Linux 6 (64 bit)
rhel6Guest Red Hat Enterprise Linux 6
sjdsGuest Sun Java Desktop System
sles10_64Guest SUSE Linux Enterprise Server 10 (64 bit) (experimental)
sles10Guest SUSE Linux Enterprise Server 10
sles11_64Guest SUSE Linux Enterprise Server 11 (64 bit)
sles11Guest SUSE Linux Enterprise Server 11
sles64Guest SUSE Linux Enterprise Server 9 (64 bit)
slesGuest SUSE Linux Enterprise Server 9
solaris10_64Guest Solaris 10 (64 bit) (experimental)
solaris10Guest Solaris 10 (32 bit) (experimental)
solaris6Guest Solaris 6
solaris7Guest Solaris 7
solaris8Guest Solaris 8
solaris9Guest Solaris 9
suse64Guest SUSE Linux (64 bit)
suseGuest SUSE Linux
turboLinux64Guest Turbolinux (64 bit)
turboLinuxGuest Turbolinux
ubuntu64Guest Ubuntu Linux (64 bit)
ubuntuGuest Ubuntu Linux
unixWare7Guest SCO UnixWare 7
win2000AdvServGuest Windows 2000 Advanced Server
win2000ProGuest Windows 2000 Professional
win2000ServGuest Windows 2000 Server
win31Guest Windows 3.1
win95Guest Windows 95
win98Guest Windows 98
windows7_64Guest Windows 7 (64 bit)
windows7Guest Windows 7
windows7Server64Guest Windows Server 2008 R2 (64 bit)
winLonghorn64Guest Windows Longhorn (64 bit) (experimental)
winLonghornGuest Windows Longhorn (experimental)
winMeGuest Windows Millennium Edition
winNetBusinessGuest Windows Small Business Server 2003
winNetDatacenter64Guest Windows Server 2003, Datacenter Edition (64 bit) (experimental)
winNetDatacenterGuest Windows Server 2003, Datacenter Edition
winNetEnterprise64Guest Windows Server 2003, Enterprise Edition (64 bit)
winNetEnterpriseGuest Windows Server 2003, Enterprise Edition
winNetStandard64Guest Windows Server 2003, Standard Edition (64 bit)
winNetStandardGuest Windows Server 2003, Standard Edition
winNetWebGuest Windows Server 2003, Web Edition
winNTGuest Windows NT 4
winVista64Guest Windows Vista (64 bit)
winVistaGuest Windows Vista
winXPHomeGuest Windows XP Home Edition
winXPPro64Guest Windows XP Professional Edition (64 bit)
winXPProGuest Windows XP Professional
Convert and load images

Using the qemu-img utility, disk images in several formats (such as qcow2) can be converted to the VMDK format.

For example, the following command can be used to convert a qcow2 Ubuntu Trusty cloud image:

$ qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \
  -O vmdk trusty-server-cloudimg-amd64-disk1.vmdk

VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Ubuntu Trusty image after the qemu-img conversion, the command to upload the VMDK disk should be something like:

$ openstack image create \
  --container-format bare --disk-format vmdk \
  --property vmware_disktype="sparse" \
  --property vmware_adaptertype="ide" \
  trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk

Note that the vmware_disktype is set to sparse and the vmware_adaptertype is set to ide in the previous command.

If the image did not come from the qemu-img utility, the vmware_disktype and vmware_adaptertype might be different. To determine the image adapter type from an image file, use the following command and look for the ddb.adapterType= line:

$ head -20 <vmdk file name>
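A VMDK descriptor is plain text, so the relevant line can also be extracted directly. The following sketch writes a minimal stand-in descriptor (the file name and contents are made up for illustration) and pulls the adapter type line out of it:

```shell
# Create a one-line stand-in for a VMDK descriptor file (illustrative only).
printf 'ddb.adapterType = "lsiLogic"\n' > /tmp/descriptor.vmdk
# Extract the adapter type line, as you would from a real descriptor.
grep 'ddb.adapterType' /tmp/descriptor.vmdk
# prints: ddb.adapterType = "lsiLogic"
```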

Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the following command uploads the VMDK disk:

$ openstack image create \
  --disk-format vmdk \
  --container-format bare \
  --property vmware_adaptertype="lsiLogic" \
  --property vmware_disktype="preallocated" \
  --property vmware_ostype="ubuntu64Guest" \
  ubuntu-thick-scsi < ubuntuLTS-flat.vmdk

Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller, and likewise disks with one of the SCSI adapter types (such as busLogic, lsiLogic, lsiLogicsas, paraVirtual) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.

Tag VMware images

In a mixed hypervisor environment, OpenStack Compute uses the hypervisor_type tag to match images to the correct hypervisor type. For VMware images, set the hypervisor type to vmware. Other valid hypervisor types include: hyperv, ironic, lxc, qemu, uml, and xen. Note that qemu is used for both QEMU and KVM hypervisor types.

$ openstack image create \
  --disk-format vmdk \
  --container-format bare \
  --property vmware_adaptertype="lsiLogic" \
  --property vmware_disktype="preallocated" \
  --property hypervisor_type="vmware" \
  --property vmware_ostype="ubuntu64Guest" \
  ubuntu-thick-scsi < ubuntuLTS-flat.vmdk
Optimize images

Monolithic Sparse disks are considerably faster to download but have the overhead of an additional conversion step. When imported into ESX, sparse disks get converted to VMFS flat thin provisioned disks. The download and conversion steps only affect the first launched instance that uses the sparse disk image. The converted disk image is cached, so subsequent instances that use this disk image can simply use the cached version.

To avoid the conversion step (at the cost of longer download times) consider converting sparse disks to thin provisioned or preallocated disks before loading them into the Image service.

Use one of the following tools to pre-convert sparse disks.

vSphere CLI tools

Sometimes called the remote CLI or rCLI.

Assuming that the sparse disk is made available on a data store accessible by an ESX host, the following command converts it to preallocated format:

vmkfstools --server=ip_of_some_ESX_host -i \
  /vmfs/volumes/datastore1/sparse.vmdk \
  /vmfs/volumes/datastore1/converted.vmdk

Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if necessary.

vmkfstools directly on the ESX host

If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX data store through scp, and the vmkfstools utility local to the ESX host can be used to perform the conversion. After you log in to the host through ssh, run this command:

vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk
vmware-vdiskmanager

vmware-vdiskmanager is a utility that comes bundled with VMware Fusion and VMware Workstation. The following example converts a sparse disk to preallocated format:

'/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk

In the previous cases, the converted vmdk is actually a pair of files:

  • The descriptor file converted.vmdk.
  • The actual virtual disk data file converted-flat.vmdk.

The file to be uploaded to the Image service is converted-flat.vmdk.

Image handling

The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP from the Image service to a data store that is visible to the hypervisor. To optimize this process, the first time a VMDK file is used, it gets cached in the data store. A cached image is stored in a folder named after the image ID. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the Image service.

Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared data store. To avoid this copy, boot the image in linked_clone mode. To learn how to enable this mode, see Configuration reference.

Note

You can also use the img_linked_clone property (or legacy property vmware_linked_clone) in the Image service to override the linked_clone mode on a per-image basis.

If you spawn a virtual machine from an ISO image with a VMDK disk, the disk is created blank and attached to the virtual machine. In that case, the img_linked_clone property of the image is ignored.

If multiple compute nodes are running on the same host, or have a shared file system, you can enable them to use the same cache folder on the back-end data store. To configure this action, set the cache_prefix option in the nova.conf file. Its value is the prefix of the folder name under which cached images are stored.
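As a sketch, the following nova.conf fragment sets a shared cache prefix (the value openstack_base is an arbitrary placeholder; in this release the option is read from the DEFAULT section):

```ini
[DEFAULT]
cache_prefix = openstack_base
```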

Note

This can take effect only if compute nodes are running on the same host, or have a shared file system.

You can automatically purge unused images after a specified period of time. To configure this action, set these options in the DEFAULT section in the nova.conf file:

remove_unused_base_images
Set this option to True to specify that unused images should be removed after the duration specified in the remove_unused_original_minimum_age_seconds option. The default is True.
remove_unused_original_minimum_age_seconds
Specifies the duration in seconds after which an unused image is purged from the cache. The default is 86400 (24 hours).
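Taken together, a nova.conf fragment that purges cached images left unused for 12 hours might look like the following (43200 is an example value, not the default):

```ini
[DEFAULT]
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 43200
```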
Networking with VMware vSphere

The VMware driver supports networking with the nova-network service or the Networking Service. Depending on your installation, complete these configuration steps before you provision VMs:

  1. The nova-network service with the FlatManager or FlatDHCPManager. Create a port group with the same name as the flat_network_bridge value in the nova.conf file. The default value is br100. If you specify another value, the new value must be a valid Linux bridge identifier that adheres to Linux bridge naming conventions.

    All VM NICs are attached to this port group.

    Ensure that the flat interface of the node that runs the nova-network service has a path to this network.

    Note

    When configuring the port binding for this port group in vCenter, specify ephemeral for the port binding type. For more information, see Choosing a port binding type in ESX/ESXi in the VMware Knowledge Base.

  2. The nova-network service with the VlanManager. Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic.

    OpenStack Compute automatically creates the corresponding port groups.

  3. If you are using the OpenStack Networking Service: Before provisioning VMs, create a port group with the same name as the vmware.integration_bridge value in nova.conf (default is br-int). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in.

Volumes with VMware vSphere

The VMware driver supports attaching volumes from the Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes based on vSphere data stores. For more information about the VMware VMDK driver, see VMware VMDK driver. An iSCSI volume driver also provides limited support; it can be used only for attaching volumes.

Configuration reference

To customize the VMware driver, use the configuration option settings documented in Description of VMware configuration options.

Hyper-V virtualization platform

It is possible to use Hyper-V as a compute node within an OpenStack deployment. The nova-compute service runs as openstack-compute, a 32-bit service directly on the Windows platform with the Hyper-V role enabled. The necessary Python components, as well as the nova-compute service, are installed directly onto the Windows platform. Windows Clustering Services are not needed for functionality within the OpenStack infrastructure. The Windows Server 2012 platform is recommended for the best experience and is the platform for active development. The following Windows platforms have been tested as compute nodes:

Windows Server 2012 and Windows Server 2012 R2
Server and Core (with the Hyper-V role enabled), and Hyper-V Server
Hyper-V configuration

The only OpenStack services required on a Hyper-V node are nova-compute and neutron-hyperv-agent. When sizing this host, note that Hyper-V requires 16 GB - 20 GB of disk space for the OS itself, including updates. Two NICs are required: one connected to the management network and one to the guest data network.

The following sections discuss how to prepare the Windows Hyper-V node for operation as an OpenStack compute node. Unless stated otherwise, any configuration information should work for the Windows 2012 and 2012 R2 platforms.

Local storage considerations

The Hyper-V compute node needs ample storage for the virtual machine images of the instances it runs. You may use a single volume for everything, or partition it into an OS volume and a VM volume.

Configure NTP

Network time services must be configured to ensure proper operation of the OpenStack nodes. To set network time on your Windows host, run the following commands:

C:\>net stop w32time
C:\>w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
C:\>net start w32time

Keep in mind that the node must be time-synchronized with the other nodes of your OpenStack environment, so it is important to use the same NTP server. Note that in an Active Directory environment, you may need to do this only on the AD Domain Controller.

Configure Hyper-V virtual switching

Information regarding the Hyper-V virtual Switch can be located here: http://technet.microsoft.com/en-us/library/hh831823.aspx

To quickly enable an interface to be used as a virtual interface, the following PowerShell commands may be used:

PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false

Note

It is very important to make sure that, when you are using a Hyper-V node with only one NIC, the -AllowManagementOS option is set to True; otherwise, you will lose connectivity to the Hyper-V node.

Enable iSCSI initiator service

To prepare the Hyper-V node to attach volumes provided by cinder, you must first make sure the Windows iSCSI initiator service is running and set to start automatically.

PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
PS C:\> Start-Service MSiSCSI
Configure shared nothing live migration

Detailed information on the configuration of live migration can be found here: http://technet.microsoft.com/en-us/library/jj134199.aspx

The following outlines the steps of shared nothing live migration.

  1. The target host ensures that live migration is enabled and properly configured in Hyper-V.
  2. The target host checks if the image to be migrated requires a base VHD and pulls it from the Image service if not already available on the target host.
  3. The source host ensures that live migration is enabled and properly configured in Hyper-V.
  4. The source host initiates a Hyper-V live migration.
  5. The source host communicates to the manager the outcome of the operation.

The following three configuration options are needed in order to support Hyper-V live migration and must be added to your nova.conf on the Hyper-V compute node:

  • This is needed to support shared nothing Hyper-V live migrations. It is used in nova/compute/manager.py.

    instances_shared_storage = False
    
  • This flag is needed to support live migration to hosts with different CPU features. This flag is checked during instance creation in order to limit the CPU features used by the VM.

    limit_cpu_features = True
    
  • This option is used to specify where instances are stored on disk.

    instances_path = DRIVELETTER:\PATH\TO\YOUR\INSTANCES
    

Additional Requirements:

  • Hyper-V 2012 R2 or Windows Server 2012 R2 with Hyper-V role enabled
  • A Windows domain controller with the Hyper-V compute nodes as domain members
  • The instances_path command-line option/flag needs to be the same on all hosts
  • The openstack-compute service deployed with the setup must run with domain credentials. You can set the service credentials with:
C:\>sc config openstack-compute obj="DOMAIN\username" password="password"
How to setup live migration on Hyper-V

To enable shared nothing live migration, run the following three PowerShell commands on each Hyper-V host:

PS C:\> Enable-VMMigration
PS C:\> Set-VMMigrationNetwork IP_ADDRESS
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationTypeKerberos

Note

Replace IP_ADDRESS with the address of the interface that will provide live migration.

Additional Reading

This article clarifies the various live migration options in Hyper-V:

http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html

Install nova-compute using OpenStack Hyper-V installer

In case you want to avoid all the manual setup, you can use Cloudbase Solutions’ installer. You can find it here:

https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi

The tool installs an independent Python environment to avoid conflicts with existing applications, and dynamically generates a nova.conf file based on the parameters you provide.

The tool can also be used in an automated and unattended mode for deployments across a large number of servers. More details about how to use the installer and its features can be found here:

https://www.cloudbase.it

Requirements
Python

The 32-bit version of Python 2.7 must be installed, as most of the required libraries do not work properly on the 64-bit version.

Setting up Python prerequisites

  1. Download and install Python 2.7 using the MSI installer from here:

    http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi

    PS C:\> $src = "http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
    PS C:\> $dest = "$env:temp\python-2.7.3.msi"
    PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
    PS C:\> Unblock-File $dest
    PS C:\> Start-Process $dest
    
  2. Make sure that the Python and Python\Scripts paths are set up in the PATH environment variable.

    PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path")
    PS C:\> $newPath = $oldPath + ";C:\python27\;C:\python27\Scripts\"
    PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User)
    
Python dependencies

The following packages need to be downloaded and manually installed:

setuptools
http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe
pip
https://pip.pypa.io/en/latest/installing/
PyMySQL
http://codegood.com/download/10/
PyWin32
http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe
Greenlet
http://www.lfd.uci.edu/~gohlke/pythonlibs/#greenlet
PyCrypto
http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe

The following packages must be installed with pip:

  • ecdsa
  • amqp
  • wmi
PS C:\> pip install ecdsa
PS C:\> pip install amqp
PS C:\> pip install wmi
Other dependencies

qemu-img is required for some of the image related operations. You can get it from here: http://qemu.weilnetz.de/. You must make sure that the qemu-img path is set in the PATH environment variable.

Some Python packages need to be compiled, so you may use MinGW or Visual Studio. You can get MinGW from here: http://sourceforge.net/projects/mingw/. You must configure which compiler is to be used for this purpose by using the distutils.cfg file in $Python27\Lib\distutils, which can contain:

[build]
compiler = mingw32

As a last step for setting up MinGW, make sure that the MinGW binaries’ directories are set up in PATH.

Install nova-compute
Download the nova code
  1. Use Git to download the necessary source code. The installer to run Git on Windows can be downloaded here:

    https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe

  2. Download the installer. Once the download is complete, run it and follow the prompts in the installation wizard. The defaults should be acceptable for the purposes of this guide.

    PS C:\> $src = "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe"
    PS C:\> $dest = "$env:temp\Git-1.9.2-preview20140411.exe"
    PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
    PS C:\> Unblock-File $dest
    PS C:\> Start-Process $dest
    
  3. Run the following to clone the nova code.

    PS C:\> git.exe clone https://git.openstack.org/openstack/nova
    
Install nova-compute service

To install nova-compute, run:

PS C:\> cd c:\nova
PS C:\> python setup.py install
Configure nova-compute

The nova.conf file must be placed in C:\etc\nova for running OpenStack on Hyper-V. Below is a sample nova.conf for Windows:

[DEFAULT]
auth_strategy = keystone
image_service = nova.image.glance.GlanceImageService
compute_driver = nova.virt.hyperv.driver.HyperVDriver
volume_api_class = nova.volume.cinder.API
fake_network = true
instances_path = C:\Program Files (x86)\OpenStack\Instances
glance_api_servers = IP_ADDRESS:9292
use_cow_images = true
force_config_drive = false
injected_network_template = C:\Program Files (x86)\OpenStack\Nova\etc\interfaces.template
policy_file = C:\Program Files (x86)\OpenStack\Nova\etc\policy.json
mkisofs_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\mkisofs.exe
allow_resize_to_same_host = true
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 120
resize_confirm_window = 5
resume_guests_state_on_host_boot = true
rpc_response_timeout = 1800
lock_path = C:\Program Files (x86)\OpenStack\Log\
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = Passw0rd
logdir = C:\Program Files (x86)\OpenStack\Log\
logfile = nova-compute.log
instance_usage_audit = true
instance_usage_audit_period = hour
use_neutron = True
[neutron]
url = http://IP_ADDRESS:9696
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = Passw0rd
admin_auth_url = http://IP_ADDRESS:35357/v2.0
[hyperv]
vswitch_name = newVSwitch0
limit_cpu_features = false
config_drive_inject_password = false
qemu_img_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom = true
dynamic_memory_ratio = 1
enable_instance_metrics_collection = true
[rdp]
enabled = true
html5_proxy_base_url = https://IP_ADDRESS:4430

The table Description of HyperV configuration options contains a reference of all options for hyper-v.

Prepare images for use with Hyper-V

Hyper-V currently supports only the VHD and VHDX file format for virtual machine instances. Detailed instructions for installing virtual machines on Hyper-V can be found here:

http://technet.microsoft.com/en-us/library/cc772480.aspx

Once you have successfully created a virtual machine, you can then upload the image to glance using the native glance-client:

PS C:\> glance image-create --name "VM_IMAGE_NAME" --is-public False `
          --container-format bare --disk-format vhd

Note

The size of a VHD or VHDX file can be bigger than its maximum internal size; as a result, you need to boot instances using a flavor with a slightly bigger disk size than the internal size of the disk file. To create VHDs, use the following PowerShell cmdlet:

PS C:\> New-VHD DISK_NAME.vhd -SizeBytes VHD_SIZE
Inject interfaces and routes

The interfaces.template file describes the network interfaces and routes available on your system and how to activate them. You can specify the location of the file with the injected_network_template configuration option in /etc/nova/nova.conf.

injected_network_template = PATH_TO_FILE

A default template exists in nova/virt/interfaces.template.
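As a rough sketch of the format, a Debian-style template along the lines of the default one looks like this (the exact variable names available to the template may differ between releases):

```
# Injected by Nova on instance boot
auto lo
iface lo inet loopback

{% for ifc in interfaces %}
auto {{ ifc.name }}
iface {{ ifc.name }} inet static
    address {{ ifc.address }}
    netmask {{ ifc.netmask }}
    gateway {{ ifc.gateway }}
{% endfor %}
```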

Run Compute with Hyper-V

To start the nova-compute service, run this command from a console in the Windows server:

PS C:\> C:\Python27\python.exe c:\Python27\Scripts\nova-compute --config-file c:\etc\nova\nova.conf
Troubleshoot Hyper-V configuration
  • I ran the nova-manage service list command from my controller; however, I’m not seeing smiley faces for Hyper-V compute nodes. What do I do?

    Verify that you are synchronized with a network time source. For instructions about how to configure NTP on your Hyper-V compute node, see Configure NTP.

  • How do I restart the compute service?

    PS C:\> net stop nova-compute && net start nova-compute
    
  • How do I restart the iSCSI initiator service?

    PS C:\> net stop msiscsi && net start msiscsi
    
Virtuozzo

Virtuozzo, or its community edition OpenVZ, provides both types of virtualization: Kernel Virtual Machines and OS Containers. The type of instance to spawn is chosen based on the hw_vm_type property of an image.

Note

Some OpenStack Compute features may be missing when running with Virtuozzo as the hypervisor. See the hypervisor support matrix for details.

To enable Virtuozzo Containers, set the following options in /etc/nova/nova.conf on all hosts running the nova-compute service.

compute_driver = libvirt.LibvirtDriver
force_raw_images = False

[libvirt]
virt_type = parallels
images_type = ploop
connection_uri = parallels:///system
inject_partition = -2

To enable Virtuozzo Virtual Machines, set the following options in /etc/nova/nova.conf on all hosts running the nova-compute service.

compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = parallels
images_type = qcow2
connection_uri = parallels:///system

OpenStack Compute supports many hypervisors, which might make it difficult for you to choose one. Most installations use only one hypervisor. However, you can use ComputeFilter and ImagePropertiesFilter to schedule different hypervisors within the same installation. The following links help you choose a hypervisor. See http://docs.openstack.org/developer/nova/support-matrix.html for a detailed list of features and support across the hypervisors.

The following hypervisors are supported:

  • KVM - Kernel-based Virtual Machine. The virtual disk formats that it supports are inherited from QEMU, since it uses a modified QEMU program to launch the virtual machine. The supported formats include raw images, qcow2, and VMware formats.
  • LXC - Linux Containers (through libvirt), used to run Linux-based virtual machines.
  • QEMU - Quick EMUlator, generally only used for development purposes.
  • UML - User Mode Linux, generally only used for development purposes.
  • VMware vSphere 5.1.0 and newer, runs VMware-based Linux and Windows images through a connection with a vCenter server.
  • Xen (using libvirt) - Xen Project Hypervisor using libvirt as management interface into nova-compute to run Linux, Windows, FreeBSD and NetBSD virtual machines.
  • XenServer - XenServer, Xen Cloud Platform (XCP), and other XAPI-based Xen variants run Linux or Windows virtual machines. You must install the nova-compute service in a para-virtualized VM.
  • Hyper-V - Server virtualization with Microsoft Hyper-V, used to run Windows, Linux, and FreeBSD virtual machines. Runs nova-compute natively on the Windows virtualization platform.
  • Virtuozzo - OS Containers and Kernel-based Virtual Machines supported via libvirt virt_type=parallels. The supported formats include ploop and qcow2 images.

Compute schedulers

Compute uses the nova-scheduler service to determine how to dispatch compute requests. For example, the nova-scheduler service determines on which host a VM should launch. In the context of filters, the term host means a physical node that has a nova-compute service running on it. You can configure the scheduler through a variety of options.

Compute is configured with the following default scheduler options in the /etc/nova/nova.conf file:

scheduler_driver_task_period = 60
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter

By default, the scheduler_driver is configured as a filter scheduler, as described in the next section. In the default configuration, this scheduler considers hosts that meet all the following criteria:

  • Have not been attempted for scheduling purposes (RetryFilter).
  • Are in the requested availability zone (AvailabilityZoneFilter).
  • Have sufficient RAM available (RamFilter).
  • Have sufficient disk space available for root and ephemeral storage (DiskFilter).
  • Can service the request (ComputeFilter).
  • Satisfy the extra specs associated with the instance type (ComputeCapabilitiesFilter).
  • Satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance’s image properties (ImagePropertiesFilter).
  • Are on a different host than other instances of a group (if requested) (ServerGroupAntiAffinityFilter).
  • Are in a set of group hosts (if requested) (ServerGroupAffinityFilter).

The scheduler caches its list of available hosts; use the scheduler_driver_task_period option to specify how often the list is updated.

Note

Do not configure service_down_time to be much smaller than scheduler_driver_task_period; otherwise, hosts appear to be dead while the host list is being cached.
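For example, the following nova.conf fragment keeps the two options consistent; the service_down_time value shown is illustrative, not a recommended default:

```ini
[DEFAULT]
# Refresh the cached host list every 60 seconds (the default).
scheduler_driver_task_period = 60
# Keep service_down_time well above the cache refresh period so that
# hosts are not reported as dead while the list is being refreshed.
service_down_time = 180
```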

For information about the volume scheduler, see the Block Storage section of OpenStack Administrator Guide.

The scheduler chooses a new host when an instance is migrated.

When evacuating instances from a host, the scheduler service honors the target host defined by the administrator on the nova evacuate command. If a target is not defined by the administrator, the scheduler determines the target host. For information about instance evacuation, see Evacuate instances section of the OpenStack Administrator Guide.

Filter scheduler

The filter scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is the default scheduler for scheduling virtual machine instances. It supports filtering and weighting to make informed decisions on where a new instance should be created.

When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the Weights section.
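The two-stage flow described above can be sketched as follows. This is an illustrative standalone model, not nova's implementation; the host data and the filter and weigher names are hypothetical:

```python
def schedule(hosts, filters, weighers):
    """Apply binary filters, then rank the surviving hosts by weight."""
    # Filtering is binary: a host either passes every filter or is dropped.
    eligible = [h for h in hosts if all(f(h) for f in filters)]
    # Each surviving host is scored; the highest total weight wins.
    return sorted(eligible,
                  key=lambda h: sum(w(h) for w in weighers),
                  reverse=True)

hosts = [
    {"name": "node1", "free_ram_mb": 2048, "enabled": True},
    {"name": "node2", "free_ram_mb": 8192, "enabled": True},
    {"name": "node3", "free_ram_mb": 16384, "enabled": False},
]
enabled_filter = lambda h: h["enabled"]   # plays the role of ComputeFilter
ram_weigher = lambda h: h["free_ram_mb"]  # plays the role of a RAM weigher

ranked = schedule(hosts, [enabled_filter], [ram_weigher])
```

Here node3 is rejected by the filter despite having the most free RAM, and node2 wins the weighting among the remaining hosts.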

Filtering

_images/filteringWorkflow1.png

The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters that are available to the scheduler. The default setting specifies all of the filters that are included with the Compute service:

scheduler_available_filters = nova.scheduler.filters.all_filters

This configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called myfilter.MyFilter and you wanted to use both the built-in filters and your custom filter, your nova.conf file would contain:

scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = myfilter.MyFilter
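For reference, a minimal sketch of what myfilter.MyFilter might look like. The BaseHostFilter stand-in, the host_passes signature, and the filtering rule here are assumptions modeled on nova's in-tree filters; a real filter would subclass nova.scheduler.filters.BaseHostFilter instead of defining its own base class:

```python
class BaseHostFilter(object):
    """Stand-in for nova.scheduler.filters.BaseHostFilter (illustrative)."""

    def host_passes(self, host_state, filter_properties):
        raise NotImplementedError()


class MyFilter(BaseHostFilter):
    """Reject hosts with fewer than 4 free vCPUs (an illustrative rule)."""

    def host_passes(self, host_state, filter_properties):
        # Return True to keep the host, False to filter it out.
        free_vcpus = (host_state.get("vcpus_total", 0)
                      - host_state.get("vcpus_used", 0))
        return free_vcpus >= 4
```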

The scheduler_default_filters configuration option in nova.conf defines the list of filters that are applied by the nova-scheduler service. The default filters are:

scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
Compute filters

The following sections describe the available compute filters.

AggregateCoreFilter

Filters host by CPU core numbers with a per-aggregate cpu_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also CoreFilter.

AggregateDiskFilter

Filters host by disk allocation with a per-aggregate disk_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also DiskFilter.

AggregateImagePropertiesIsolation

Matches properties defined in an image’s metadata against those of aggregates to determine host matches:

  • If a host belongs to an aggregate and the aggregate defines one or more metadata that matches an image’s properties, that host is a candidate to boot the image’s instance.
  • If a host does not belong to any aggregate, it can boot instances from all images.

For example, the following aggregate MyWinAgg has the Windows operating system as metadata (named 'windows'):

$ nova aggregate-details MyWinAgg
+----+----------+-------------------+------------+---------------+
| Id | Name     | Availability Zone | Hosts      | Metadata      |
+----+----------+-------------------+------------+---------------+
| 1  | MyWinAgg | None              | 'sf-devel' | 'os=windows'  |
+----+----------+-------------------+------------+---------------+

In this example, because the following Win-2012 image has the os property set to windows, it boots on the sf-devel host (all other filters being equal):

$ glance image-show Win-2012
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| Property 'os'    | windows                              |
| checksum         | f8a2eeee2dc65b3d9b6e63678955bd83     |
| container_format | ami                                  |
| created_at       | 2013-11-14T13:24:25                  |
| ...

You can configure the AggregateImagePropertiesIsolation filter by using the following options in the nova.conf file:

# Considers only keys matching the given namespace (string).
# Multiple values can be given, as a comma-separated list.
aggregate_image_properties_isolation_namespace = <None>

# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator = .
AggregateInstanceExtraSpecsFilter

Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with aggregate_instance_extra_specs. Multiple values can be given, as a comma-separated list. For backward compatibility, also works with non-scoped specifications; this action is highly discouraged because it conflicts with ComputeCapabilitiesFilter filter when you enable both filters. For information about how to use this filter, see the Host aggregates and availability zones section.

AggregateIoOpsFilter

Filters host by disk allocation with a per-aggregate max_io_ops_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also IoOpsFilter.

AggregateMultiTenancyIsolation

Ensures that the tenant (or list of tenants) creates all instances only on specific Host aggregates and availability zones. If a host is in an aggregate that has the filter_tenant_id metadata key, the host creates instances from only that tenant or list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can create instances from all tenants. This setting does not isolate the aggregate from other tenants. Any other tenant can continue to build instances on the specified aggregate.

AggregateNumInstancesFilter

Filters host by number of instances with a per-aggregate max_instances_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also NumInstancesFilter.

AggregateRamFilter

Filters host by RAM allocation of instances with a per-aggregate ram_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also RamFilter.

AggregateTypeAffinityFilter

This filter passes hosts if no instance_type key is set or the instance_type aggregate metadata value contains the name of the instance_type requested. The value of the instance_type metadata entry is a string that may contain either a single instance_type name or a comma-separated list of instance_type names, such as m1.nano or m1.nano,m1.small. For information about how to use this filter, see Host aggregates and availability zones. See also TypeAffinityFilter.

AllHostsFilter

This is a no-op filter. It does not eliminate any of the available hosts.

AvailabilityZoneFilter

Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests.

ComputeCapabilitiesFilter

Matches properties defined in extra specs for an instance type against compute capabilities. If an extra specs key contains a colon (:), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not capabilities, the filter ignores the namespace. For backward compatibility, also treats the extra specs key as the key to be matched if no namespace is present; this action is highly discouraged because it conflicts with AggregateInstanceExtraSpecsFilter filter when you enable both filters.

ComputeFilter

Passes all hosts that are operational and enabled.

In general, you should always enable this filter.

CoreFilter

Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set, the scheduler might over-provision a host based on cores. For example, the virtual cores running on an instance may exceed the physical cores.

You can configure this filter to enable a fixed amount of vCPU overcommitment by using the cpu_allocation_ratio configuration option in nova.conf. The default setting is:

cpu_allocation_ratio = 16.0

With this setting, if 8 vCPUs are on a node, the scheduler allows instances up to 128 vCPU to be run on that node.

To disallow vCPU overcommitment set:

cpu_allocation_ratio = 1.0
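The capacity arithmetic described above can be sketched with a hypothetical helper (not part of nova):

```python
import math

def max_vcpus(physical_cores, cpu_allocation_ratio):
    """vCPU capacity the scheduler considers available on a node."""
    return math.floor(physical_cores * cpu_allocation_ratio)
```

With the default ratio of 16.0, an 8-core node is treated as having 128 schedulable vCPUs; with a ratio of 1.0, capacity equals the physical core count.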

Note

The Compute API always returns the actual number of CPU cores available on a compute node regardless of the value of the cpu_allocation_ratio configuration key. As a result changes to the cpu_allocation_ratio are not reflected via the command line clients or the dashboard. Changes to this configuration key are only taken into account internally in the scheduler.

DifferentHostFilter

Schedules the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using different_host as the key and a list of instance UUIDs as the value. This filter is the opposite of the SameHostFilter. Using the nova command-line client, use the --hint flag. For example:

$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1

With the API, use the os:scheduler_hints key. For example:

{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "different_host": [
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}
DiskFilter

Only schedules instances on hosts if there is sufficient disk space available for root and ephemeral storage.

You can configure this filter to enable a fixed amount of disk overcommitment by using the disk_allocation_ratio configuration option in the nova.conf configuration file. The default setting disables overcommitment, so a VM launches only if there is a sufficient amount of disk space available on a host:

disk_allocation_ratio = 1.0

DiskFilter always considers the value of the disk_available_least property and not the one of the free_disk_gb property of a hypervisor’s statistics:

$ nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                |  1    |
| current_workload     |  0    |
| disk_available_least |  29   |
| free_disk_gb         |  35   |
| free_ram_mb          |  3441 |
| local_gb             |  35   |
| local_gb_used        |  0    |
| memory_mb            |  3953 |
| memory_mb_used       |  512  |
| running_vms          |  0    |
| vcpus                |  2    |
| vcpus_used           |  0    |
+----------------------+-------+

As the command output above shows, the amount of available disk space can be less than the amount of free disk space. This happens because the disk_available_least property accounts for the virtual size rather than the actual size of images. If you use an image format that is sparse or copy-on-write, so that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage, it may be useful to allow the overcommitment of disk space.

To enable scheduling instances while overcommitting disk resources on the node, adjust the value of the disk_allocation_ratio configuration option to greater than 1.0:

disk_allocation_ratio > 1.0
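Under these assumptions, a simplified model (not DiskFilter's actual code) of the capacity the scheduler works from is:

```python
def schedulable_disk_gb(disk_available_least_gb, disk_allocation_ratio):
    """Disk capacity (GB) the filter treats as schedulable.

    DiskFilter works from disk_available_least, not free_disk_gb.
    """
    return disk_available_least_gb * disk_allocation_ratio
```

With the 29 GB of disk_available_least shown in the hypervisor-stats output above and a ratio of 2.0, the scheduler would consider 58 GB schedulable.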

Note

If the value is set to greater than 1, we recommend keeping track of the free disk space, because instances that use the disk may malfunction as the free space approaches 0.

ExactCoreFilter

Only schedules instances on hosts if the host has the exact number of CPU cores.

ExactDiskFilter

Only schedules instances on hosts if the host has the exact amount of disk space available.

ExactRamFilter

Only schedules instances on hosts if the host has the exact amount of RAM available.

GroupAffinityFilter

Note

This filter is deprecated in favor of ServerGroupAffinityFilter.

The GroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line client, use the --hint flag. For example:

$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1

This filter should not be enabled at the same time as GroupAntiAffinityFilter or neither filter will work properly.

GroupAntiAffinityFilter

Note

This filter is deprecated in favor of ServerGroupAntiAffinityFilter.

The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line client, use the --hint flag. For example:

$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1

This filter should not be enabled at the same time as GroupAffinityFilter or neither filter will work properly.

ImagePropertiesFilter

Filters hosts based on properties defined on the instance’s image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, hypervisor version (for Xen hypervisor type only), and virtual machine mode.

For example, an instance might require a host that runs an ARM-based processor, and QEMU as the hypervisor. You can decorate an image with these properties by using:

$ glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu

The image properties that the filter checks for are:

architecture
describes the machine architecture required by the image. Examples are i686, x86_64, arm, and ppc64.
hypervisor_type

describes the hypervisor required by the image. Examples are xen, qemu, and xenapi.

Note

qemu is used for both QEMU and KVM hypervisor types.

hypervisor_version_requires

describes the hypervisor version required by the image. The property is supported for Xen hypervisor type only. It can be used to enable support for multiple hypervisor versions, and to prevent instances with newer Xen tools from being provisioned on an older version of a hypervisor. If available, the property value is compared to the hypervisor version of the compute host.

To filter the hosts by the hypervisor version, add the hypervisor_version_requires property on the image as metadata and pass an operator and a required hypervisor version as its value:

$ glance image-update img-uuid --property hypervisor_type=xen --property hypervisor_version_requires=">=4.3"
vm_mode
describes the hypervisor application binary interface (ABI) required by the image. Examples are xen for Xen 3.0 paravirtual ABI, hvm for native ABI, uml for User Mode Linux paravirtual ABI, exe for container virt executable ABI.
IsolatedHostsFilter

Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The flag restrict_isolated_hosts_to_isolated_images can be used to force isolated hosts to only run isolated images.

The admin must specify the isolated set of images and hosts in the nova.conf file using the isolated_hosts and isolated_images configuration options. For example:

isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
IoOpsFilter

The IoOpsFilter filters hosts by concurrent I/O operations on it. Hosts with too many concurrent I/O operations will be filtered out. The max_io_ops_per_host option specifies the maximum number of I/O intensive instances allowed to run on a host. A host will be ignored by the scheduler if more than max_io_ops_per_host instances in build, resize, snapshot, migrate, rescue or unshelve task states are running on it.

JsonFilter

The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:

  • =
  • <
  • >
  • in
  • <=
  • >=
  • not
  • or
  • and

The filter supports the following variables:

  • $free_ram_mb
  • $free_disk_mb
  • $total_usable_ram_mb
  • $vcpus_total
  • $vcpus_used

Using the nova command-line client, use the --hint flag:

$ nova boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 \
  --flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1

With the API, use the os:scheduler_hints key:

{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "query": "[\">=\",\"$free_ram_mb\",1024]"
    }
}
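A simplified, illustrative evaluator for queries of this shape follows. It covers only a subset of the operators listed above and is not nova's implementation:

```python
import operator

# Subset of JsonFilter operators (the "in" operator is omitted here).
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def evaluate(query, host):
    """Evaluate a ["op", arg, ...] query against a host-state dict."""
    op = query[0]
    if op == "and":
        return all(evaluate(q, host) for q in query[1:])
    if op == "or":
        return any(evaluate(q, host) for q in query[1:])
    if op == "not":
        return not evaluate(query[1], host)

    def resolve(arg):
        # Variables such as "$free_ram_mb" resolve against host state.
        if isinstance(arg, str) and arg.startswith("$"):
            return host[arg[1:]]
        return arg

    return OPS[op](resolve(query[1]), resolve(query[2]))

host = {"free_ram_mb": 2048, "free_disk_mb": 10240}
```

The hint from the example above, `[">=", "$free_ram_mb", 1024]`, accepts this host because it has 2048 MB of free RAM.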
MetricsFilter

Filters hosts based on the meters configured by weight_setting. Only hosts with the required meters available are passed, so that the metrics weigher does not fail because of hosts with missing meters.

NUMATopologyFilter

Filters hosts based on the NUMA topology that was specified for the instance through the use of flavor extra_specs in combination with the image properties, as described in detail in the related nova-spec document. The filter tries to match the exact NUMA cells of the instance to those of the host. It considers the standard over-subscription limits for each host NUMA cell, and provides limits to the compute host accordingly.

Note

If an instance has no topology defined, it is considered for any host. If an instance has a topology defined, it is considered only for NUMA-capable hosts.

NumInstancesFilter

Hosts that have more instances running than specified by the max_instances_per_host option are filtered out when this filter is in place.

PciPassthroughFilter

The filter schedules instances on a host if the host has devices that meet the device requests in the extra_specs attribute for the flavor.

RamFilter

Only schedules instances on hosts that have sufficient RAM available. If this filter is not set, the scheduler may over provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).

You can configure this filter to enable a fixed amount of RAM overcommitment by using the ram_allocation_ratio configuration option in nova.conf. The default setting is:

ram_allocation_ratio = 1.5

This setting enables 1.5 GB instances to run on any compute node with 1 GB of free RAM.

RetryFilter

Filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request.

This filter is only useful if the scheduler_max_attempts configuration option is set to a value greater than zero.

SameHostFilter

Schedules the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using same_host as the key and a list of instance UUIDs as the value. This filter is the opposite of the DifferentHostFilter. Using the nova command-line client, use the --hint flag:

$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1

With the API, use the os:scheduler_hints key:

{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "same_host": [
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}
ServerGroupAffinityFilter

The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must create a server group with an affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line tool, use the --hint flag. For example:

$ nova server-group-create --policy affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1
ServerGroupAntiAffinityFilter

The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must create a server group with an anti-affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line client, use the --hint flag. For example:

$ nova server-group-create --policy anti-affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1
SimpleCIDRAffinityFilter

Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP address in CIDR format, by passing two scheduler hints:

build_near_host_ip
The first IP address in the subnet (for example, 192.168.1.1)
cidr
The CIDR that corresponds to the subnet (for example, /24)

Using the nova command-line client, use the --hint flag. For example, to specify the IP subnet 192.168.1.1/24:

$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1

With the API, use the os:scheduler_hints key:

{
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1"
    },
    "os:scheduler_hints": {
        "build_near_host_ip": "192.168.1.1",
        "cidr": "24"
    }
}
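The subnet test this filter performs can be sketched with Python's standard ipaddress module; this is an illustrative model, not nova's code:

```python
import ipaddress

def host_in_subnet(host_ip, build_near_host_ip, cidr):
    """Check whether a host's IP falls in the hinted subnet."""
    # Combine the two hints into one CIDR expression, e.g. 192.168.1.1/24;
    # strict=False lets the base address carry host bits.
    net = ipaddress.ip_network(build_near_host_ip + cidr, strict=False)
    return ipaddress.ip_address(host_ip) in net
```

For the hints in the example above, a host at 192.168.1.42 would pass and a host at 192.168.2.5 would not.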
TrustedFilter

Filters hosts based on their trust. Only passes hosts that meet the trust requirements specified in the instance properties.

TypeAffinityFilter

Dynamically limits hosts to one instance type. An instance can only be launched on a host if no instances with a different instance type are running on it, or if the host has no running instances at all.

Cell filters

The following sections describe the available cell filters.

DifferentCellFilter

Schedules the instance on a different cell from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using different_cell as the key and a list of instance UUIDs as the value.

ImagePropertiesFilter

Filters cells based on properties defined on the instance's image. This filter works by specifying the required hypervisor in the image metadata and the supported hypervisor version in cell capabilities.

TargetCellFilter

Filters target cells. This filter works by specifying a scheduler hint of target_cell. The value should be the full cell path.

Weights

When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This behavior is useful when a customer asks for a large number of instances, because a weight is computed for each requested instance.

All weights are normalized before being summed up; the host with the largest weight is given the highest priority.
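This normalize-multiply-sum scheme can be sketched as follows; the per-weigher rescaling to [0.0, 1.0] and the (multiplier, function) pairing are assumptions based on the description above, not nova's actual weigher classes:

```python
def normalize(weights):
    """Rescale one weigher's raw weights to the [0.0, 1.0] range."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return [0.0 for _ in weights]
    return [(w - lo) / (hi - lo) for w in weights]

def total_weights(hosts, weighers):
    """Sum normalized weights; weighers is a list of (multiplier, fn)."""
    totals = [0.0] * len(hosts)
    for mult, fn in weighers:
        for i, n in enumerate(normalize([fn(h) for h in hosts])):
            totals[i] += mult * n
    return totals
```

The host with the largest total is given the highest priority.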

Weighting hosts

_images/nova-weighting-hosts.png

If cells are used, cells are weighted by the scheduler in the same manner as hosts.

Hosts and cells are weighted based on the following options in the /etc/nova/nova.conf file:

Host weighting options
Section Option Description
[DEFAULT] ram_weight_multiplier By default, the scheduler spreads instances across all hosts evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.
[DEFAULT] scheduler_host_subset_size New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value.
[DEFAULT] scheduler_weight_classes Defaults to nova.scheduler.weights.all_weighers. Hosts are then weighted and sorted with the largest weight winning.
[DEFAULT] io_ops_weight_multiplier Multiplier used for weighing host I/O operations. A negative value means a preference to choose light workload compute hosts.
[DEFAULT] soft_affinity_weight_multiplier Multiplier used for weighing hosts for group soft-affinity. Only a positive value is meaningful. Negative means that the behavior will change to the opposite, which is soft-anti-affinity.
[DEFAULT] soft_anti_affinity_weight_multiplier Multiplier used for weighing hosts for group soft-anti-affinity. Only a positive value is meaningful. Negative means that the behavior will change to the opposite, which is soft-affinity.
[metrics] weight_multiplier Multiplier for weighting meters. Use a floating-point value.
[metrics] weight_setting Determines how meters are weighted. Use a comma-separated list of metricName=ratio. For example: name1=1.0, name2=-1.0 results in: name1.value * 1.0 + name2.value * -1.0
[metrics] required

Specifies how to treat unavailable meters:

  • True - Raises an exception. To avoid the exception, use the MetricsFilter scheduler filter to filter out hosts with unavailable meters.
  • False - Treated as a negative factor in the weighting process (uses the weight_of_unavailable option).
[metrics] weight_of_unavailable If required is set to False, and any one of the meters set by weight_setting is unavailable, the weight_of_unavailable value is returned to the scheduler.

For example:

[DEFAULT]
scheduler_host_subset_size = 1
scheduler_weight_classes = nova.scheduler.weights.all_weighers
ram_weight_multiplier = 1.0
io_ops_weight_multiplier = 2.0
soft_affinity_weight_multiplier = 1.0
soft_anti_affinity_weight_multiplier = 1.0
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.0
required = false
weight_of_unavailable = -10000.0
Cell weighting options
Section Option Description
[cells] mute_weight_multiplier Multiplier to weight mute children (hosts which have not sent capacity or capability updates for some time). Use a negative floating-point value.
[cells] offset_weight_multiplier Multiplier to weight cells, so you can specify a preferred cell. Use a floating point value.
[cells] ram_weight_multiplier By default, the scheduler spreads instances across all cells evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.
[cells] scheduler_weight_classes Defaults to nova.cells.weights.all_weighers, which maps to all cell weighers included with Compute. Cells are then weighted and sorted with the largest weight winning.

For example:

[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
mute_weight_multiplier = -10.0
ram_weight_multiplier = 1.0
offset_weight_multiplier = 1.0
Chance scheduler

As an administrator, you work mainly with the filter scheduler. However, the Compute service also provides the Chance Scheduler, nova.scheduler.chance.ChanceScheduler, which randomly selects a host from the list of filtered hosts.

Utilization aware scheduling

It is possible to schedule VMs using advanced scheduling decisions made on the basis of enhanced usage statistics, such as memory cache utilization, memory bandwidth utilization, or network bandwidth utilization. This feature is disabled by default. The administrator configures how the metrics are weighted by using the weight_setting option in the nova.conf configuration file. For example, to configure metric1 with ratio1 and metric2 with ratio2:

weight_setting = "metric1=ratio1, metric2=ratio2"
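In essence, each host's metric weight is the sum of metric value times ratio for every configured metric. The following is a simplified sketch of that calculation, not nova's actual implementation; the function names are hypothetical.

```python
# Simplified sketch (not the actual nova code) of how weight_setting
# ratios combine a host's metric values into a single weight.

def parse_weight_setting(setting):
    """Parse 'name1=1.0, name2=-1.0' into {'name1': 1.0, 'name2': -1.0}."""
    ratios = {}
    for pair in setting.split(","):
        name, _, ratio = pair.strip().partition("=")
        ratios[name] = float(ratio)
    return ratios

def host_metric_weight(host_metrics, setting, multiplier=1.0):
    """Sum metric value * ratio for each configured metric, scaled by
    the [metrics] weight_multiplier option."""
    ratios = parse_weight_setting(setting)
    return multiplier * sum(host_metrics[name] * ratio
                            for name, ratio in ratios.items())

# name1.value * 1.0 + name2.value * -1.0 = 4.0 - 3.0
print(host_metric_weight({"name1": 4.0, "name2": 3.0},
                         "name1=1.0, name2=-1.0"))  # → 1.0
```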
Host aggregates and availability zones

Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud, or a region of an OpenStack cloud, based on arbitrary characteristics. Examples where an administrator may want to do this include where a group of hosts have additional hardware or performance characteristics.

Host aggregates are not explicitly exposed to users. Instead, administrators map flavors to host aggregates by setting metadata on a host aggregate and matching flavor extra specifications. The scheduler then endeavors to match user requests for instances of the given flavor to a host aggregate with the same key-value pair in its metadata. Compute nodes can be in more than one host aggregate.

Administrators are able to optionally expose a host aggregate as an availability zone. Availability zones are different from host aggregates in that they are explicitly exposed to the user, and hosts can only be in a single availability zone. Administrators can configure a default availability zone where instances will be scheduled when the user fails to specify one.

Command-line interface

The nova command-line client supports the following aggregate-related commands.

nova aggregate-list
Print a list of all aggregates.
nova aggregate-create <name> [availability-zone]
Create a new aggregate named <name>, optionally in availability zone [availability-zone]. The command returns the ID of the newly created aggregate. Hosts can be made available to multiple host aggregates. Be careful when adding a host to an additional host aggregate if the host is also in an availability zone, and pay attention when using the nova aggregate-set-metadata and nova aggregate-update commands, so that users are not confused when they boot instances in different availability zones. An error occurs if you try to add a host to an aggregate associated with an availability zone the host is not intended for.
nova aggregate-delete <id>
Delete an aggregate with id <id>.
nova aggregate-details <id>
Show details of the aggregate with id <id>.
nova aggregate-add-host <id> <host>
Add host with name <host> to aggregate with id <id>.
nova aggregate-remove-host <id> <host>
Remove the host with name <host> from the aggregate with id <id>.
nova aggregate-set-metadata <id> <key=value> [<key=value> ...]
Add or update metadata (key-value pairs) associated with the aggregate with id <id>.
nova aggregate-update <id> <name> [<availability_zone>]
Update the name and availability zone (optional) for the aggregate.
nova host-list
List all hosts by service.
nova host-update --maintenance [enable | disable]
Put the host into, or bring it out of, maintenance mode.

Note

Only administrators can access these commands. If you try to use these commands and the user name and tenant that you use to access the Compute service do not have the admin role or the appropriate privileges, these errors occur:

ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)
ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
Configure scheduler to support host aggregates

One common use case for host aggregates is when you want to support scheduling instances to a subset of compute hosts because they have a specific capability. For example, you may want to allow users to request compute hosts that have SSD drives if they need access to faster disk I/O, or access to compute hosts that have GPU cards to take advantage of GPU-accelerated code.

To configure the scheduler to support host aggregates, the scheduler_default_filters configuration option must contain the AggregateInstanceExtraSpecsFilter in addition to the other filters used by the scheduler. Add the following line to /etc/nova/nova.conf on the host that runs the nova-scheduler service to enable host aggregates filtering, as well as the other filters that are typically enabled:

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
Example: Specify compute hosts with SSDs

This example configures the Compute service to enable users to request nodes that have solid-state drives (SSDs). You create a fast-io host aggregate in the nova availability zone and you add the ssd=true key-value pair to the aggregate. Then, you add the node1 and node2 compute nodes to it.

$ nova aggregate-create fast-io nova
+----+---------+-------------------+-------+----------+
| Id | Name    | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+-------+----------+
| 1  | fast-io | nova              |       |          |
+----+---------+-------------------+-------+----------+

$ nova aggregate-set-metadata 1 ssd=true
+----+---------+-------------------+-------+-------------------+
| Id | Name    | Availability Zone | Hosts | Metadata          |
+----+---------+-------------------+-------+-------------------+
| 1  | fast-io | nova              | []    | {u'ssd': u'true'} |
+----+---------+-------------------+-------+-------------------+

$ nova aggregate-add-host 1 node1
+----+---------+-------------------+------------+-------------------+
| Id | Name    | Availability Zone | Hosts      | Metadata          |
+----+---------+-------------------+------------+-------------------+
| 1  | fast-io | nova              | [u'node1'] | {u'ssd': u'true'} |
+----+---------+-------------------+------------+-------------------+

$ nova aggregate-add-host 1 node2
+----+---------+-------------------+----------------------+-------------------+
| Id | Name    | Availability Zone | Hosts                | Metadata          |
+----+---------+-------------------+----------------------+-------------------+
| 1  | fast-io | nova              | [u'node1', u'node2'] | {u'ssd': u'true'} |
+----+---------+-------------------+----------------------+-------------------+

Use the nova flavor-create command to create the ssd.large flavor with an ID of 6, 8 GB of RAM, an 80 GB root disk, and four vCPUs.

$ nova flavor-create ssd.large 6 8192 80 4
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | ssd.large | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Once the flavor is created, specify one or more key-value pairs that match the key-value pairs on the host aggregates with scope aggregate_instance_extra_specs. In this case, that is the aggregate_instance_extra_specs:ssd=true key-value pair. Setting a key-value pair on a flavor is done using the nova flavor-key command.

$ nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true

Once it is set, you should see the extra_specs property of the ssd.large flavor populated with a key of ssd and a corresponding value of true.

$ nova flavor-show ssd.large
+----------------------------+--------------------------------------------------+
| Property                   | Value                                            |
+----------------------------+--------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                            |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                |
| disk                       | 80                                               |
| extra_specs                | {u'aggregate_instance_extra_specs:ssd': u'true'} |
| id                         | 6                                                |
| name                       | ssd.large                                        |
| os-flavor-access:is_public | True                                             |
| ram                        | 8192                                             |
| rxtx_factor                | 1.0                                              |
| swap                       |                                                  |
| vcpus                      | 4                                                |
+----------------------------+--------------------------------------------------+

Now, when a user requests an instance with the ssd.large flavor, the scheduler only considers hosts with the ssd=true key-value pair. In this example, these are node1 and node2.

XenServer hypervisor pools to support live migration

When using the XenAPI-based hypervisor, the Compute service uses host aggregates to manage XenServer Resource pools, which are used in supporting live migration.

Configuration options

The Compute scheduler configuration options are documented in the tables below.

Description of scheduler configuration options
Configuration option = Default value Description
[DEFAULT]  
aggregate_image_properties_isolation_namespace = None

(String) Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable.

Valid values are strings.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled.

  • Related options:
aggregate_image_properties_isolation_separator
aggregate_image_properties_isolation_separator = .

(String) When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used. It defaults to a period (‘.’).

Valid values are strings.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled.

  • Related options:
aggregate_image_properties_isolation_namespace
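As an illustration of how the two options combine (the namespace value host_filter below is a hypothetical example, not a predefined name):

```ini
[DEFAULT]
# Hypothetical namespace: only aggregate metadata keys beginning with
# "host_filter." are compared against image properties; all other
# aggregate metadata keys are ignored by the filter.
aggregate_image_properties_isolation_namespace = host_filter
aggregate_image_properties_isolation_separator = .
```

With these settings, an aggregate metadata key such as host_filter.os_distro would be considered by the aggregate_image_properties_isolation filter, while keys outside the namespace are skipped.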
baremetal_scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ExactRamFilter, ExactDiskFilter, ExactCoreFilter

(List) This option specifies the filters used for filtering baremetal hosts. The value should be a list of strings, with each string being the name of a filter class to be used. When used, they will be applied in order, so place your most restrictive filters first to make the filtering process more efficient.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

  • Related options:
If the ‘scheduler_use_baremetal_filters’ option is False, this option has no effect.
cpu_allocation_ratio = 0.0

(Floating point) This option helps you specify virtual CPU to physical CPU allocation ratio which affects all CPU filters.

This configuration specifies ratio for CoreFilter which can be set per compute node. For AggregateCoreFilter, it will fall back to this configuration value if no per-aggregate setting is found.

Possible values:

  • Any valid positive integer or float value
  • Default value is 0.0

NOTE: This can be set per compute node. If set to 0.0, the value set on the scheduler node(s) or compute node(s) is used, defaulting to 16.0.

disk_allocation_ratio = 0.0

(Floating point) This option helps you specify virtual disk to physical disk allocation ratio used by the disk_filter.py script to determine if a host has sufficient disk space to fit a requested instance.

A ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances.

Possible values:

  • Any valid positive integer or float value
  • Default value is 0.0

NOTE: This can be set per compute node. If set to 0.0, the value set on the scheduler node(s) or compute node(s) is used, defaulting to 1.0.

disk_weight_multiplier = 1.0 (Floating point) Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread.
io_ops_weight_multiplier = -1.0

(Floating point) This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops’ weigher is enabled.

Valid values are numeric, either integer or float.

  • Related options:
None
isolated_hosts =

(List) If there is a need to restrict some images to only run on certain designated hosts, list those host names here.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled.

  • Related options:
scheduler/isolated_images scheduler/restrict_isolated_hosts_to_isolated_images
isolated_images =

(List) If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled.

  • Related options:
scheduler/isolated_hosts scheduler/restrict_isolated_hosts_to_isolated_images
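For example, to restrict two images to two dedicated hosts (the host names and image UUIDs below are placeholders; the IsolatedHostsFilter must be enabled for these options to take effect):

```ini
[DEFAULT]
# Placeholder host names and image UUIDs; requires IsolatedHostsFilter
# to be present in scheduler_default_filters.
isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
```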
max_instances_per_host = 50

(Integer) If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The num_instances_filter will reject any host that has at least as many instances as this option’s value.

Valid values are positive integers; setting it to zero will cause all hosts to be rejected if the num_instances_filter is active.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘num_instances_filter’ filter is enabled.

  • Related options:
None
max_io_ops_per_host = 8

(Integer) This setting caps the number of instances on a host that can be actively performing IO (in a build, resize, snapshot, migrate, rescue, or unshelve task state) before that host becomes ineligible to build new instances.

Valid values are positive integers: 1 or greater.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops_filter’ filter is enabled.

  • Related options:
None
ram_allocation_ratio = 0.0

(Floating point) This option helps you specify virtual RAM to physical RAM allocation ratio which affects all RAM filters.

This configuration specifies ratio for RamFilter which can be set per compute node. For AggregateRamFilter, it will fall back to this configuration value if no per-aggregate setting is found.

Possible values:

  • Any valid positive integer or float value
  • Default value is 0.0

NOTE: This can be set per compute node. If set to 0.0, the value set on the scheduler node(s) or compute node(s) is used, defaulting to 1.5.

ram_weight_multiplier = 1.0

(Floating point) This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘ram’ weigher is enabled.

Valid values are numeric, either integer or float.

  • Related options:
None
reserved_host_disk_mb = 0

(Integer) Amount of disk resources in MB to reserve so that they are always available to the host. Disk usage is reported back to the scheduler from nova-compute running on the compute nodes. Use this option to reserve disk space for the host and prevent it from being considered available.

Possible values:

  • Any positive integer representing amount of disk in MB to reserve for the host.
reserved_host_memory_mb = 512

(Integer) Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host.

Possible values:

  • Any positive integer representing amount of memory in MB to reserve for the host.
reserved_huge_pages = None

(Unknown) Reserves a number of huge/large memory pages per NUMA host cell

Possible values:

  • A list of key=value pairs reflecting the NUMA node ID, the page size (default unit is KiB), and the number of pages to reserve.

reserved_huge_pages = node:0,size:2048,count:64
reserved_huge_pages = node:1,size:1GB,count:1

In this example, 64 pages of 2 MiB are reserved on NUMA node 0, and one page of 1 GiB on NUMA node 1.

restrict_isolated_hosts_to_isolated_images = True

(Boolean) This setting determines if the scheduler’s isolated_hosts filter will allow non-isolated images on a host designated as an isolated host. When set to True (the default), non-isolated images will not be allowed to be built on isolated hosts. When False, non-isolated images can be built on both isolated and non-isolated hosts alike.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled. Even then, this option doesn’t affect the behavior of requests for isolated images, which will always be restricted to isolated hosts.

  • Related options:
scheduler/isolated_images scheduler/isolated_hosts
scheduler_available_filters = ['nova.scheduler.filters.all_filters']

(Multi-valued) This is an unordered list of the filter classes the Nova scheduler may apply. Only the filters specified in the ‘scheduler_default_filters’ option will be used, but any filter appearing in that option must also be included in this list.

By default, this is set to all filters that are included with Nova. If you wish to change this, replace this with a list of strings, where each element is the path to a filter.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

  • Related options:
scheduler_default_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter

(List) This option is the list of filter class names that will be used for filtering hosts. The use of ‘default’ in the name of this option implies that other filters may sometimes be used, but that is not the case. These filters will be applied in the order they are listed, so place your most restrictive filters first to make the filtering process more efficient.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

  • Related options:
All of the filters in this option must be present in the ‘scheduler_available_filters’ option, or a SchedulerHostFilterNotFound exception will be raised.
scheduler_driver = filter_scheduler

(String) The class of the driver used by the scheduler. This should be chosen from one of the entrypoints under the namespace ‘nova.scheduler.driver’ of file ‘setup.cfg’. If nothing is specified in this option, the ‘filter_scheduler’ is used.

This option also supports deprecated full Python path to the class to be used. For example, “nova.scheduler.filter_scheduler.FilterScheduler”. But note: this support will be dropped in the N Release.

Other options are:

  • ‘caching_scheduler’ which aggressively caches the system state for better individual scheduler performance at the risk of more retries when running multiple schedulers.
  • ‘chance_scheduler’ which simply picks a host at random.
  • ‘fake_scheduler’ which is used for testing.
  • Related options:
None
scheduler_driver_task_period = 60

(Integer) This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used.

If this is larger than the nova-service ‘service_down_time’ setting, Nova may report the scheduler service as down. This is because the scheduler driver is responsible for sending a heartbeat and it will only do that as often as this option allows. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler.

  • Related options:
nova-service service_down_time
scheduler_host_manager = host_manager

(String) The scheduler host manager to use, which manages the in-memory picture of the hosts that the scheduler uses.

The option value should be chosen from one of the entrypoints under the namespace ‘nova.scheduler.host_manager’ of file ‘setup.cfg’. For example, ‘host_manager’ is the default setting. Aside from the default, the only other option as of the Mitaka release is ‘ironic_host_manager’, which should be used if you’re using Ironic to provision bare-metal instances.

  • Related options:
None
scheduler_host_subset_size = 1

(Integer) New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Valid values are 1 or greater. Any value less than one will be treated as 1.

Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

  • Related options:
None
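The subset-selection behavior described above can be sketched as follows; this is an illustrative simplification with hypothetical names, not nova's actual implementation.

```python
import random

# Illustrative sketch of how scheduler_host_subset_size picks a host
# from the weighed list (assumed names, not nova's real code).

def choose_host(weighed_hosts, subset_size=1):
    """weighed_hosts is sorted best-first; pick randomly from the top N."""
    if not weighed_hosts:
        return None
    subset_size = max(1, subset_size)     # values less than 1 are treated as 1
    subset = weighed_hosts[:subset_size]  # the N best hosts
    return random.choice(subset)

hosts = ["hostA", "hostB", "hostC"]       # already sorted by weight
# With subset_size=1, the best-weighted host is always chosen.
assert choose_host(hosts, subset_size=1) == "hostA"
```

A larger subset trades per-request optimality for a lower chance that concurrent schedulers race on the same host.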
scheduler_instance_sync_interval = 120 (Integer) Waiting time interval (seconds) between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova. If the CONF option scheduler_tracks_instance_changes is False, changing this option will have no effect.
scheduler_json_config_location =

(String) The absolute path to the scheduler configuration JSON file, if any. This file location is monitored by the scheduler for changes and reloads it if needed. It is converted from JSON to a Python data structure, and passed into the filtering and weighing functions of the scheduler, which can use it for dynamic configuration.

  • Related options:
None
scheduler_manager = nova.scheduler.manager.SchedulerManager (String) DEPRECATED: Full class name for the Manager for scheduler
scheduler_max_attempts = 3

(Integer) This is the maximum number of attempts that will be made to schedule an instance before it is assumed that the failures aren’t due to normal occasional race conflicts, but rather some other problem. When this is reached a MaxRetriesExceeded exception is raised, and the instance is set to an error state.

Valid values are positive integers (1 or greater).

  • Related options:
None
scheduler_topic = scheduler

(String) This is the message queue topic that the scheduler ‘listens’ on. It is used when the scheduler service is started up to configure the queue, and whenever an RPC call to the scheduler is made. There is almost never any reason to ever change this value.

  • Related options:
None
scheduler_tracks_instance_changes = True

(Boolean) The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host.

If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

  • Related options:
None
scheduler_use_baremetal_filters = False

(Boolean) Set this to True to tell the nova scheduler that it should use the filters specified in the ‘baremetal_scheduler_default_filters’ option. If you are not scheduling baremetal nodes, leave this at the default setting of False.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

  • Related options:
If this option is set to True, then the filters specified in the ‘baremetal_scheduler_default_filters’ are used instead of the filters specified in ‘scheduler_default_filters’.
scheduler_weight_classes = nova.scheduler.weights.all_weighers

(List) This is a list of weigher class names. Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative.

An instance will be scheduled to one of the N most-weighted hosts, where N is ‘scheduler_host_subset_size’.

By default, this is set to all weighers that are included with Nova. If you wish to change this, replace this with a list of strings, where each element is the path to a weigher.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

  • Related options:
None
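The accumulation described above, where each weigher adds to or subtracts from the running weight and the largest total wins, can be sketched like this; the function and variable names are hypothetical, not nova's API.

```python
# Minimal sketch (assumed names, not nova's implementation) of weigher
# accumulation: each weigher contributes multiplier * value per host.

def weigh_hosts(hosts, weighers):
    """hosts: dict of name -> metrics; weighers: list of (multiplier, fn)."""
    weights = {name: 0.0 for name in hosts}
    for multiplier, fn in weighers:
        for name, metrics in hosts.items():
            weights[name] += multiplier * fn(metrics)
    # Sort best-first: the largest weight wins.
    return sorted(weights, key=weights.get, reverse=True)

hosts = {"h1": {"free_ram_mb": 2048, "io_ops": 4},
         "h2": {"free_ram_mb": 8192, "io_ops": 1}}
weighers = [(1.0, lambda m: m["free_ram_mb"]),   # like ram_weight_multiplier
            (-1.0, lambda m: m["io_ops"])]       # like io_ops_weight_multiplier
print(weigh_hosts(hosts, weighers))  # h2 first: more RAM, fewer I/O ops
```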
soft_affinity_weight_multiplier = 1.0 (Floating point) Multiplier used for weighing hosts for group soft-affinity. Only a positive value is meaningful; a negative value reverses the behavior and produces soft-anti-affinity.
soft_anti_affinity_weight_multiplier = 1.0 (Floating point) Multiplier used for weighing hosts for group soft-anti-affinity. Only a positive value is meaningful; a negative value reverses the behavior and produces soft-affinity.
[cells]  
ram_weight_multiplier = 10.0

(Floating point) Ram weight multiplier

Multiplier used for weighing ram. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell.

Possible values:

  • Numeric multiplier
scheduler_filter_classes = nova.cells.filters.all_filters

(List) Scheduler filter classes

Filter classes the cells scheduler should use. An entry of “nova.cells.filters.all_filters” maps to all cells filters included with nova. As of the Mitaka release the following filter classes are available:

Different cell filter: A scheduler hint of ‘different_cell’ with a value of a full cell name may be specified to route a build away from a particular cell.

Image properties filter: Image metadata named ‘hypervisor_version_requires’ with a version specification may be specified to ensure the build goes to a cell which has hypervisors of the required version. If either the version requirement on the image or the hypervisor capability of the cell is not present, this filter returns without filtering out the cells.

Target cell filter: A scheduler hint of ‘target_cell’ with a value of a full cell name may be specified to route a build to a particular cell. No error handling is done, as there is no way to know whether the full path is valid.

As an admin user, you can also add a filter that directs builds to a particular cell.

scheduler_retries = 10

(Integer) Scheduler retries

How many retries when no cells are available. Specifies how many times the scheduler tries to launch a new instance when no cells are available.

Possible values:

  • Positive integer value

Related options:

  • This value is used with the scheduler_retry_delay value while retrying to find a suitable cell.
scheduler_retry_delay = 2

(Integer) Scheduler retry delay

Specifies the delay (in seconds) between scheduling retries when no cell can be found to place the new instance on. If no cell is found after scheduler_retries attempts, each separated by scheduler_retry_delay seconds, scheduling of the instance fails.

Possible values:

  • Time in seconds.

Related options:

  • This value is used with the scheduler_retries value while retrying to find a suitable cell.
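The two options above can be read together as an upper bound on the time nova-cells spends waiting between retries; a small sketch (illustrative only, not nova code):

```python
# Illustrative only: the retry window implied by scheduler_retries and
# scheduler_retry_delay (the defaults shown are the documented defaults).
def max_retry_delay_seconds(scheduler_retries=10, scheduler_retry_delay=2):
    """Total seconds spent waiting between retries before scheduling fails."""
    return scheduler_retries * scheduler_retry_delay

print(max_retry_delay_seconds())  # with the defaults: 20
```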
scheduler_weight_classes = nova.cells.weights.all_weighers

(List) Scheduler weight classes

Weigher classes the cells scheduler should use. An entry of “nova.cells.weights.all_weighers” maps to all cell weighers included with nova. As of the Mitaka release the following weight classes are available:

mute_child: Downgrades the likelihood that child cells which haven’t sent capacity or capability updates in a while will be chosen for scheduling requests. Options include mute_weight_multiplier (multiplier for mute children; the value should be negative).

ram_by_instance_type: Selects cells with the most RAM capacity for the instance type being requested. Because higher weights win, Compute returns the number of available units for the instance type requested. The ram_weight_multiplier option defaults to 10.0, which multiplies the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading new VMs out to more hosts in the cell.

weight_offset: Allows modifying the database to weight a particular cell. The cell with the highest weight is the first to be scheduled for launching an instance. When the weight_offset of a cell is set to 0, it is unlikely to be picked, but it can still be picked if other cells have a lower weight, for example because they are full. When the weight_offset is set to a very high value (for example, ‘999999999999999’), it is likely to be picked unless another cell has a higher weight.

[metrics]  
required = True

(Boolean) This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing.

When this option is False, any metric being unavailable for a host will set the host weight to ‘weight_of_unavailable’.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Related options:

  • weight_of_unavailable
weight_multiplier = 1.0

(Floating point) When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows:

  • Greater than 1.0: increases the effect of the metric on overall weight.
  • Equal to 1.0: no change to the calculated weight.
  • Less than 1.0, greater than 0: reduces the effect of the metric on overall weight.
  • 0: the metric value is ignored, and the value of the ‘weight_of_unavailable’ option is returned instead.
  • Greater than -1.0, less than 0: the effect is reduced and reversed.
  • -1.0: the effect is reversed.
  • Less than -1.0: the effect is increased proportionally and reversed.

Valid values are numeric, either integer or float.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Related options:

  • weight_of_unavailable
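The cases above can be summarized with a short sketch (a hypothetical helper, not the actual FilterScheduler code):

```python
# Hypothetical sketch of how weight_multiplier scales a host's calculated
# metric weight; weight_of_unavailable substitutes when the multiplier is 0.
def apply_weight_multiplier(calculated_weight, weight_multiplier=1.0,
                            weight_of_unavailable=-10000.0):
    if weight_multiplier == 0:
        # The metric value is ignored entirely.
        return weight_of_unavailable
    # Values above 1.0 amplify the effect, values in (0, 1) dampen it,
    # and negative values reverse it.
    return calculated_weight * weight_multiplier
```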
weight_of_unavailable = -10000.0

(Floating point) When any of the following conditions are met, this value will be used in place of any actual metric value:

  • One of the metrics named in ‘weight_setting’ is not available for a host, and the value of ‘required’ is False.
  • The ratio specified for a metric in ‘weight_setting’ is 0.
  • The ‘weight_multiplier’ option is set to 0.

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Related options:

  • weight_setting
  • required
  • weight_multiplier
weight_setting =

(List) This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more ‘name=ratio’ pairs, separated by commas, where ‘name’ is the name of the metric to be weighed, and ‘ratio’ is the relative weight for that metric.

Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the ‘weight_of_unavailable’ option.

As an example, let’s consider the case where this option is set to:

name1=1.0, name2=-1.3

The final weight will be:

(name1.value * 1.0) + (name2.value * -1.3)

This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.

Related options:

  • weight_of_unavailable
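The final-weight formula above can be sketched as a short, hypothetical parser for the ‘name=ratio’ format (the metric names and values here are made up for illustration):

```python
# Hypothetical sketch: parse a weight_setting string and apply
# (name1.value * ratio1) + (name2.value * ratio2) + ...
def metrics_weight(metric_values, weight_setting):
    total = 0.0
    for pair in weight_setting.split(','):
        name, ratio = pair.strip().split('=')
        total += metric_values[name] * float(ratio)
    return total

# With weight_setting = "name1=1.0, name2=-1.3" and made-up metric values:
print(metrics_weight({'name1': 4.0, 'name2': 2.0}, 'name1=1.0, name2=-1.3'))
```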

Cells

Cells functionality enables you to scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. It supports very large deployments.

When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker.

The nova-cells service handles communication between cells and selects cells for new instances. This service is required for every cell. Communication between cells is pluggable, and currently the only option is communication through RPC.

Cells scheduling is separate from host scheduling. nova-cells first picks a cell. Once a cell is selected and the new build request reaches its nova-cells service, it is sent over to the host scheduler in that cell and the build proceeds as it would have without cells.

Warning

Cell functionality is currently considered experimental.

Cell configuration options

Cells are disabled by default. All cell-related configuration options appear in the [cells] section in nova.conf. The following cell-related options are currently supported:

enable
Set to True to turn on cell functionality. Default is false.
name
Name of the current cell. Must be unique for each cell.
capabilities
List of arbitrary key=value pairs defining capabilities of the current cell. Values include hypervisor=xenserver;kvm,os=linux;windows.
call_timeout
How long in seconds to wait for replies from calls between cells.
scheduler_filter_classes
Filter classes that the cells scheduler should use. By default, uses nova.cells.filters.all_filters to map to all cells filters included with Compute.
scheduler_weight_classes
Weight classes that the scheduler for cells uses. By default, uses nova.cells.weights.all_weighers to map to all cells weight algorithms included with Compute.
ram_weight_multiplier
Multiplier used to weight RAM. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell. The default value is 10.0.
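Putting the options above together, a [cells] section in nova.conf might look like the following (all values are illustrative):

```ini
[cells]
enable = True
name = cell1
capabilities = hypervisor=xenserver;kvm,os=linux;windows
call_timeout = 60
ram_weight_multiplier = 10.0
```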
Configure the API (top-level) cell

The cell type must be changed in the API cell so that requests can be proxied through nova-cells down to the correct cell properly. Edit the nova.conf file in the API cell, and specify api in the cell_type key:

[DEFAULT]
compute_api_class=nova.compute.cells_api.ComputeCellsAPI
...

[cells]
cell_type = api
Configure the child cells

Edit the nova.conf file in the child cells, and specify compute in the cell_type key:

[DEFAULT]
# Disable quota checking in child cells. Let API cell do it exclusively.
quota_driver=nova.quota.NoopQuotaDriver

[cells]
cell_type = compute
Configure the database in each cell

Before bringing the services online, the database in each cell needs to be configured with information about related cells. In particular, the API cell needs to know about its immediate children, and the child cells must know about their immediate parents. The information needed is the RabbitMQ server credentials for the particular cell.

Use the nova-manage cell create command to add this information to the database in each cell:

# nova-manage cell create -h
usage: nova-manage cell create [-h] [--name <name>]
                               [--cell_type <parent|api|child|compute>]
                               [--username <username>] [--password <password>]
                               [--broker_hosts <broker_hosts>]
                               [--hostname <hostname>] [--port <number>]
                               [--virtual_host <virtual_host>]
                               [--woffset <float>] [--wscale <float>]

optional arguments:
  -h, --help            show this help message and exit
  --name <name>         Name for the new cell
  --cell_type <parent|api|child|compute>
                        Whether the cell is parent/api or child/compute
  --username <username>
                        Username for the message broker in this cell
  --password <password>
                        Password for the message broker in this cell
  --broker_hosts <broker_hosts>
                        Comma separated list of message brokers in this cell.
                        Each Broker is specified as hostname:port with both
                        mandatory. This option overrides the --hostname and
                        --port options (if provided).
  --hostname <hostname>
                        Address of the message broker in this cell
  --port <number>       Port number of the message broker in this cell
  --virtual_host <virtual_host>
                        The virtual host of the message broker in this cell
  --woffset <float>
  --wscale <float>

As an example, assume an API cell named api and a child cell named cell1.

Within the api cell, specify the following RabbitMQ server information:

rabbit_host=10.0.0.10
rabbit_port=5672
rabbit_username=api_user
rabbit_password=api_passwd
rabbit_virtual_host=api_vhost

Within the cell1 child cell, specify the following RabbitMQ server information:

rabbit_host=10.0.1.10
rabbit_port=5673
rabbit_username=cell1_user
rabbit_password=cell1_passwd
rabbit_virtual_host=cell1_vhost

You can run this in the API cell as root:

# nova-manage cell create --name cell1 --cell_type child \
  --username cell1_user --password cell1_passwd --hostname 10.0.1.10 \
  --port 5673 --virtual_host cell1_vhost --woffset 1.0 --wscale 1.0

Repeat the previous steps for all child cells.

In the child cell, run the following, as root:

# nova-manage cell create --name api --cell_type parent \
  --username api_user --password api_passwd --hostname 10.0.0.10 \
  --port 5672 --virtual_host api_vhost --woffset 1.0 --wscale 1.0

To customize the Compute cells, use the configuration option settings documented in the table Description of cell configuration options.

Cell scheduling configuration

To determine the best cell to use to launch a new instance, Compute uses a set of filters and weights defined in the /etc/nova/nova.conf file. The following options are available to prioritize cells for scheduling:

scheduler_filter_classes
List of filter classes. By default nova.cells.filters.all_filters is specified, which maps to all cells filters included with Compute (see the section called Filters).
scheduler_weight_classes

List of weight classes. By default nova.cells.weights.all_weighers is specified, which maps to all cell weight algorithms included with Compute. The following modules are available:

  • mute_child. Downgrades the likelihood that child cells which haven’t sent capacity or capability updates in a while will be chosen for scheduling requests. Options include mute_weight_multiplier (multiplier for mute children; the value should be negative).
  • ram_by_instance_type. Selects cells with the most RAM capacity for the instance type being requested. Because higher weights win, Compute returns the number of available units for the instance type requested. The ram_weight_multiplier option defaults to 10.0, which multiplies the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading new VMs out to more hosts in the cell.
  • weight_offset. Allows modifying the database to weight a particular cell. You can use this to disable a cell (for example, ‘0’), or to set a default cell by making its weight_offset very high (for example, ‘999999999999999’). The cell with the highest weight is the first to be scheduled for launching an instance.

Additionally, the following options are available for the cell scheduler:

scheduler_retries
Specifies how many times the scheduler tries to launch a new instance when no cells are available (default=10).
scheduler_retry_delay
Specifies the delay (in seconds) between retries (default=2).

As an admin user, you can also add a filter that directs builds to a particular cell. The policy.json file must have a line with "cells_scheduler_filter:TargetCellFilter" : "is_admin:True" to let an admin user specify a scheduler hint to direct a build to a particular cell.

Optional cell configuration

Cells store all inter-cell communication data, including user names and passwords, in the database. Because the cells data is not updated very frequently, use the [cells]cells_config option to specify a JSON file to store cells data. With this configuration, the database is no longer consulted when reloading the cells data. The file must have columns present in the Cell model (excluding common database fields and the id column). You must specify the queue connection information through a transport_url field, instead of username, password, and so on. The transport_url has the following form:

rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST

The scheme can only be rabbit. The following sample shows this optional configuration:

{
    "parent": {
        "name": "parent",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": true
    },
    "cell1": {
        "name": "cell1",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit1.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    },
    "cell2": {
        "name": "cell2",
        "api_url": "http://api.example.com:8774",
        "transport_url": "rabbit://rabbit2.example.com",
        "weight_offset": 0.0,
        "weight_scale": 1.0,
        "is_parent": false
    }
}
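Given the cell1 RabbitMQ values shown earlier, the transport_url form can be composed as follows (an illustrative helper, not part of nova):

```python
# Sketch: build a transport_url of the form
# rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
def make_transport_url(username, password, hostname, port, virtual_host):
    return 'rabbit://{0}:{1}@{2}:{3}/{4}'.format(
        username, password, hostname, port, virtual_host)

print(make_transport_url('cell1_user', 'cell1_passwd',
                         '10.0.1.10', 5673, 'cell1_vhost'))
# rabbit://cell1_user:cell1_passwd@10.0.1.10:5673/cell1_vhost
```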

Conductor

The nova-conductor service enables OpenStack to function without compute nodes accessing the database. Conceptually, it implements a new layer on top of nova-compute. It should not be deployed on compute nodes, or else the security benefits of removing database access from nova-compute are negated. Just like other nova services such as nova-api or nova-scheduler, it can be scaled horizontally. You can run multiple instances of nova-conductor on different machines as needed for scaling purposes.

The methods exposed by nova-conductor are relatively simple methods used by nova-compute to offload its database operations. Places where nova-compute previously performed database access are now talking to nova-conductor. However, we have plans in the medium to long term to move more and more of what is currently in nova-compute up to the nova-conductor layer. The Compute service will start to look like a less intelligent slave service to nova-conductor. The conductor service will implement long running complex operations, ensuring forward progress and graceful error handling. This will be especially beneficial for operations that cross multiple compute nodes, such as migrations or resizes.

To customize the Conductor, use the configuration option settings documented in the table Description of conductor configuration options.

Compute log files

The corresponding log file of each Compute service is stored in the /var/log/nova/ directory of the host on which each service runs.

Log files used by Compute services
Log file               Service name (CentOS/Fedora/openSUSE/Red Hat Enterprise Linux/SUSE Linux Enterprise)   Service name (Ubuntu/Debian)
nova-api.log           openstack-nova-api            nova-api
nova-cert.log [1]      openstack-nova-cert           nova-cert
nova-compute.log       openstack-nova-compute        nova-compute
nova-conductor.log     openstack-nova-conductor      nova-conductor
nova-consoleauth.log   openstack-nova-consoleauth    nova-consoleauth
nova-network.log [2]   openstack-nova-network        nova-network
nova-manage.log        nova-manage                   nova-manage
nova-scheduler.log     openstack-nova-scheduler      nova-scheduler

Footnotes

[1] The X509 certificate service (openstack-nova-cert/nova-cert) is required only by the EC2 API to the Compute service.
[2] The nova network service (openstack-nova-network/nova-network) runs only in deployments that are not configured to use the Networking service (neutron).

Example nova.conf configuration files

The following sections describe the configuration options in the nova.conf file. You must copy the nova.conf file to each compute node. The sample nova.conf files show examples of specific configurations.

Small, private cloud

This example nova.conf file configures a small private cloud with the cloud controller services, database server, and messaging server on the same server. In this case, CONTROLLER_IP represents the IP address of a central server, BRIDGE_INTERFACE represents the bridge device such as br100, NETWORK_INTERFACE represents an interface to your VLAN setup, DB_PASSWORD_COMPUTE represents your Compute (nova) database password, and RABBIT_PASSWORD represents the password to your message queue installation.

[DEFAULT]

# LOGS/STATE
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
# configured in cinder.conf

# COMPUTE
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130

# RABBITMQ
rabbit_host=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

# GLANCE
[glance]
api_servers=192.168.206.130:9292

# DATABASE
[database]
connection=mysql+pymysql://nova:yourpassword@192.168.206.130/nova

# LIBVIRT
[libvirt]
virt_type=qemu
KVM, Flat, MySQL, and Glance, OpenStack or EC2 API

This example nova.conf file, from an internal Rackspace test system, is used for demonstrations.

[DEFAULT]

# LOGS/STATE
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf

# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

# VOLUMES
# configured in cinder.conf

# COMPUTE
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130

# RABBITMQ
rabbit_host=192.168.206.130

# GLANCE
image_service=nova.image.glance.GlanceImageService

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0

# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130

# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova

# GLANCE
[glance]
api_servers=192.168.206.130:9292

# DATABASE
[database]
connection=mysql+pymysql://nova:yourpassword@192.168.206.130/nova

# LIBVIRT
[libvirt]
virt_type=qemu
XenServer, Flat networking, MySQL, and Glance, OpenStack API

This example nova.conf file is from an internal Rackspace test system.

verbose
nodaemon
network_manager=nova.network.manager.FlatManager
image_service=nova.image.glance.GlanceImageService
flat_network_bridge=xenbr0
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://<XenServer IP>
xenapi_connection_username=root
xenapi_connection_password=supersecret
xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
rescue_timeout=86400
use_ipv6=true

# To enable flat_injected, currently only works on Debian-based systems
flat_injected=true
ipv6_backend=account_identifier
ca_path=./nova/CA

# Add the following to your conf file if you're running on Ubuntu Maverick
xenapi_remap_vbd_dev=true
[database]
connection=mysql+pymysql://root:<password>@127.0.0.1/nova

Compute service sample configuration files

Files in this section can be found in /etc/nova.

api-paste.ini

The Compute service stores its API configuration settings in the api-paste.ini file.

############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = cors metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#############
# OpenStack #
#############

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
# v21 is an exactly feature match for v2, except it has more stringent
# input validation on the wsgi surface (prevents fuzzing early on the
# API). It also provides new features via API microversions which are
# opt into for clients. Unaware clients will receive the same frozen
# v2 API feature set, but with some relaxed validation
/v2: openstack_compute_api_v21_legacy_v2_compatible
/v2.1: openstack_compute_api_v21

[composite:openstack_compute_api_v21]
use = call:nova.api.auth:pipeline_factory_v21
noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit noauth2 osapi_compute_app_v21
keystone = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v21

[composite:openstack_compute_api_v21_legacy_v2_compatible]
use = call:nova.api.auth:pipeline_factory_v21
noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit noauth2 legacy_v2_compatible osapi_compute_app_v21
keystone = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit authtoken keystonecontext legacy_v2_compatible osapi_compute_app_v21

[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory

[filter:compute_req_id]
paste.filter_factory = nova.api.compute_req_id:ComputeReqIdMiddleware.factory

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth2]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = oslo_middleware:RequestBodySizeLimiter.factory

[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory

[filter:legacy_v2_compatible]
paste.filter_factory = nova.api.openstack:LegacyV2CompatibleWrapper.factory

[app:osapi_compute_app_v21]
paste.app_factory = nova.api.openstack.compute:APIRouterV21.factory

[pipeline:oscomputeversions]
pipeline = faultwrap http_proxy_to_wsgi oscomputeversionapp

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

##########
# Shared #
##########

[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = nova

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
policy.yaml

The policy.yaml file defines additional access controls that apply to the Compute service.

#
"os_compute_api:os-admin-actions:discoverable": "@"
#
"os_compute_api:os-admin-actions:reset_state": "rule:admin_api"
#
"os_compute_api:os-admin-actions:inject_network_info": "rule:admin_api"
#
"os_compute_api:os-admin-actions": "rule:admin_api"
#
"os_compute_api:os-admin-actions:reset_network": "rule:admin_api"
#
"os_compute_api:os-admin-password:discoverable": "@"
#
"os_compute_api:os-admin-password": "rule:admin_or_owner"
#
"os_compute_api:os-agents": "rule:admin_api"
#
"os_compute_api:os-agents:discoverable": "@"
#
"os_compute_api:os-aggregates:set_metadata": "rule:admin_api"
#
"os_compute_api:os-aggregates:add_host": "rule:admin_api"
#
"os_compute_api:os-aggregates:discoverable": "@"
#
"os_compute_api:os-aggregates:create": "rule:admin_api"
#
"os_compute_api:os-aggregates:remove_host": "rule:admin_api"
#
"os_compute_api:os-aggregates:update": "rule:admin_api"
#
"os_compute_api:os-aggregates:index": "rule:admin_api"
#
"os_compute_api:os-aggregates:delete": "rule:admin_api"
#
"os_compute_api:os-aggregates:show": "rule:admin_api"
#
"os_compute_api:os-assisted-volume-snapshots:create": "rule:admin_api"
#
"os_compute_api:os-assisted-volume-snapshots:delete": "rule:admin_api"
#
"os_compute_api:os-assisted-volume-snapshots:discoverable": "@"
#
"os_compute_api:os-attach-interfaces": "rule:admin_or_owner"
#
"os_compute_api:os-attach-interfaces:discoverable": "@"
# Controls who can attach an interface to an instance
"os_compute_api:os-attach-interfaces:create": "rule:admin_or_owner"
# Controls who can detach an interface from an instance
"os_compute_api:os-attach-interfaces:delete": "rule:admin_or_owner"
#
"os_compute_api:os-availability-zone:list": "rule:admin_or_owner"
#
"os_compute_api:os-availability-zone:discoverable": "@"
#
"os_compute_api:os-availability-zone:detail": "rule:admin_api"
#
"os_compute_api:os-baremetal-nodes:discoverable": "@"
#
"os_compute_api:os-baremetal-nodes": "rule:admin_api"
#
"context_is_admin": "role:admin"
#
"admin_or_owner": "is_admin:True or project_id:%(project_id)s"
#
"admin_api": "is_admin:True"
#
"network:attach_external_network": "is_admin:True"
#
"os_compute_api:os-block-device-mapping:discoverable": "@"
#
"os_compute_api:os-block-device-mapping-v1:discoverable": "@"
#
"os_compute_api:os-cells:discoverable": "@"
#
"os_compute_api:os-cells:update": "rule:admin_api"
#
"os_compute_api:os-cells:create": "rule:admin_api"
#
"os_compute_api:os-cells": "rule:admin_api"
#
"os_compute_api:os-cells:sync_instances": "rule:admin_api"
#
"os_compute_api:os-cells:delete": "rule:admin_api"
#
"cells_scheduler_filter:DifferentCellFilter": "is_admin:True"
#
"cells_scheduler_filter:TargetCellFilter": "is_admin:True"
#
"os_compute_api:os-certificates:discoverable": "@"
#
"os_compute_api:os-certificates:create": "rule:admin_or_owner"
#
"os_compute_api:os-certificates:show": "rule:admin_or_owner"
#
"os_compute_api:os-cloudpipe": "rule:admin_api"
#
"os_compute_api:os-cloudpipe:discoverable": "@"
#
"os_compute_api:os-config-drive:discoverable": "@"
#
"os_compute_api:os-config-drive": "rule:admin_or_owner"
#
"os_compute_api:os-console-auth-tokens:discoverable": "@"
#
"os_compute_api:os-console-auth-tokens": "rule:admin_api"
#
"os_compute_api:os-console-output:discoverable": "@"
#
"os_compute_api:os-console-output": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:create": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:show": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:delete": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:discoverable": "@"
#
"os_compute_api:os-consoles:index": "rule:admin_or_owner"
#
"os_compute_api:os-create-backup:discoverable": "@"
#
"os_compute_api:os-create-backup": "rule:admin_or_owner"
#
"os_compute_api:os-deferred-delete:discoverable": "@"
#
"os_compute_api:os-deferred-delete": "rule:admin_or_owner"
#
"os_compute_api:os-evacuate:discoverable": "@"
#
"os_compute_api:os-evacuate": "rule:admin_api"
#
"os_compute_api:os-extended-availability-zone": "rule:admin_or_owner"
#
"os_compute_api:os-extended-availability-zone:discoverable": "@"
#
"os_compute_api:os-extended-server-attributes": "rule:admin_api"
#
"os_compute_api:os-extended-server-attributes:discoverable": "@"
#
"os_compute_api:os-extended-status:discoverable": "@"
#
"os_compute_api:os-extended-status": "rule:admin_or_owner"
#
"os_compute_api:os-extended-volumes": "rule:admin_or_owner"
#
"os_compute_api:os-extended-volumes:discoverable": "@"
#
"os_compute_api:extension_info:discoverable": "@"
#
"os_compute_api:extensions": "rule:admin_or_owner"
#
"os_compute_api:extensions:discoverable": "@"
#
"os_compute_api:os-fixed-ips:discoverable": "@"
#
"os_compute_api:os-fixed-ips": "rule:admin_api"
#
"os_compute_api:os-flavor-access:add_tenant_access": "rule:admin_api"
#
"os_compute_api:os-flavor-access:discoverable": "@"
#
"os_compute_api:os-flavor-access:remove_tenant_access": "rule:admin_api"
#
"os_compute_api:os-flavor-access": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-extra-specs:show": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-extra-specs:create": "rule:admin_api"
#
"os_compute_api:os-flavor-extra-specs:discoverable": "@"
#
"os_compute_api:os-flavor-extra-specs:update": "rule:admin_api"
#
"os_compute_api:os-flavor-extra-specs:delete": "rule:admin_api"
#
"os_compute_api:os-flavor-extra-specs:index": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-manage": "rule:admin_api"
#
"os_compute_api:os-flavor-manage:discoverable": "@"
#
"os_compute_api:os-flavor-rxtx": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-rxtx:discoverable": "@"
#
"os_compute_api:flavors:discoverable": "@"
#
"os_compute_api:flavors": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ip-dns": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ip-dns:domain:update": "rule:admin_api"
#
"os_compute_api:os-floating-ip-dns:discoverable": "@"
#
"os_compute_api:os-floating-ip-dns:domain:delete": "rule:admin_api"
#
"os_compute_api:os-floating-ip-pools:discoverable": "@"
#
"os_compute_api:os-floating-ip-pools": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ips": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ips:discoverable": "@"
#
"os_compute_api:os-floating-ips-bulk:discoverable": "@"
#
"os_compute_api:os-floating-ips-bulk": "rule:admin_api"
#
"os_compute_api:os-fping:all_tenants": "rule:admin_api"
#
"os_compute_api:os-fping:discoverable": "@"
#
"os_compute_api:os-fping": "rule:admin_or_owner"
#
"os_compute_api:os-hide-server-addresses:discoverable": "@"
#
"os_compute_api:os-hide-server-addresses": "is_admin:False"
#
"os_compute_api:os-hosts:discoverable": "@"
#
"os_compute_api:os-hosts": "rule:admin_api"
#
"os_compute_api:os-hypervisors:discoverable": "@"
#
"os_compute_api:os-hypervisors": "rule:admin_api"
#
"os_compute_api:image-metadata:discoverable": "@"
#
"os_compute_api:image-size:discoverable": "@"
#
"os_compute_api:image-size": "rule:admin_or_owner"
#
"os_compute_api:images:discoverable": "@"
#
"os_compute_api:os-instance-actions:events": "rule:admin_api"
#
"os_compute_api:os-instance-actions": "rule:admin_or_owner"
#
"os_compute_api:os-instance-actions:discoverable": "@"
#
"os_compute_api:os-instance-usage-audit-log": "rule:admin_api"
#
"os_compute_api:os-instance-usage-audit-log:discoverable": "@"
#
"os_compute_api:ips:discoverable": "@"
#
"os_compute_api:ips:show": "rule:admin_or_owner"
#
"os_compute_api:ips:index": "rule:admin_or_owner"
#
"os_compute_api:os-keypairs:discoverable": "@"
#
"os_compute_api:os-keypairs:index": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs:create": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs:delete": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs:show": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs": "rule:admin_or_owner"
#
"os_compute_api:limits:discoverable": "@"
#
"os_compute_api:limits": "rule:admin_or_owner"
#
"os_compute_api:os-lock-server:discoverable": "@"
#
"os_compute_api:os-lock-server:lock": "rule:admin_or_owner"
#
"os_compute_api:os-lock-server:unlock:unlock_override": "rule:admin_api"
#
"os_compute_api:os-lock-server:unlock": "rule:admin_or_owner"
#
"os_compute_api:os-migrate-server:migrate": "rule:admin_api"
#
"os_compute_api:os-migrate-server:discoverable": "@"
#
"os_compute_api:os-migrate-server:migrate_live": "rule:admin_api"
#
"os_compute_api:os-migrations:index": "rule:admin_api"
#
"os_compute_api:os-migrations:discoverable": "@"
#
"os_compute_api:os-multinic": "rule:admin_or_owner"
#
"os_compute_api:os-multinic:discoverable": "@"
#
"os_compute_api:os-multiple-create:discoverable": "@"
#
"os_compute_api:os-networks:discoverable": "@"
#
"os_compute_api:os-networks": "rule:admin_api"
#
"os_compute_api:os-networks:view": "rule:admin_or_owner"
#
"os_compute_api:os-networks-associate": "rule:admin_api"
#
"os_compute_api:os-networks-associate:discoverable": "@"
#
"os_compute_api:os-pause-server:unpause": "rule:admin_or_owner"
#
"os_compute_api:os-pause-server:discoverable": "@"
#
"os_compute_api:os-pause-server:pause": "rule:admin_or_owner"
#
"os_compute_api:os-pci:index": "rule:admin_api"
#
"os_compute_api:os-pci:detail": "rule:admin_api"
#
"os_compute_api:os-pci:pci_servers": "rule:admin_or_owner"
#
"os_compute_api:os-pci:show": "rule:admin_api"
#
"os_compute_api:os-pci:discoverable": "@"
#
"os_compute_api:os-quota-class-sets:show": "is_admin:True or quota_class:%(quota_class)s"
#
"os_compute_api:os-quota-class-sets:discoverable": "@"
#
"os_compute_api:os-quota-class-sets:update": "rule:admin_api"
#
"os_compute_api:os-quota-sets:update": "rule:admin_api"
#
"os_compute_api:os-quota-sets:defaults": "@"
#
"os_compute_api:os-quota-sets:show": "rule:admin_or_owner"
#
"os_compute_api:os-quota-sets:delete": "rule:admin_api"
#
"os_compute_api:os-quota-sets:discoverable": "@"
#
"os_compute_api:os-quota-sets:detail": "rule:admin_api"
#
"os_compute_api:os-remote-consoles": "rule:admin_or_owner"
#
"os_compute_api:os-remote-consoles:discoverable": "@"
#
"os_compute_api:os-rescue:discoverable": "@"
#
"os_compute_api:os-rescue": "rule:admin_or_owner"
#
"os_compute_api:os-scheduler-hints:discoverable": "@"
#
"os_compute_api:os-security-group-default-rules:discoverable": "@"
#
"os_compute_api:os-security-group-default-rules": "rule:admin_api"
#
"os_compute_api:os-security-groups": "rule:admin_or_owner"
#
"os_compute_api:os-security-groups:discoverable": "@"
#
"os_compute_api:os-server-diagnostics": "rule:admin_api"
#
"os_compute_api:os-server-diagnostics:discoverable": "@"
#
"os_compute_api:os-server-external-events:create": "rule:admin_api"
#
"os_compute_api:os-server-external-events:discoverable": "@"
#
"os_compute_api:os-server-groups:discoverable": "@"
#
"os_compute_api:os-server-groups": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:index": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:show": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:create": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:discoverable": "@"
#
"os_compute_api:server-metadata:update_all": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:delete": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:update": "rule:admin_or_owner"
#
"os_compute_api:os-server-password": "rule:admin_or_owner"
#
"os_compute_api:os-server-password:discoverable": "@"
#
"os_compute_api:os-server-tags:delete_all": "@"
#
"os_compute_api:os-server-tags:index": "@"
#
"os_compute_api:os-server-tags:update_all": "@"
#
"os_compute_api:os-server-tags:delete": "@"
#
"os_compute_api:os-server-tags:update": "@"
#
"os_compute_api:os-server-tags:show": "@"
#
"os_compute_api:os-server-tags:discoverable": "@"
#
"os_compute_api:os-server-usage": "rule:admin_or_owner"
#
"os_compute_api:os-server-usage:discoverable": "@"
#
"os_compute_api:servers:index": "rule:admin_or_owner"
#
"os_compute_api:servers:detail": "rule:admin_or_owner"
#
"os_compute_api:servers:detail:get_all_tenants": "rule:admin_api"
#
"os_compute_api:servers:index:get_all_tenants": "rule:admin_api"
#
"os_compute_api:servers:show": "rule:admin_or_owner"
#
"os_compute_api:servers:show:host_status": "rule:admin_api"
#
"os_compute_api:servers:create": "rule:admin_or_owner"
#
"os_compute_api:servers:create:forced_host": "rule:admin_api"
#
"os_compute_api:servers:create:attach_volume": "rule:admin_or_owner"
#
"os_compute_api:servers:create:attach_network": "rule:admin_or_owner"
#
"os_compute_api:servers:delete": "rule:admin_or_owner"
#
"os_compute_api:servers:update": "rule:admin_or_owner"
#
"os_compute_api:servers:confirm_resize": "rule:admin_or_owner"
#
"os_compute_api:servers:revert_resize": "rule:admin_or_owner"
#
"os_compute_api:servers:reboot": "rule:admin_or_owner"
#
"os_compute_api:servers:resize": "rule:admin_or_owner"
#
"os_compute_api:servers:rebuild": "rule:admin_or_owner"
#
"os_compute_api:servers:create_image": "rule:admin_or_owner"
#
"os_compute_api:servers:create_image:allow_volume_backed": "rule:admin_or_owner"
#
"os_compute_api:servers:start": "rule:admin_or_owner"
#
"os_compute_api:servers:stop": "rule:admin_or_owner"
#
"os_compute_api:servers:trigger_crash_dump": "rule:admin_or_owner"
#
"os_compute_api:servers:discoverable": "@"
#
"os_compute_api:servers:migrations:show": "rule:admin_api"
#
"os_compute_api:servers:migrations:force_complete": "rule:admin_api"
#
"os_compute_api:servers:migrations:delete": "rule:admin_api"
#
"os_compute_api:servers:migrations:index": "rule:admin_api"
#
"os_compute_api:server-migrations:discoverable": "@"
#
"os_compute_api:os-services": "rule:admin_api"
#
"os_compute_api:os-services:discoverable": "@"
#
"os_compute_api:os-shelve:shelve": "rule:admin_or_owner"
#
"os_compute_api:os-shelve:unshelve": "rule:admin_or_owner"
#
"os_compute_api:os-shelve:shelve_offload": "rule:admin_api"
#
"os_compute_api:os-shelve:discoverable": "@"
#
"os_compute_api:os-simple-tenant-usage:show": "rule:admin_or_owner"
#
"os_compute_api:os-simple-tenant-usage:list": "rule:admin_api"
#
"os_compute_api:os-simple-tenant-usage:discoverable": "@"
#
"os_compute_api:os-suspend-server:resume": "rule:admin_or_owner"
#
"os_compute_api:os-suspend-server:suspend": "rule:admin_or_owner"
#
"os_compute_api:os-suspend-server:discoverable": "@"
#
"os_compute_api:os-tenant-networks": "rule:admin_or_owner"
#
"os_compute_api:os-tenant-networks:discoverable": "@"
#
"os_compute_api:os-used-limits:discoverable": "@"
#
"os_compute_api:os-used-limits": "rule:admin_api"
#
"os_compute_api:os-user-data:discoverable": "@"
#
"os_compute_api:versions:discoverable": "@"
#
"os_compute_api:os-virtual-interfaces:discoverable": "@"
#
"os_compute_api:os-virtual-interfaces": "rule:admin_or_owner"
#
"os_compute_api:os-volumes:discoverable": "@"
#
"os_compute_api:os-volumes": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:index": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:create": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:show": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:discoverable": "@"
#
"os_compute_api:os-volumes-attachments:update": "rule:admin_api"
#
"os_compute_api:os-volumes-attachments:delete": "rule:admin_or_owner"
rootwrap.conf

The rootwrap.conf file defines configuration values used by the rootwrap script when the Compute service needs to escalate its privileges to those of the root user.

It is also possible to disable the root wrapper and default to sudo only. To do so, configure the disable_rootwrap option in the [workarounds] section of the nova.conf configuration file.
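For example, a nova.conf fragment such as the following disables the root wrapper (this assumes the plural [workarounds] group name that nova defines in code):

```ini
[workarounds]
# Bypass nova-rootwrap and rely on plain sudo instead
disable_rootwrap = True
```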

# Configuration for nova-rootwrap
# This file should be owned by (and only-writeable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/sbin,/usr/local/bin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Dashboard

Dashboard configuration options

The following options are available to configure and customize the behavior of your Dashboard installation.

Dashboard settings

The following options are included in the HORIZON_CONFIG dictionary.

Note

Dashboards are automatically discovered in two ways:

  1. By adding a configuration file to the openstack_dashboard/local/enabled directory. This is the default way.
  2. By traversing Django’s list of INSTALLED_APPS and importing any files that have the name dashboard.py and include code to register themselves as a Dashboard.

Warning

In Dashboard configuration, we suggest that you do not use the dashboards and default_dashboard settings. If you plan on having more than one dashboard, please specify their order using the Pluggable settings.

Description of standard Dashboard configuration options
Configuration option = Default value Description
ajax_queue_limit = 10 The maximum number of simultaneous AJAX connections the dashboard may try to make.
ajax_poll_interval = 2500 How frequently resources in transition states should be polled for updates. Expressed in milliseconds.
angular_modules = [] A list of AngularJS modules to be loaded when Angular bootstraps.
auto_fade_alerts = {'delay': [3000], 'fade_duration': [1500], 'types': []} If provided, will auto-fade the alert types specified. Valid alert types include alert-default, alert-success, alert-info, alert-warning, alert-danger. Can also define the delay before the alert fades and the fade out duration.
bug_url = None Displays a “Report Bug” link in the site header which links to the value of this setting, ideally a URL containing information on how to report issues.
dashboards = None If a list of dashboard slugs is provided in this setting, the supplied ordering is applied to the list of discovered dashboards.
default_dashboard = None The slug of the dashboard which should act as the fallback dashboard whenever a user logs in or is otherwise redirected to an ambiguous location.
disable_password_reveal = False Setting this to True will disable the reveal button for password fields, including on the login form.
exceptions = {'unauthorized': [], 'not_found': [], 'recoverable': []} Classes of exceptions which the Dashboard’s centralized exception handling should be aware of.
help_url = None Displays a “Help” link in the site header which links to the value of this setting, ideally a URL containing help information.
js_files = [] A list of JavaScript source files to be included in the compressed set of files that are loaded on every page.
js_spec_files = [] A list of JavaScript spec files to include for integration with the Jasmine spec runner.
modal_backdrop = static Controls how bootstrap backdrop element outside of modals looks and feels. Valid values are true, false and static.
password_autocomplete = off Controls whether browser autocompletion should be enabled on the login form. Valid values are on and off.
password_validator = {'regex': '.*', 'help_text': _("Password is not accepted")} A dictionary, containing a regular expression used for password validation and help text, which will be displayed if the password does not pass validation. The help text should describe the password requirements if there are any.
simple_ip_management = True Enable or disable simplified floating IP address management.
user_home = settings.LOGIN_REDIRECT_URL Either a literal URL path, such as the default, or Python’s dotted string notation representing a function which evaluates the URL the user should be redirected to based on the attributes of the user.
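Because these options live in the HORIZON_CONFIG dictionary, overriding them means editing that dictionary in your settings module. A minimal sketch (the values shown are the defaults from the table above; treat the file location as deployment-specific):

```python
# Sketch: overriding HORIZON_CONFIG entries in a Django settings module.
HORIZON_CONFIG = {
    "ajax_queue_limit": 10,        # max simultaneous AJAX connections
    "ajax_poll_interval": 2500,    # poll transition states every 2.5 seconds
    "disable_password_reveal": False,
    "modal_backdrop": "static",
}
```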
Django settings

The following table shows a few key Django settings you should be aware of for the most basic of deployments.

Warning

This is not meant to be anywhere near a complete list of settings for Django. You should always consult the main Django documentation, especially with regards to deployment considerations and security best-practices.

Description of the Dashboard’s Django configuration options
Configuration option = Default value Description
ALLOWED_HOSTS = ['localhost'] List of names or IP addresses of the hosts running the dashboard.
DEBUG and TEMPLATE_DEBUG = True Controls whether unhandled exceptions should generate a generic 500 response or present the user with a pretty-formatted debug information page.
SECRET_KEY A unique and secret value for your deployment. Unless you are running a load-balancer with multiple Dashboard installations behind it, each Dashboard instance should have a unique secret key.
SECURE_PROXY_SSL_HEADER, CSRF_COOKIE_SECURE and SESSION_COOKIE_SECURE These three should be configured if you are deploying the Dashboard with SSL. The values indicated in the default openstack_dashboard/local/local_settings.py.example file are generally safe to use. When CSRF_COOKIE_SECURE or SESSION_COOKIE_SECURE are set to True, these attributes help protect the session cookies from cross-site scripting.
ADD_INSTALLED_APPS A list of Django applications to be prepended to the INSTALLED_APPS setting. Allows extending the list of installed applications without having to override it completely.
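Taken together, a local_settings.py excerpt for an SSL-terminated deployment might look like the following sketch (the hostname is hypothetical, and the proxy header assumes SSL is terminated by a proxy that sets X-Forwarded-Proto):

```python
# Sketch: hardening Django settings for a Dashboard served over SSL.
DEBUG = False                              # never serve debug pages in production
ALLOWED_HOSTS = ['dashboard.example.com']  # hypothetical public hostname
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
CSRF_COOKIE_SECURE = True                  # send the CSRF cookie over HTTPS only
SESSION_COOKIE_SECURE = True               # send the session cookie over HTTPS only
```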
OpenStack settings (partial)

The following settings inform the Dashboard of information about the other OpenStack projects which are part of the same cloud and control the behavior of specific dashboards, panels, API calls, and so on.

Description of the Dashboard’s OpenStack configuration options
Configuration option = Default Description
AUTHENTICATION_URLS = ['openstack_auth.urls'] A list of modules from which to collate authentication URLs.
API_RESULT_LIMIT = 1000 The maximum number of objects (for example, Glance images) to display on a single page before providing a paging element to paginate the results.
API_RESULT_PAGE_SIZE = 20 Similar to API_RESULT_LIMIT. This setting controls the number of items to be shown per page if API pagination support for this exists.
AVAILABLE_REGIONS = None A list of tuples which defines multiple regions.
AVAILABLE_THEMES = [ ('default', 'Default', 'themes/default'), ('material', 'Material', 'themes/material') ] Configure this setting to tell horizon which theme to use. Horizon contains two pre-configured themes. These themes are 'default' and 'material'. Horizon uses three tuples in a list to define multiple themes. The tuple format is ('{{ theme_name }}', '{{ theme_label }}', '{{ theme_path }}'). Configure theme_name to define the directory that customized themes are collected into. The theme-label is a user-facing label shown in the theme picker. Horizon uses theme path as the static root of the theme. If you want to include content other than static files in a theme directory, but do not wish the content served up at /{{ THEME_COLLECTION_DIR }}/{{ theme_name }}, create a subdirectory named static. If your theme folder contains a subdirectory named static, then horizon uses static/custom/static as the root for content served at /static/custom. The static root of the theme folder must always contain a _variables.scss file and a _styles.scss file. These two files must contain or import all the styles, bootstrap, and horizon-specific variables used in the GUI.
CONSOLE_TYPE = AUTO The type of in-browser console used to access the virtual machines. Valid values are AUTO, VNC, SPICE, RDP, SERIAL, and None. None deactivates the in-browser console and is available in Juno. SERIAL is available since Kilo.
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024 The size of the chunk, in bytes, for downloading objects from the Object Storage service.
INSTANCE_LOG_LENGTH = 35 The number of lines displayed for the log of an instance. Valid value must be a positive integer.
CREATE_INSTANCE_FLAVOR_SORT = {'key':'ram'} When launching a new instance the default flavor is sorted by RAM usage in ascending order. You can customize the sort order by id, name, ram, disk and vcpus. You can also insert any custom callback function and also provide a flag for reverse sort.
DEFAULT_THEME = default This setting configures which theme horizon uses if a theme has not yet been selected in the theme picker. This also sets the cookie value. This value represents the theme_name key used when there are multiple themes available. Configure this setting inside AVAILABLE_THEMES to make use of this theme.
DROPDOWN_MAX_ITEMS = 30 The maximum number of items displayed in a dropdown.
ENFORCE_PASSWORD_CHECK = False Displays an Admin Password field on the ‘Change Password’ form to verify that it is indeed the admin logged-in who wants to change the password.
IMAGES_LIST_FILTER_TENANTS = None A list of dictionaries to add optional categories to the image fixed filters in the Images panel, based on project ownership.
IMAGE_RESERVED_CUSTOM_PROPERTIES = [] A list of image custom property keys that should not be displayed in the Update Metadata tree.
LAUNCH_INSTANCE_DEFAULTS = {"config_drive": False} A dictionary of settings which can be used to provide the default values for properties found in the Launch Instance modal.
MESSAGES_PATH = None The absolute path to the directory where message files are collected.
OPENSTACK_API_VERSIONS = {"data-processing": 1.1, "identity": 2.0, "volume": 2, "compute": 2} Use this setting to force the dashboard to use a specific API version for a given service API.
OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False Enables or disables the instance action ‘Retrieve password’ allowing password retrieval from metadata service.
OPENSTACK_ENDPOINT_TYPE = "publicURL" A string specifying the endpoint type to use for the endpoints in the Identity service catalog.
OPENSTACK_HOST = "127.0.0.1" The hostname of the Identity service server used for authentication if you only have one region. This is often the only setting that needs to be set for a basic deployment.
OPENSTACK_HYPERVISOR_FEATURES = {'can_set_mount_point': False, 'can_set_password': False, 'requires_keypair': False,} A dictionary of settings identifying the capabilities of the Compute service's hypervisor.
OPENSTACK_IMAGE_BACKEND = {'image_formats': [ ('', _('Select format')), ('aki', _('AKI - Amazon Kernel Image')), ('ami', _('AMI - Amazon Machine Image')), ('ari', _('ARI - Amazon Ramdisk Image')), ('docker', _('Docker')), ('iso', _('ISO - Optical Disk Image')), ('qcow2', _('QCOW2 - QEMU Emulator')), ('raw', _('Raw')), ('vdi', _('VDI')), ('vhd', _('VHD')), ('vmdk', _('VMDK'))]} Customizes features related to the Image service, such as the list of supported image formats.
IMAGE_CUSTOM_PROPERTY_TITLES = { "architecture": _("Architecture"), "kernel_id": _("Kernel ID"), "ramdisk_id": _("Ramdisk ID"), "image_state": _("Euca2ools state"), "project_id": _("Project ID"), "image_type": _("Image Type")} Customizes the titles for image custom property attributes that appear on image detail pages.
HORIZON_IMAGES_ALLOW_UPLOAD = True Enables/Disables local uploads to prevent filling up the disk on the dashboard server.
OPENSTACK_KEYSTONE_BACKEND = {'name': 'native', 'can_edit_user': True, 'can_edit_project': True} A dictionary of settings identifying the capabilities of the auth backend for the Identity service.
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" Overrides the default domain used when running on a single-domain model with version 3 of the Identity service. All entities will be created in the default domain.
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_" The role to be assigned to a user when they are added to a project. The value must correspond to an existing role name in the Identity service. In general, the value should match the member_role_name defined in keystone.conf.
OPENSTACK_KEYSTONE_ADMIN_ROLES = ["admin"] The list of roles that have administrator privileges in the OpenStack installation. This check is very basic and essentially only works with versions 2 and 3 of the Identity service with the default policy file.
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False When enabled, a user will be required to enter the Domain name in addition to username for login. Enabled if running on a multi-domain model.
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST The full URL for the Identity service endpoint used for authentication.
OPENSTACK_KEYSTONE_FEDERATION_MANAGEMENT = False Enables/Disables panels that provide the ability for users to manage Identity Providers (IdPs) and establish a set of rules to map federation protocol attributes to Identity API attributes. Requires version 3 and later of the Identity API.
WEBSSO_ENABLED = False Enables/Disables Identity service web single-sign-on. Requires Identity service version 3 and Django OpenStack Auth version 1.2.0 or later.
WEBSSO_INITIAL_CHOICE = "credentials" Determines the default authentication mechanism. When a user lands on the login page, this is the first choice they will see.
WEBSSO_CHOICES = ( ("credentials", _("Keystone Credentials")), ("oidc", _("OpenID Connect")), ("saml2", _("Security Assertion Markup Language"))) List of authentication mechanisms available to the user.
WEBSSO_IDP_MAPPING = {} A dictionary of specific identity provider and federation protocol combinations.
OPENSTACK_CINDER_FEATURES = {'enable_backup': False} A dictionary of settings which can be used to enable optional services provided by the Block storage service. Currently, only the backup service is available.
OPENSTACK_HEAT_STACK = {'enable_user_pass': True} A dictionary of settings to use with heat stacks. Currently, the only setting available is enable_user_pass, which can be used to disable the password field while launching the stack.
OPENSTACK_NEUTRON_NETWORK = { 'enable_router': True, 'enable_distributed_router': False, 'enable_ha_router': False, 'enable_lb': True, 'enable_quotas': False, 'enable_firewall': True, 'enable_vpn': True, 'profile_support': None, 'supported_provider_types': ["*"], 'supported_vnic_types': ["*"], 'segmentation_id_range': {}, 'enable_fip_topology_check': True, 'default_ipv4_subnet_pool_label': None, 'default_ipv6_subnet_pool_label': None,} A dictionary of settings which can be used to enable optional services provided by the Networking service and configure specific features.
OPENSTACK_SSL_CACERT = None The CA certificate to be used for SSL verification. When set to None, the default certificate on the system is used.
OPENSTACK_SSL_NO_VERIFY = False Enable/Disable SSL certificate checks in the OpenStack clients. Useful for self-signed certificates.
OPENSTACK_TOKEN_HASH_ALGORITHM = "md5" The hash algorithm to use for authentication tokens.
OPENSTACK_TOKEN_HASH_ENABLED = True Hashing tokens from the Identity service keeps the Dashboard session data smaller, but it does not work in some cases when using PKI tokens. Uncomment this value and set it to False if using PKI tokens and there are 401 errors due to token hashing.
POLICY_FILES = {'identity': 'keystone_policy.json', 'compute': 'nova_policy.json'} The mapping of the contents of POLICY_FILES_PATH to service types. When policy.json files are added to POLICY_FILES_PATH, they should be included here too.
POLICY_FILES_PATH = os.path.join(ROOT_PATH, "conf") Where service based policy files are located.
SESSION_TIMEOUT = 3600 A method to supersede the token timeout with a shorter dashboard session timeout in seconds. For example, if your token expires in 60 minutes, a value of 1800 will log users out after 30 minutes.
SAHARA_AUTO_IP_ALLOCATION_ENABLED = False Notifies the Data processing system whether or not automatic IP allocation is enabled. Set to True if you are running Compute Networking with auto_assign_floating_ip = True.
TROVE_ADD_USER_PERMS and TROVE_ADD_DATABASE_PERMS = [] Database service user and database extension support.
WEBROOT = / The location where the access to the dashboard is configured in the web server.
STATIC_URL = /static/ URL pointing to files collected into STATIC_ROOT. The value must end in "/".
THEME_COLLECTION_DIR = themes Horizon collects the available themes into a static directory based on this variable setting. For example, the default theme is accessible from /{{ STATIC_URL }}/themes/default.
THEME_COOKIE_NAME = themes This setting determines which cookie key horizon sets to store the current theme. Cookie keys expire after one year elapses.
DISALLOW_IFRAME_EMBED = True This setting can be used to defend against Clickjacking and prevent the Dashboard from being embedded within an iframe.
OPENSTACK_NOVA_EXTENSIONS_BLACKLIST = [] Ignore all listed Compute service extensions, and behave as if they were unsupported. Can be used to selectively disable certain costly extensions for performance reasons.
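For a basic single-region deployment, often only OPENSTACK_HOST needs to change, and the related defaults then derive from it. A sketch (the hostname "controller" is a placeholder):

```python
# Sketch: minimal OpenStack settings for a one-region Dashboard deployment.
OPENSTACK_HOST = "controller"  # placeholder Identity service hostname
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
```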
Pluggable settings

The following keys can be used in any pluggable settings file.

Description of the Dashboard’s pluggable configuration options
Configuration option Description
ADD_EXCEPTIONS A dictionary of exception classes to be added to HORIZON['exceptions'].
ADD_INSTALLED_APPS A list of applications to be prepended to INSTALLED_APPS. This is needed to expose static files from a plugin.
ADD_ANGULAR_MODULES A list of AngularJS modules to be loaded when Angular bootstraps.
ADD_JS_FILES A list of JavaScript source files to be included in the compressed set of files that are loaded on every page.
ADD_JS_SPEC_FILES A list of JavaScript spec files to include for integration with the Jasmine spec runner.
ADD_SCSS_FILES A list of SCSS files to be included in the compressed set of files that are loaded on every page.
AUTO_DISCOVER_STATIC_FILES If set to True, JavaScript files and static Angular HTML template files will be automatically discovered from the static folder in each app listed in ADD_INSTALLED_APPS.
DISABLED If set to True, this settings file will not be added to the settings.
UPDATE_HORIZON_CONFIG A dictionary of values that will replace the values in HORIZON_CONFIG.
Pluggable settings for dashboards

The following keys are specific to register a dashboard.

Description of the pluggable dashboards configuration options
Configuration option Description
DASHBOARD Required. The slug of the dashboard to be added to HORIZON['dashboards'].
DEFAULT If set to True, this dashboard will be set as the default dashboard.
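Combining the common pluggable keys with the dashboard-specific ones, a settings file registering a dashboard could look like this sketch (all names are hypothetical; the file would live in the openstack_dashboard/local/enabled directory described earlier):

```python
# Sketch: _50_mydashboard.py, a pluggable settings file for a dashboard.
DASHBOARD = 'mydashboard'             # slug added to HORIZON['dashboards']
ADD_INSTALLED_APPS = ['mydashboard']  # expose the plugin's static files
DEFAULT = False                       # do not make this the default dashboard
DISABLED = False                      # set to True to skip this settings file
```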
Pluggable settings for panels

The following keys are specific to register or remove a panel.

Description of the pluggable panels configuration options
Configuration option Description
PANEL Required. The slug of the panel to be added to HORIZON_CONFIG.
PANEL_DASHBOARD Required. The slug of the dashboard the PANEL is associated with.
PANEL_GROUP The slug of the panel group the PANEL is associated with. If you want the panel to show up without a panel group, use the panel group default.
DEFAULT_PANEL If set, it will update the default panel of the PANEL_DASHBOARD.
ADD_PANEL Python panel class of the PANEL to be added.
REMOVE_PANEL If set to True, the PANEL will be removed from PANEL_DASHBOARD/PANEL_GROUP.
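Similarly, a panel can be registered with a settings file like the following sketch (the slugs and class path are hypothetical):

```python
# Sketch: _60_mypanel.py, a pluggable settings file registering a panel.
PANEL = 'mypanel'            # slug added to HORIZON_CONFIG
PANEL_DASHBOARD = 'project'  # dashboard to attach the panel to
PANEL_GROUP = 'default'      # show the panel without a named panel group
ADD_PANEL = 'mydashboard.content.mypanel.panel.MyPanel'  # hypothetical class path
```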
Pluggable settings for panel groups

The following keys are specific to register a panel group.

Description of the pluggable panel groups configuration options
Configuration option Description
PANEL_GROUP Required. The slug of the panel group to be added to HORIZON_CONFIG.
PANEL_GROUP_NAME Required. The display name of the PANEL_GROUP.
PANEL_GROUP_DASHBOARD Required. The slug of the dashboard the PANEL_GROUP is associated with.

Dashboard sample configuration files

Find the following files in /etc/openstack-dashboard.

keystone_policy.json

The keystone_policy.json file defines additional access controls for the dashboard that apply to the Identity service.

Note

The keystone_policy.json file must match the Identity service /etc/keystone/policy.json policy file.

{
    "admin_required": [
        [
            "role:admin"
        ],
        [
            "is_admin:1"
        ]
    ],
    "service_role": [
        [
            "role:service"
        ]
    ],
    "service_or_admin": [
        [
            "rule:admin_required"
        ],
        [
            "rule:service_role"
        ]
    ],
    "owner": [
        [
            "user_id:%(user_id)s"
        ]
    ],
    "admin_or_owner": [
        [
            "rule:admin_required"
        ],
        [
            "rule:owner"
        ]
    ],
    "default": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_service": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_services": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_service": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_service": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_service": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_endpoint": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_endpoints": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_endpoint": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_endpoint": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_endpoint": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_domain": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_domains": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_domain": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_domain": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_domain": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_project": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_projects": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_user_projects": [
        [
            "rule:admin_or_owner"
        ]
    ],
    "identity:create_project": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_project": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_project": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_user": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_users": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_user": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_user": [
        [
            "rule:admin_or_owner"
        ]
    ],
    "identity:delete_user": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_groups": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_groups_for_user": [
        [
            "rule:admin_or_owner"
        ]
    ],
    "identity:create_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_users_in_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:remove_user_from_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:check_user_in_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:add_user_to_group": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_credential": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_credentials": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_credential": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_credential": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_credential": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_role": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_roles": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_role": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_role": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_role": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:check_grant": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_grants": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_grant": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:revoke_grant": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_role_assignments": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:get_policy": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:list_policies": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:create_policy": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:update_policy": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:delete_policy": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:check_token": [
        [
            "rule:admin_required"
        ]
    ],
    "identity:validate_token": [
        [
            "rule:service_or_admin"
        ]
    ],
    "identity:validate_token_head": [
        [
            "rule:service_or_admin"
        ]
    ],
    "identity:revocation_list": [
        [
            "rule:service_or_admin"
        ]
    ],
    "identity:revoke_token": [
        [
            "rule:admin_or_owner"
        ]
    ],
    "identity:create_trust": [
        [
            "user_id:%(trust.trustor_user_id)s"
        ]
    ],
    "identity:get_trust": [
        [
            "rule:admin_or_owner"
        ]
    ],
    "identity:list_trusts": [
        [
            "@"
        ]
    ],
    "identity:list_roles_for_trust": [
        [
            "@"
        ]
    ],
    "identity:check_role_for_trust": [
        [
            "@"
        ]
    ],
    "identity:get_role_for_trust": [
        [
            "@"
        ]
    ],
    "identity:delete_trust": [
        [
            "@"
        ]
    ]
}
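The list-based syntax shown above encodes an OR of alternatives, where each alternative is an AND of individual checks. The following is a minimal hand-rolled sketch of that evaluation order, for illustration only; the real engine is oslo.policy:

```python
def evaluate(rule, passed_checks):
    """rule: a list of alternatives, each a list of check strings.
    passed_checks: the set of checks that hold for this request.
    The rule passes if ANY alternative has ALL of its checks pass."""
    return any(all(c in passed_checks for c in alt) for alt in rule)

# "admin_or_owner" from the policy file above: admin OR owner suffices.
admin_or_owner = [["rule:admin_required"], ["rule:owner"]]
assert evaluate(admin_or_owner, {"rule:owner"})
assert not evaluate(admin_or_owner, {"rule:service_role"})
```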
nova_policy.json

The nova_policy.json file defines additional access controls for the dashboard that apply to the Compute service.

Note

The nova_policy.json file must match the Compute /etc/nova/policy.json policy file.

{
    "context_is_admin":  "role:admin",
    "admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",

    "cells_scheduler_filter:TargetCellFilter": "is_admin:True",

    "compute:create": "",
    "compute:create:attach_network": "",
    "compute:create:attach_volume": "",
    "compute:create:forced_host": "is_admin:True",

    "compute:get": "",
    "compute:get_all": "",
    "compute:get_all_tenants": "is_admin:True",

    "compute:update": "",

    "compute:get_instance_metadata": "",
    "compute:get_all_instance_metadata": "",
    "compute:get_all_instance_system_metadata": "",
    "compute:update_instance_metadata": "",
    "compute:delete_instance_metadata": "",

    "compute:get_instance_faults": "",
    "compute:get_diagnostics": "",
    "compute:get_instance_diagnostics": "",

    "compute:start": "rule:admin_or_owner",
    "compute:stop": "rule:admin_or_owner",

    "compute:get_lock": "",
    "compute:lock": "rule:admin_or_owner",
    "compute:unlock": "rule:admin_or_owner",
    "compute:unlock_override": "rule:admin_api",

    "compute:get_vnc_console": "",
    "compute:get_spice_console": "",
    "compute:get_rdp_console": "",
    "compute:get_serial_console": "",
    "compute:get_mks_console": "",
    "compute:get_console_output": "",

    "compute:reset_network": "",
    "compute:inject_network_info": "",
    "compute:add_fixed_ip": "",
    "compute:remove_fixed_ip": "",

    "compute:attach_volume": "",
    "compute:detach_volume": "",
    "compute:swap_volume": "",

    "compute:attach_interface": "",
    "compute:detach_interface": "",

    "compute:set_admin_password": "",

    "compute:rescue": "",
    "compute:unrescue": "",

    "compute:suspend": "",
    "compute:resume": "",

    "compute:pause": "",
    "compute:unpause": "",

    "compute:shelve": "",
    "compute:shelve_offload": "",
    "compute:unshelve": "",

    "compute:snapshot": "",
    "compute:snapshot_volume_backed": "",
    "compute:backup": "",

    "compute:resize": "",
    "compute:confirm_resize": "",
    "compute:revert_resize": "",

    "compute:rebuild": "",
    "compute:reboot": "",
    "compute:delete": "rule:admin_or_owner",
    "compute:soft_delete": "rule:admin_or_owner",
    "compute:force_delete": "rule:admin_or_owner",

    "compute:security_groups:add_to_instance": "",
    "compute:security_groups:remove_from_instance": "",

    "compute:delete": "",
    "compute:soft_delete": "",
    "compute:force_delete": "",
    "compute:restore": "",

    "compute:volume_snapshot_create": "",
    "compute:volume_snapshot_delete": "",

    "admin_api": "is_admin:True",
    "compute_extension:accounts": "rule:admin_api",
    "compute_extension:admin_actions": "rule:admin_api",
    "compute_extension:admin_actions:pause": "rule:admin_or_owner",
    "compute_extension:admin_actions:unpause": "rule:admin_or_owner",
    "compute_extension:admin_actions:suspend": "rule:admin_or_owner",
    "compute_extension:admin_actions:resume": "rule:admin_or_owner",
    "compute_extension:admin_actions:lock": "rule:admin_or_owner",
    "compute_extension:admin_actions:unlock": "rule:admin_or_owner",
    "compute_extension:admin_actions:resetNetwork": "rule:admin_api",
    "compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
    "compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
    "compute_extension:admin_actions:migrateLive": "rule:admin_api",
    "compute_extension:admin_actions:resetState": "rule:admin_api",
    "compute_extension:admin_actions:migrate": "rule:admin_api",
    "compute_extension:aggregates": "rule:admin_api",
    "compute_extension:agents": "rule:admin_api",
    "compute_extension:attach_interfaces": "",
    "compute_extension:baremetal_nodes": "rule:admin_api",
    "compute_extension:cells": "rule:admin_api",
    "compute_extension:cells:create": "rule:admin_api",
    "compute_extension:cells:delete": "rule:admin_api",
    "compute_extension:cells:update": "rule:admin_api",
    "compute_extension:cells:sync_instances": "rule:admin_api",
    "compute_extension:certificates": "",
    "compute_extension:cloudpipe": "rule:admin_api",
    "compute_extension:cloudpipe_update": "rule:admin_api",
    "compute_extension:config_drive": "",
    "compute_extension:console_output": "",
    "compute_extension:consoles": "",
    "compute_extension:createserverext": "",
    "compute_extension:deferred_delete": "",
    "compute_extension:disk_config": "",
    "compute_extension:evacuate": "rule:admin_api",
    "compute_extension:extended_server_attributes": "rule:admin_api",
    "compute_extension:extended_status": "",
    "compute_extension:extended_availability_zone": "",
    "compute_extension:extended_ips": "",
    "compute_extension:extended_ips_mac": "",
    "compute_extension:extended_vif_net": "",
    "compute_extension:extended_volumes": "",
    "compute_extension:fixed_ips": "rule:admin_api",
    "compute_extension:flavor_access": "",
    "compute_extension:flavor_access:addTenantAccess": "rule:admin_api",
    "compute_extension:flavor_access:removeTenantAccess": "rule:admin_api",
    "compute_extension:flavor_disabled": "",
    "compute_extension:flavor_rxtx": "",
    "compute_extension:flavor_swap": "",
    "compute_extension:flavorextradata": "",
    "compute_extension:flavorextraspecs:index": "",
    "compute_extension:flavorextraspecs:show": "",
    "compute_extension:flavorextraspecs:create": "rule:admin_api",
    "compute_extension:flavorextraspecs:update": "rule:admin_api",
    "compute_extension:flavorextraspecs:delete": "rule:admin_api",
    "compute_extension:flavormanage": "rule:admin_api",
    "compute_extension:floating_ip_dns": "",
    "compute_extension:floating_ip_pools": "",
    "compute_extension:floating_ips": "",
    "compute_extension:floating_ips_bulk": "rule:admin_api",
    "compute_extension:fping": "",
    "compute_extension:fping:all_tenants": "rule:admin_api",
    "compute_extension:hide_server_addresses": "is_admin:False",
    "compute_extension:hosts": "rule:admin_api",
    "compute_extension:hypervisors": "rule:admin_api",
    "compute_extension:image_size": "",
    "compute_extension:instance_actions": "",
    "compute_extension:instance_actions:events": "rule:admin_api",
    "compute_extension:instance_usage_audit_log": "rule:admin_api",
    "compute_extension:keypairs": "",
    "compute_extension:keypairs:index": "",
    "compute_extension:keypairs:show": "",
    "compute_extension:keypairs:create": "",
    "compute_extension:keypairs:delete": "",
    "compute_extension:multinic": "",
    "compute_extension:networks": "rule:admin_api",
    "compute_extension:networks:view": "",
    "compute_extension:networks_associate": "rule:admin_api",
    "compute_extension:os-tenant-networks": "",
    "compute_extension:quotas:show": "",
    "compute_extension:quotas:update": "rule:admin_api",
    "compute_extension:quotas:delete": "rule:admin_api",
    "compute_extension:quota_classes": "",
    "compute_extension:rescue": "",
    "compute_extension:security_group_default_rules": "rule:admin_api",
    "compute_extension:security_groups": "",
    "compute_extension:server_diagnostics": "rule:admin_api",
    "compute_extension:server_groups": "",
    "compute_extension:server_password": "",
    "compute_extension:server_usage": "",
    "compute_extension:services": "rule:admin_api",
    "compute_extension:shelve": "",
    "compute_extension:shelveOffload": "rule:admin_api",
    "compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
    "compute_extension:simple_tenant_usage:list": "rule:admin_api",
    "compute_extension:unshelve": "",
    "compute_extension:users": "rule:admin_api",
    "compute_extension:virtual_interfaces": "",
    "compute_extension:virtual_storage_arrays": "",
    "compute_extension:volumes": "",
    "compute_extension:volume_attachments:index": "",
    "compute_extension:volume_attachments:show": "",
    "compute_extension:volume_attachments:create": "",
    "compute_extension:volume_attachments:update": "",
    "compute_extension:volume_attachments:delete": "",
    "compute_extension:volumetypes": "",
    "compute_extension:availability_zone:list": "",
    "compute_extension:availability_zone:detail": "rule:admin_api",
    "compute_extension:used_limits_for_admin": "rule:admin_api",
    "compute_extension:migrations:index": "rule:admin_api",
    "compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",
    "compute_extension:os-assisted-volume-snapshots:delete": "rule:admin_api",
    "compute_extension:console_auth_tokens": "rule:admin_api",
    "compute_extension:os-server-external-events:create": "rule:admin_api",

    "network:get_all": "",
    "network:get": "",
    "network:create": "",
    "network:delete": "",
    "network:associate": "",
    "network:disassociate": "",
    "network:get_vifs_by_instance": "",
    "network:allocate_for_instance": "",
    "network:deallocate_for_instance": "",
    "network:validate_networks": "",
    "network:get_instance_uuids_by_ip_filter": "",
    "network:get_instance_id_by_floating_address": "",
    "network:setup_networks_on_host": "",
    "network:get_backdoor_port": "",

    "network:get_floating_ip": "",
    "network:get_floating_ip_pools": "",
    "network:get_floating_ip_by_address": "",
    "network:get_floating_ips_by_project": "",
    "network:get_floating_ips_by_fixed_address": "",
    "network:allocate_floating_ip": "",
    "network:associate_floating_ip": "",
    "network:disassociate_floating_ip": "",
    "network:release_floating_ip": "",
    "network:migrate_instance_start": "",
    "network:migrate_instance_finish": "",

    "network:get_fixed_ip": "",
    "network:get_fixed_ip_by_address": "",
    "network:add_fixed_ip_to_instance": "",
    "network:remove_fixed_ip_from_instance": "",
    "network:add_network_to_project": "",
    "network:get_instance_nw_info": "",

    "network:get_dns_domains": "",
    "network:add_dns_entry": "",
    "network:modify_dns_entry": "",
    "network:delete_dns_entry": "",
    "network:get_dns_entries_by_address": "",
    "network:get_dns_entries_by_name": "",
    "network:create_private_dns_domain": "",
    "network:create_public_dns_domain": "",
    "network:delete_dns_domain": "",
    "network:attach_external_network": "rule:admin_api",
    "network:get_vif_by_mac_address": "",

    "os_compute_api:servers:detail:get_all_tenants": "is_admin:True",
    "os_compute_api:servers:index:get_all_tenants": "is_admin:True",
    "os_compute_api:servers:confirm_resize": "",
    "os_compute_api:servers:create": "",
    "os_compute_api:servers:create:attach_network": "",
    "os_compute_api:servers:create:attach_volume": "",
    "os_compute_api:servers:create:forced_host": "rule:admin_api",
    "os_compute_api:servers:delete": "",
    "os_compute_api:servers:update": "",
    "os_compute_api:servers:detail": "",
    "os_compute_api:servers:index": "",
    "os_compute_api:servers:reboot": "",
    "os_compute_api:servers:rebuild": "",
    "os_compute_api:servers:resize": "",
    "os_compute_api:servers:revert_resize": "",
    "os_compute_api:servers:show": "",
    "os_compute_api:servers:create_image": "",
    "os_compute_api:servers:create_image:allow_volume_backed": "",
    "os_compute_api:servers:start": "rule:admin_or_owner",
    "os_compute_api:servers:stop": "rule:admin_or_owner",
    "os_compute_api:os-access-ips:discoverable": "",
    "os_compute_api:os-access-ips": "",
    "os_compute_api:os-admin-actions": "rule:admin_api",
    "os_compute_api:os-admin-actions:discoverable": "",
    "os_compute_api:os-admin-actions:reset_network": "rule:admin_api",
    "os_compute_api:os-admin-actions:inject_network_info": "rule:admin_api",
    "os_compute_api:os-admin-actions:reset_state": "rule:admin_api",
    "os_compute_api:os-admin-password": "",
    "os_compute_api:os-admin-password:discoverable": "",
    "os_compute_api:os-aggregates:discoverable": "",
    "os_compute_api:os-aggregates:index": "rule:admin_api",
    "os_compute_api:os-aggregates:create": "rule:admin_api",
    "os_compute_api:os-aggregates:show": "rule:admin_api",
    "os_compute_api:os-aggregates:update": "rule:admin_api",
    "os_compute_api:os-aggregates:delete": "rule:admin_api",
    "os_compute_api:os-aggregates:add_host": "rule:admin_api",
    "os_compute_api:os-aggregates:remove_host": "rule:admin_api",
    "os_compute_api:os-aggregates:set_metadata": "rule:admin_api",
    "os_compute_api:os-agents": "rule:admin_api",
    "os_compute_api:os-agents:discoverable": "",
    "os_compute_api:os-attach-interfaces": "",
    "os_compute_api:os-attach-interfaces:discoverable": "",
    "os_compute_api:os-baremetal-nodes": "rule:admin_api",
    "os_compute_api:os-baremetal-nodes:discoverable": "",
    "os_compute_api:os-block-device-mapping-v1:discoverable": "",
    "os_compute_api:os-cells": "rule:admin_api",
    "os_compute_api:os-cells:create": "rule:admin_api",
    "os_compute_api:os-cells:delete": "rule:admin_api",
    "os_compute_api:os-cells:update": "rule:admin_api",
    "os_compute_api:os-cells:sync_instances": "rule:admin_api",
    "os_compute_api:os-cells:discoverable": "",
    "os_compute_api:os-certificates:create": "",
    "os_compute_api:os-certificates:show": "",
    "os_compute_api:os-certificates:discoverable": "",
    "os_compute_api:os-cloudpipe": "rule:admin_api",
    "os_compute_api:os-cloudpipe:discoverable": "",
    "os_compute_api:os-config-drive": "",
    "os_compute_api:os-consoles:discoverable": "",
    "os_compute_api:os-consoles:create": "",
    "os_compute_api:os-consoles:delete": "",
    "os_compute_api:os-consoles:index": "",
    "os_compute_api:os-consoles:show": "",
    "os_compute_api:os-console-output:discoverable": "",
    "os_compute_api:os-console-output": "",
    "os_compute_api:os-remote-consoles": "",
    "os_compute_api:os-remote-consoles:discoverable": "",
    "os_compute_api:os-create-backup:discoverable": "",
    "os_compute_api:os-create-backup": "rule:admin_or_owner",
    "os_compute_api:os-deferred-delete": "",
    "os_compute_api:os-deferred-delete:discoverable": "",
    "os_compute_api:os-disk-config": "",
    "os_compute_api:os-disk-config:discoverable": "",
    "os_compute_api:os-evacuate": "rule:admin_api",
    "os_compute_api:os-evacuate:discoverable": "",
    "os_compute_api:os-extended-server-attributes": "rule:admin_api",
    "os_compute_api:os-extended-server-attributes:discoverable": "",
    "os_compute_api:os-extended-status": "",
    "os_compute_api:os-extended-status:discoverable": "",
    "os_compute_api:os-extended-availability-zone": "",
    "os_compute_api:os-extended-availability-zone:discoverable": "",
    "os_compute_api:extensions": "",
    "os_compute_api:extension_info:discoverable": "",
    "os_compute_api:os-extended-volumes": "",
    "os_compute_api:os-extended-volumes:discoverable": "",
    "os_compute_api:os-fixed-ips": "rule:admin_api",
    "os_compute_api:os-fixed-ips:discoverable": "",
    "os_compute_api:os-flavor-access": "",
    "os_compute_api:os-flavor-access:discoverable": "",
    "os_compute_api:os-flavor-access:remove_tenant_access": "rule:admin_api",
    "os_compute_api:os-flavor-access:add_tenant_access": "rule:admin_api",
    "os_compute_api:os-flavor-rxtx": "",
    "os_compute_api:os-flavor-rxtx:discoverable": "",
    "os_compute_api:flavors:discoverable": "",
    "os_compute_api:os-flavor-extra-specs:discoverable": "",
    "os_compute_api:os-flavor-extra-specs:index": "",
    "os_compute_api:os-flavor-extra-specs:show": "",
    "os_compute_api:os-flavor-extra-specs:create": "rule:admin_api",
    "os_compute_api:os-flavor-extra-specs:update": "rule:admin_api",
    "os_compute_api:os-flavor-extra-specs:delete": "rule:admin_api",
    "os_compute_api:os-flavor-manage:discoverable": "",
    "os_compute_api:os-flavor-manage": "rule:admin_api",
    "os_compute_api:os-floating-ip-dns": "",
    "os_compute_api:os-floating-ip-dns:discoverable": "",
    "os_compute_api:os-floating-ip-dns:domain:update": "rule:admin_api",
    "os_compute_api:os-floating-ip-dns:domain:delete": "rule:admin_api",
    "os_compute_api:os-floating-ip-pools": "",
    "os_compute_api:os-floating-ip-pools:discoverable": "",
    "os_compute_api:os-floating-ips": "",
    "os_compute_api:os-floating-ips:discoverable": "",
    "os_compute_api:os-floating-ips-bulk": "rule:admin_api",
    "os_compute_api:os-floating-ips-bulk:discoverable": "",
    "os_compute_api:os-fping": "",
    "os_compute_api:os-fping:discoverable": "",
    "os_compute_api:os-fping:all_tenants": "rule:admin_api",
    "os_compute_api:os-hide-server-addresses": "is_admin:False",
    "os_compute_api:os-hide-server-addresses:discoverable": "",
    "os_compute_api:os-hosts": "rule:admin_api",
    "os_compute_api:os-hosts:discoverable": "",
    "os_compute_api:os-hypervisors": "rule:admin_api",
    "os_compute_api:os-hypervisors:discoverable": "",
    "os_compute_api:images:discoverable": "",
    "os_compute_api:image-size": "",
    "os_compute_api:image-size:discoverable": "",
    "os_compute_api:os-instance-actions": "",
    "os_compute_api:os-instance-actions:discoverable": "",
    "os_compute_api:os-instance-actions:events": "rule:admin_api",
    "os_compute_api:os-instance-usage-audit-log": "rule:admin_api",
    "os_compute_api:os-instance-usage-audit-log:discoverable": "",
    "os_compute_api:ips:discoverable": "",
    "os_compute_api:ips:index": "rule:admin_or_owner",
    "os_compute_api:ips:show": "rule:admin_or_owner",
    "os_compute_api:os-keypairs:discoverable": "",
    "os_compute_api:os-keypairs": "",
    "os_compute_api:os-keypairs:index": "rule:admin_api or user_id:%(user_id)s",
    "os_compute_api:os-keypairs:show": "rule:admin_api or user_id:%(user_id)s",
    "os_compute_api:os-keypairs:create": "rule:admin_api or user_id:%(user_id)s",
    "os_compute_api:os-keypairs:delete": "rule:admin_api or user_id:%(user_id)s",
    "os_compute_api:limits:discoverable": "",
    "os_compute_api:limits": "",
    "os_compute_api:os-lock-server:discoverable": "",
    "os_compute_api:os-lock-server:lock": "rule:admin_or_owner",
    "os_compute_api:os-lock-server:unlock": "rule:admin_or_owner",
    "os_compute_api:os-lock-server:unlock:unlock_override": "rule:admin_api",
    "os_compute_api:os-migrate-server:discoverable": "",
    "os_compute_api:os-migrate-server:migrate": "rule:admin_api",
    "os_compute_api:os-migrate-server:migrate_live": "rule:admin_api",
    "os_compute_api:os-multinic": "",
    "os_compute_api:os-multinic:discoverable": "",
    "os_compute_api:os-networks": "rule:admin_api",
    "os_compute_api:os-networks:view": "",
    "os_compute_api:os-networks:discoverable": "",
    "os_compute_api:os-networks-associate": "rule:admin_api",
    "os_compute_api:os-networks-associate:discoverable": "",
    "os_compute_api:os-pause-server:discoverable": "",
    "os_compute_api:os-pause-server:pause": "rule:admin_or_owner",
    "os_compute_api:os-pause-server:unpause": "rule:admin_or_owner",
    "os_compute_api:os-pci:pci_servers": "",
    "os_compute_api:os-pci:discoverable": "",
    "os_compute_api:os-pci:index": "rule:admin_api",
    "os_compute_api:os-pci:detail": "rule:admin_api",
    "os_compute_api:os-pci:show": "rule:admin_api",
    "os_compute_api:os-personality:discoverable": "",
    "os_compute_api:os-preserve-ephemeral-rebuild:discoverable": "",
    "os_compute_api:os-quota-sets:discoverable": "",
    "os_compute_api:os-quota-sets:show": "rule:admin_or_owner",
    "os_compute_api:os-quota-sets:defaults": "",
    "os_compute_api:os-quota-sets:update": "rule:admin_api",
    "os_compute_api:os-quota-sets:delete": "rule:admin_api",
    "os_compute_api:os-quota-sets:detail": "rule:admin_api",
    "os_compute_api:os-quota-class-sets:update": "rule:admin_api",
    "os_compute_api:os-quota-class-sets:show": "is_admin:True or quota_class:%(quota_class)s",
    "os_compute_api:os-quota-class-sets:discoverable": "",
    "os_compute_api:os-rescue": "",
    "os_compute_api:os-rescue:discoverable": "",
    "os_compute_api:os-scheduler-hints:discoverable": "",
    "os_compute_api:os-security-group-default-rules:discoverable": "",
    "os_compute_api:os-security-group-default-rules": "rule:admin_api",
    "os_compute_api:os-security-groups": "",
    "os_compute_api:os-security-groups:discoverable": "",
    "os_compute_api:os-server-diagnostics": "rule:admin_api",
    "os_compute_api:os-server-diagnostics:discoverable": "",
    "os_compute_api:os-server-password": "",
    "os_compute_api:os-server-password:discoverable": "",
    "os_compute_api:os-server-usage": "",
    "os_compute_api:os-server-usage:discoverable": "",
    "os_compute_api:os-server-groups": "",
    "os_compute_api:os-server-groups:discoverable": "",
    "os_compute_api:os-services": "rule:admin_api",
    "os_compute_api:os-services:discoverable": "",
    "os_compute_api:server-metadata:discoverable": "",
    "os_compute_api:server-metadata:index": "rule:admin_or_owner",
    "os_compute_api:server-metadata:show": "rule:admin_or_owner",
    "os_compute_api:server-metadata:delete": "rule:admin_or_owner",
    "os_compute_api:server-metadata:create": "rule:admin_or_owner",
    "os_compute_api:server-metadata:update": "rule:admin_or_owner",
    "os_compute_api:server-metadata:update_all": "rule:admin_or_owner",
    "os_compute_api:servers:discoverable": "",
    "os_compute_api:os-shelve:shelve": "",
    "os_compute_api:os-shelve:shelve:discoverable": "",
    "os_compute_api:os-shelve:shelve_offload": "rule:admin_api",
    "os_compute_api:os-simple-tenant-usage:discoverable": "",
    "os_compute_api:os-simple-tenant-usage:show": "rule:admin_or_owner",
    "os_compute_api:os-simple-tenant-usage:list": "rule:admin_api",
    "os_compute_api:os-suspend-server:discoverable": "",
    "os_compute_api:os-suspend-server:suspend": "rule:admin_or_owner",
    "os_compute_api:os-suspend-server:resume": "rule:admin_or_owner",
    "os_compute_api:os-tenant-networks": "rule:admin_or_owner",
    "os_compute_api:os-tenant-networks:discoverable": "",
    "os_compute_api:os-shelve:unshelve": "",
    "os_compute_api:os-user-data:discoverable": "",
    "os_compute_api:os-virtual-interfaces": "",
    "os_compute_api:os-virtual-interfaces:discoverable": "",
    "os_compute_api:os-volumes": "",
    "os_compute_api:os-volumes:discoverable": "",
    "os_compute_api:os-volumes-attachments:index": "",
    "os_compute_api:os-volumes-attachments:show": "",
    "os_compute_api:os-volumes-attachments:create": "",
    "os_compute_api:os-volumes-attachments:update": "",
    "os_compute_api:os-volumes-attachments:delete": "",
    "os_compute_api:os-volumes-attachments:discoverable": "",
    "os_compute_api:os-availability-zone:list": "",
    "os_compute_api:os-availability-zone:discoverable": "",
    "os_compute_api:os-availability-zone:detail": "rule:admin_api",
    "os_compute_api:os-used-limits": "rule:admin_api",
    "os_compute_api:os-used-limits:discoverable": "",
    "os_compute_api:os-migrations:index": "rule:admin_api",
    "os_compute_api:os-migrations:discoverable": "",
    "os_compute_api:os-assisted-volume-snapshots:create": "rule:admin_api",
    "os_compute_api:os-assisted-volume-snapshots:delete": "rule:admin_api",
    "os_compute_api:os-assisted-volume-snapshots:discoverable": "",
    "os_compute_api:os-console-auth-tokens": "rule:admin_api",
    "os_compute_api:os-server-external-events:create": "rule:admin_api"
}
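The string-based rules above combine checks with or, reference named aliases with rule:, and compare credentials against the request target with the %(...)s substitution. The following sketch shows how such expressions resolve; it is a simplified illustration covering only this subset, not the real oslo.policy engine:

```python
def check(expr, rules, creds, target):
    """Evaluate one policy expression against request credentials.
    Handles: "" and "@" (always allow), "a or b", "rule:alias",
    "role:name", "is_admin:True/False", and "key:%(field)s"."""
    expr = expr.strip()
    if expr in ("", "@"):
        return True
    if " or " in expr:
        return any(check(part, rules, creds, target)
                   for part in expr.split(" or "))
    kind, _, match = expr.partition(":")
    if kind == "rule":                      # alias lookup, e.g. rule:admin_api
        return check(rules[match], rules, creds, target)
    if kind == "role":                      # role held by the requester
        return match in creds.get("roles", [])
    if kind == "is_admin":
        return creds.get("is_admin", False) == (match == "True")
    if match.startswith("%(") and match.endswith(")s"):
        # e.g. project_id:%(project_id)s — credential must equal target field
        return creds.get(kind) == target.get(match[2:-2])
    return False

rules = {
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "compute:start": "rule:admin_or_owner",
}
creds = {"project_id": "p1", "roles": ["member"], "is_admin": False}
assert check(rules["compute:start"], rules, creds, {"project_id": "p1"})
assert not check(rules["compute:start"], rules, creds, {"project_id": "p2"})
```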

Dashboard log files

The dashboard is served to users through the Apache HTTP Server (httpd).

As a result, dashboard-related logs appear in files in the /var/log/httpd or /var/log/apache2 directory on the system where the dashboard is hosted. The following table describes these files:

Dashboard and httpd log files
Log file Description
access_log Logs all attempts to access the web server.
error_log Logs all unsuccessful attempts to access the web server, along with the reason that each attempt failed.
/var/log/horizon/horizon.log Logs certain user interactions.

This chapter describes how to configure the Dashboard with the Apache web server.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Data Processing service

Data Processing API configuration

The following options allow configuration of the APIs that the Data Processing service supports.

Description of API configuration options
Configuration option = Default value Description
[oslo_messaging_rabbit]  
connection_factory = single (String) Connection factory implementation
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP header that will be used to determine the original request protocol scheme, even if it was hidden by an SSL termination proxy.
[retries]  
retries_number = 5 (Integer) Number of times to retry a client request before failing.
retry_after = 10 (Integer) Time between the retries to client (in seconds).
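These options live in the service configuration file in the INI format described earlier. As an illustration only (the values below are examples, not recommendations), a fragment that enables proxy header parsing and adjusts the client retry policy might look like:

```ini
[oslo_middleware]
# Parse X-Forwarded-* headers when running behind a TLS-terminating proxy
enable_proxy_headers_parsing = True

[retries]
# Retry failed client requests 3 times, 15 seconds apart (illustrative values)
retries_number = 3
retry_after = 15
```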

Additional configuration options for Data Processing service

The following tables provide a comprehensive list of the Data Processing service configuration options:

Description of clients configuration options
Configuration option = Default value Description
[cinder]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to cinder.
api_version = 2 (Integer) Version of the Cinder API to use.
ca_file = None (String) Location of ca certificates file to use for cinder client requests.
endpoint_type = internalURL (String) Endpoint type for cinder client requests
[glance]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to glance.
ca_file = None (String) Location of ca certificates file to use for glance client requests.
endpoint_type = internalURL (String) Endpoint type for glance client requests
[heat]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to heat.
ca_file = None (String) Location of ca certificates file to use for heat client requests.
endpoint_type = internalURL (String) Endpoint type for heat client requests
[keystone]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to keystone.
ca_file = None (String) Location of ca certificates file to use for keystone client requests.
endpoint_type = internalURL (String) Endpoint type for keystone client requests
[manila]  
api_insecure = True (Boolean) Allow to perform insecure SSL requests to manila.
api_version = 1 (Integer) Version of the manila API to use.
ca_file = None (String) Location of ca certificates file to use for manila client requests.
[neutron]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to neutron.
ca_file = None (String) Location of ca certificates file to use for neutron client requests.
endpoint_type = internalURL (String) Endpoint type for neutron client requests
[nova]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to nova.
ca_file = None (String) Location of ca certificates file to use for nova client requests.
endpoint_type = internalURL (String) Endpoint type for nova client requests
[swift]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to swift.
ca_file = None (String) Location of ca certificates file to use for swift client requests.
endpoint_type = internalURL (String) Endpoint type for swift client requests
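Taken together, the client sections above are set in sahara.conf. A minimal illustrative snippet for a deployment that verifies SSL against a site-provided CA bundle (the ca_file path is an example value, not a default):

```ini
[cinder]
# Use Cinder API v2 and verify SSL certificates against a
# site CA bundle (example path, supplied by the operator).
api_version = 2
api_insecure = False
ca_file = /etc/ssl/certs/openstack-ca.pem
endpoint_type = internalURL

[glance]
api_insecure = False
ca_file = /etc/ssl/certs/openstack-ca.pem
endpoint_type = internalURL
```

Omitting ca_file falls back to the system CA store; setting api_insecure = True disables verification entirely and is not recommended outside test environments.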
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_project_domain_name = default (String) The name of the domain for the service project (for example, tenant).
admin_user_domain_name = default (String) The name of the domain to which the admin user belongs.
api_workers = 1 (Integer) Number of workers for Sahara API service (0 means all-in-one-thread configuration).
cleanup_time_for_incomplete_clusters = 0 (Integer) Maximum time (in hours) that a cluster may remain in a state other than “Active”, “Deleting”, or “Error”. If a cluster is not in one of these states and its last update was more than “cleanup_time_for_incomplete_clusters” hours ago, it is deleted automatically. (A value of 0 disables automatic cleanup.)
cluster_remote_threshold = 70 (Integer) The same as global_remote_threshold, but for a single cluster.
compute_topology_file = etc/sahara/compute.topology (String) File with nova compute topology. It should contain mapping between nova computes and racks.
coordinator_heartbeat_interval = 1 (Integer) Interval size between heartbeat execution in seconds. Heartbeats are executed to make sure that connection to the coordination server is active.
default_ntp_server = pool.ntp.org (String) Default NTP server for time synchronization.
disable_event_log = False (Boolean) Disables event log feature.
edp_internal_db_enabled = True (Boolean) Use Sahara internal db to store job binaries.
enable_data_locality = False (Boolean) Enables data locality for hadoop cluster. Also enables data locality for Swift used by hadoop. If enabled, ‘compute_topology’ and ‘swift_topology’ configuration parameters should point to OpenStack and Swift topology correspondingly.
enable_hypervisor_awareness = True (Boolean) Enables four-level topology for data locality. Works only if corresponding plugin supports such mode.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
global_remote_threshold = 100 (Integer) Maximum number of remote operations that will be running at the same time. Note that each remote operation requires its own process to run.
hash_ring_replicas_count = 40 (Integer) Number of points that belong to each member on a hash ring. A larger number leads to a better distribution.
heat_enable_wait_condition = True (Boolean) Enable wait condition feature to reduce polling during cluster creation
heat_stack_tags = data-processing-cluster (List) List of tags to be used during operating with stack.
image = None (String) The path to an image to modify. This image will be modified in-place: be sure to target a copy if you wish to maintain a clean master image.
job_binary_max_KB = 5120 (Integer) Maximum length of job binary data in kilobytes that may be stored or retrieved in a single operation.
job_canceling_timeout = 300 (Integer) Timeout for canceling job execution (in seconds). Sahara will try to cancel job execution during this time.
job_workflow_postfix = (String) Postfix for storing jobs in HDFS. Will be added to the ‘/user/<hdfs user>/’ path.
min_transient_cluster_active_time = 30 (Integer) Minimal “lifetime” in seconds for a transient cluster. Cluster is guaranteed to be “alive” within this time period.
nameservers = (List) IP addresses of Designate nameservers. This is required if ‘use_designate’ is True
node_domain = novalocal (String) The suffix of the node’s FQDN. In nova-network that is the dhcp_domain config parameter.
os_region_name = None (String) Region name used to get services endpoints.
periodic_coordinator_backend_url = None (String) The backend URL to use for distributed periodic tasks coordination.
periodic_enable = True (Boolean) Enable periodic tasks.
periodic_fuzzy_delay = 60 (Integer) Range in seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0).
periodic_interval_max = 60 (Integer) Max interval size between periodic tasks execution in seconds.
periodic_workers_number = 1 (Integer) Number of threads to run periodic tasks.
plugins = vanilla, spark, cdh, ambari, storm, mapr (List) List of plugins to be loaded. Sahara preserves the order of the list when returning it.
proxy_command = (String) Proxy command used to connect to instances. If set, this command should open a netcat socket, that Sahara will use for SSH and HTTP connections. Use {host} and {port} to describe the destination. Other available keywords: {tenant_id}, {network_id}, {router_id}.
remote = ssh (String) A method for Sahara to execute commands on VMs.
root_fs = None (String) The filesystem to mount as the root volume on the image. No value is required if only one filesystem is detected.
rootwrap_command = sudo sahara-rootwrap /etc/sahara/rootwrap.conf (String) Rootwrap command to leverage. Use in conjunction with use_rootwrap=True
swift_topology_file = etc/sahara/swift.topology (String) File with Swift topology. It should contain mapping between Swift nodes and racks.
test_only = False (Boolean) If this flag is set, no changes will be made to the image; instead, the script will fail if discrepancies are found between the image and the intended state.
use_barbican_key_manager = False (Boolean) Enable the usage of the OpenStack Key Management service provided by barbican.
use_designate = False (Boolean) Use Designate for internal and external hostname resolution.
use_floating_ips = True (Boolean) If set to True, Sahara will use floating IPs to communicate with instances. To make sure that all instances have floating IPs assigned in Nova Network set “auto_assign_floating_ip=True” in nova.conf. If Neutron is used for networking, make sure that all Node Groups have “floating_ip_pool” parameter defined.
use_identity_api_v3 = True (Boolean) Enables Sahara to use Keystone API v3. If that flag is disabled, per-job clusters will not be terminated automatically.
use_namespaces = False (Boolean) Use network namespaces for communication (only valid to use in conjunction with use_neutron=True).
use_neutron = False (Boolean) Use Neutron Networking (False indicates the use of Nova networking).
use_rootwrap = False (Boolean) Use rootwrap facility to allow non-root users to run the sahara services and access private network IPs (only valid to use in conjunction with use_namespaces=True)
use_router_proxy = True (Boolean) Use ROUTER remote proxy.
[castellan]  
barbican_api_endpoint = None (String) The endpoint to use for connecting to the barbican api controller. By default, castellan will use the URL from the service catalog.
barbican_api_version = v1 (String) Version of the barbican API, for example: “v1”
[cluster_verifications]  
verification_enable = True (Boolean) Option to enable verifications for all clusters
verification_periodic_interval = 600 (Integer) Interval between two consecutive periodic tasks for verifications, in seconds.
[conductor]  
use_local = True (Boolean) Perform sahara-conductor operations locally.
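As a sketch of how several of the common options above interact, a sahara.conf [DEFAULT] section for a Neutron-based deployment might look like the following (all values are illustrative; note the dependency chain documented above, where use_namespaces requires use_neutron = True and use_rootwrap is only valid with use_namespaces = True):

```ini
[DEFAULT]
# Neutron networking with floating IPs assigned to every node group
use_neutron = True
use_floating_ips = True
# Namespaces require use_neutron = True; rootwrap requires use_namespaces = True
use_namespaces = True
use_rootwrap = True
rootwrap_command = sudo sahara-rootwrap /etc/sahara/rootwrap.conf
# Subset of the default plugin list
plugins = vanilla, spark, ambari
# Run four API workers instead of the all-in-one-thread default
api_workers = 4
periodic_enable = True
```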
Description of domain configuration options
Configuration option = Default value Description
[DEFAULT]  
proxy_user_domain_name = None (String) The domain Sahara will use to create new proxy users for Swift object access.
proxy_user_role_names = Member (List) A list of the role names that the proxy user should assume through trust for Swift object access.
use_domain_for_proxy_users = False (Boolean) Enables Sahara to use a domain for creating temporary proxy users to access Swift. If this is enabled a domain must be created for Sahara to use.
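As the table notes, enabling proxy users requires a dedicated Keystone domain created for Sahara in advance. A hedged example, where sahara_proxy is a hypothetical domain name chosen by the operator:

```ini
[DEFAULT]
# The sahara_proxy domain must already exist in Keystone;
# Sahara creates short-lived proxy users inside it for Swift access.
use_domain_for_proxy_users = True
proxy_user_domain_name = sahara_proxy
proxy_user_role_names = Member
```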
Description of Auth options for Swift access for VM configuration options
Configuration option = Default value Description
[object_store_access]  
public_identity_ca_file = None (String) Location of ca certificate file to use for identity client requests via public endpoint
public_object_store_ca_file = None (String) Location of ca certificate file to use for object-store client requests via public endpoint
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of timeouts configuration options
Configuration option = Default value Description
[timeouts]  
delete_instances_timeout = 10800 (Integer) Wait for instances to be deleted, in seconds
detach_volume_timeout = 300 (Integer) Timeout for detaching volumes from instance, in seconds
ips_assign_timeout = 10800 (Integer) Assign IPs timeout, in seconds
wait_until_accessible = 10800 (Integer) Wait for instance accessibility, in seconds
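On a fast, reliable cloud the generous defaults above (three hours for most operations) can be tightened. An illustrative [timeouts] section in sahara.conf (values are examples, not recommendations):

```ini
[timeouts]
# Defaults are 10800 / 300 / 10800 / 10800 seconds respectively.
ips_assign_timeout = 3600
detach_volume_timeout = 300
delete_instances_timeout = 3600
wait_until_accessible = 3600
```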

New, updated, and deprecated options in Newton for Data Processing service

New options
Option = default value (Type) Help string
[DEFAULT] edp_internal_db_enabled = True (BoolOpt) Use Sahara internal db to store job binaries.
[DEFAULT] image = None (StrOpt) The path to an image to modify. This image will be modified in-place: be sure to target a copy if you wish to maintain a clean master image.
[DEFAULT] nameservers = (ListOpt) IP addresses of Designate nameservers. This is required if ‘use_designate’ is True
[DEFAULT] root_fs = None (StrOpt) The filesystem to mount as the root volume on the image. No value is required if only one filesystem is detected.
[DEFAULT] test_only = False (BoolOpt) If this flag is set, no changes will be made to the image; instead, the script will fail if discrepancies are found between the image and the intended state.
[DEFAULT] use_designate = False (BoolOpt) Use Designate for internal and external hostname resolution.
[glance] api_insecure = False (BoolOpt) Allow to perform insecure SSL requests to glance.
[glance] ca_file = None (StrOpt) Location of ca certificates file to use for glance client requests.
[glance] endpoint_type = internalURL (StrOpt) Endpoint type for glance client requests
New default values
Option Previous default value New default value
[DEFAULT] plugins vanilla, spark, cdh, ambari vanilla, spark, cdh, ambari, storm, mapr
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None

The Data Processing service (sahara) provides a scalable data-processing stack and associated management interfaces.

Note

The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.

Database service

Configure the database

Use the following options to configure the supported databases:

Description of Cassandra database configuration options
Configuration option = Default value Description
[cassandra]  
api_strategy = trove.common.strategies.cluster.experimental.cassandra.api.CassandraAPIStrategy (String) Class that implements datastore-specific API logic.
backup_incremental_strategy = {} (Dict) Incremental strategy based on the default backup strategy. For strategies that do not implement incremental backups, the runner performs a full backup instead.
backup_namespace = trove.guestagent.strategies.backup.experimental.cassandra_impl (String) Namespace to load backup strategies from.
backup_strategy = NodetoolSnapshot (String) Default strategy to perform backups.
cluster_support = True (Boolean) Enable clusters to be created and managed.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = system (String) List of Guest Logs to expose for publishing.
guestagent_strategy = trove.common.strategies.cluster.experimental.cassandra.guestagent.CassandraGuestAgentStrategy (String) Class that implements datastore-specific Guest Agent API logic.
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = system, system_auth, system_traces (List) Databases to exclude when listing databases.
ignore_users = os_admin (List) Users to exclude when listing users.
mount_point = /var/lib/cassandra (String) Filesystem path for mounting volumes if volume support is enabled.
replication_strategy = None (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.cassandra_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.cassandra.service.CassandraRootController (String) Root controller implementation for Cassandra.
system_log_level = INFO (String) Cassandra log verbosity.
taskmanager_strategy = trove.common.strategies.cluster.experimental.cassandra.taskmanager.CassandraTaskManagerStrategy (String) Class that implements datastore-specific task manager logic.
tcp_ports = 7000, 7001, 7199, 9042, 9160 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
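In trove.conf, the per-datastore sections above override the defaults listed in the table. A hedged sketch for Cassandra, keeping the default ports but moving the guest data onto a Cinder volume and reducing log verbosity (values are illustrative):

```ini
[cassandra]
# Default Cassandra ports, opened only when
# trove_security_groups_support is True.
tcp_ports = 7000, 7001, 7199, 9042, 9160
# Provision a Cinder volume and mount it at the default datadir.
volume_support = True
device_path = /dev/vdb
mount_point = /var/lib/cassandra
# Less verbose than the default INFO level.
system_log_level = WARN
```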
Description of Couchbase database configuration options
Configuration option = Default value Description
[couchbase]  
backup_incremental_strategy = {} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.experimental.couchbase_impl (String) Namespace to load backup strategies from.
backup_strategy = CbBackup (String) Default strategy to perform backups.
default_password_length = 24 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = (String) List of Guest Logs to expose for publishing.
icmp = False (Boolean) Whether to permit ICMP.
mount_point = /var/lib/couchbase (String) Filesystem path for mounting volumes if volume support is enabled.
replication_strategy = None (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.couchbase_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.common.service.DefaultRootController (String) Root controller implementation for couchbase.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field.
tcp_ports = 8091, 8092, 4369, 11209-11211, 21100-21199 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
[couchdb]  
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
Description of CouchDB database configuration options
Configuration option = Default value Description
[couchdb]  
backup_incremental_strategy = {} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.experimental.couchdb_impl (String) Namespace to load backup strategies from.
backup_strategy = CouchDBBackup (String) Default strategy to perform backups.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = (String) List of Guest Logs to expose for publishing.
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = _users, _replicator (List) Databases to exclude when listing databases.
ignore_users = os_admin, root (List) Users to exclude when listing users.
mount_point = /var/lib/couchdb (String) Filesystem path for mounting volumes if volume support is enabled.
replication_strategy = None (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.couchdb_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.common.service.DefaultRootController (String) Root controller implementation for couchdb.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the “password” field.
tcp_ports = 5984 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
Description of DB2 database configuration options
Configuration option = Default value Description
[db2]  
backup_incremental_strategy = {} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.experimental.db2_impl (String) Namespace to load backup strategies from.
backup_strategy = DB2OfflineBackup (String) Default strategy to perform backups.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = (String) List of Guest Logs to expose for publishing.
icmp = False (Boolean) Whether to permit ICMP.
ignore_users = PUBLIC, DB2INST1 (List) No help text available for this option.
mount_point = /home/db2inst1/db2inst1 (String) Filesystem path for mounting volumes if volume support is enabled.
replication_strategy = None (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.db2_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.common.service.DefaultRootController (String) Root controller implementation for db2.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field.
tcp_ports = 50000 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
Description of MariaDB database configuration options
Configuration option = Default value Description
[mariadb]  
api_strategy = trove.common.strategies.cluster.experimental.galera_common.api.GaleraCommonAPIStrategy (String) Class that implements datastore-specific API logic.
backup_incremental_strategy = {'MariaDBInnoBackupEx': 'MariaDBInnoBackupExIncremental'} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.experimental.mariadb_impl (String) Namespace to load backup strategies from.
backup_strategy = MariaDBInnoBackupEx (String) Default strategy to perform backups.
cluster_support = True (Boolean) Enable clusters to be created and managed.
default_password_length = ${mysql.default_password_length} (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = general,slow_query (String) List of Guest Logs to expose for publishing.
guest_log_long_query_time = 1000 (Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time
guestagent_strategy = trove.common.strategies.cluster.experimental.galera_common.guestagent.GaleraCommonGuestAgentStrategy (String) Class that implements datastore-specific Guest Agent API logic.
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = mysql, information_schema, performance_schema (List) Databases to exclude when listing databases.
ignore_users = os_admin, root (List) Users to exclude when listing users.
min_cluster_member_count = 3 (Integer) Minimum number of members in MariaDB cluster.
mount_point = /var/lib/mysql (String) Filesystem path for mounting volumes if volume support is enabled.
replication_namespace = trove.guestagent.strategies.replication.experimental.mariadb_gtid (String) Namespace to load replication strategies from.
replication_strategy = MariaDBGTIDReplication (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.mariadb_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.common.service.DefaultRootController (String) Root controller implementation for mysql.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field.
taskmanager_strategy = trove.common.strategies.cluster.experimental.galera_common.taskmanager.GaleraCommonTaskManagerStrategy (String) Class that implements datastore-specific task manager logic.
tcp_ports = 3306, 4444, 4567, 4568 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
usage_timeout = 400 (Integer) Maximum time (in seconds) to wait for a Guest to become active.
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
Description of MongoDB database configuration options
Configuration option = Default value Description
[mongodb]  
add_members_timeout = 300 (Integer) Maximum time to wait (in seconds) for a replica set initialization process to complete.
api_strategy = trove.common.strategies.cluster.experimental.mongodb.api.MongoDbAPIStrategy (String) Class that implements datastore-specific API logic.
backup_incremental_strategy = {} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.experimental.mongo_impl (String) Namespace to load backup strategies from.
backup_strategy = MongoDump (String) Default strategy to perform backups.
cluster_secure = True (Boolean) Create secure clusters. If False then the Role-Based Access Control will be disabled.
cluster_support = True (Boolean) Enable clusters to be created and managed.
configsvr_port = 27019 (Port number) Port for instances running as config servers.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = (String) List of Guest Logs to expose for publishing.
guestagent_strategy = trove.common.strategies.cluster.experimental.mongodb.guestagent.MongoDbGuestAgentStrategy (String) Class that implements datastore-specific Guest Agent API logic.
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = admin, local, config (List) Databases to exclude when listing databases.
ignore_users = admin.os_admin, admin.root (List) Users to exclude when listing users.
mongodb_port = 27017 (Port number) Port for mongod and mongos instances.
mount_point = /var/lib/mongodb (String) Filesystem path for mounting volumes if volume support is enabled.
num_config_servers_per_cluster = 3 (Integer) The number of config servers to create per cluster.
num_query_routers_per_cluster = 1 (Integer) The number of query routers (mongos) to create per cluster.
replication_strategy = None (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.mongo_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.mongodb.service.MongoDBRootController (String) Root controller implementation for mongodb.
taskmanager_strategy = trove.common.strategies.cluster.experimental.mongodb.taskmanager.MongoDbTaskManagerStrategy (String) Class that implements datastore-specific task manager logic.
tcp_ports = 2500, 27017, 27019 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
Description of MySQL database configuration options
Configuration option = Default value Description
[mysql]  
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.mysql_impl (String) Namespace to load backup strategies from.
backup_strategy = InnoBackupEx (String) Default strategy to perform backups.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = general,slow_query (String) List of Guest Logs to expose for publishing.
guest_log_long_query_time = 1000 (Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = mysql, information_schema, performance_schema (List) Databases to exclude when listing databases.
ignore_users = os_admin, root (List) Users to exclude when listing users.
mount_point = /var/lib/mysql (String) Filesystem path for mounting volumes if volume support is enabled.
replication_namespace = trove.guestagent.strategies.replication.mysql_gtid (String) Namespace to load replication strategies from.
replication_strategy = MysqlGTIDReplication (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.mysql_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.mysql.service.MySQLRootController (String) Root controller implementation for mysql.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field.
tcp_ports = 3306 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
usage_timeout = 400 (Integer) Maximum time (in seconds) to wait for a Guest to become active.
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
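The MySQL options above can be combined in a trove.conf [mysql] section. An illustrative sketch that returns the root password on instance creation and raises the activation timeout (example values, not recommendations):

```ini
[mysql]
# Return a generated root password in the instance-create response.
root_on_create = True
# Default system databases and users hidden from listings.
ignore_dbs = mysql, information_schema, performance_schema
ignore_users = os_admin, root
tcp_ports = 3306
# Allow more time than the default 400 s for the guest to become active.
usage_timeout = 600
volume_support = True
```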
Description of Percona XtraDB Cluster database configuration options
Configuration option = Default value Description
[pxc]  
api_strategy = trove.common.strategies.cluster.experimental.galera_common.api.GaleraCommonAPIStrategy (String) Class that implements datastore-specific API logic.
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.mysql_impl (String) Namespace to load backup strategies from.
backup_strategy = InnoBackupEx (String) Default strategy to perform backups.
cluster_support = True (Boolean) Enable clusters to be created and managed.
default_password_length = ${mysql.default_password_length} (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = general,slow_query (String) List of Guest Logs to expose for publishing.
guest_log_long_query_time = 1000 (Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time
guestagent_strategy = trove.common.strategies.cluster.experimental.galera_common.guestagent.GaleraCommonGuestAgentStrategy (String) Class that implements datastore-specific Guest Agent API logic.
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = mysql, information_schema, performance_schema (List) Databases to exclude when listing databases.
ignore_users = os_admin, root, clusterrepuser (List) Users to exclude when listing users.
min_cluster_member_count = 3 (Integer) Minimum number of members in PXC cluster.
mount_point = /var/lib/mysql (String) Filesystem path for mounting volumes if volume support is enabled.
replication_namespace = trove.guestagent.strategies.replication.mysql_gtid (String) Namespace to load replication strategies from.
replication_strategy = MysqlGTIDReplication (String) Default strategy for replication.
replication_user = slave_user (String) Userid for replication slave.
restore_namespace = trove.guestagent.strategies.restore.mysql_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.pxc.service.PxcRootController (String) Root controller implementation for pxc.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field.
taskmanager_strategy = trove.common.strategies.cluster.experimental.galera_common.taskmanager.GaleraCommonTaskManagerStrategy (String) Class that implements datastore-specific task manager logic.
tcp_ports = 3306, 4444, 4567, 4568 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
usage_timeout = 450 (Integer) Maximum time (in seconds) to wait for a Guest to become active.
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
Description of Percona database configuration options
Configuration option = Default value Description
[percona]  
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.mysql_impl (String) Namespace to load backup strategies from.
backup_strategy = InnoBackupEx (String) Default strategy to perform backups.
default_password_length = ${mysql.default_password_length} (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = general,slow_query (String) List of Guest Logs to expose for publishing.
guest_log_long_query_time = 1000 (Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = mysql, information_schema, performance_schema (List) Databases to exclude when listing databases.
ignore_users = os_admin, root (List) Users to exclude when listing users.
mount_point = /var/lib/mysql (String) Filesystem path for mounting volumes if volume support is enabled.
replication_namespace = trove.guestagent.strategies.replication.mysql_gtid (String) Namespace to load replication strategies from.
replication_password = NETOU7897NNLOU (String) Password for replication slave user.
replication_strategy = MysqlGTIDReplication (String) Default strategy for replication.
replication_user = slave_user (String) Userid for replication slave.
restore_namespace = trove.guestagent.strategies.restore.mysql_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.common.service.DefaultRootController (String) Root controller implementation for percona.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field.
tcp_ports = 3306 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
usage_timeout = 450 (Integer) Maximum time (in seconds) to wait for a Guest to become active.
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
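The [percona] options above use the same INI key=value format shown at the start of this reference. As a minimal illustrative sketch (the overridden value is an example, not a recommendation), a deployment might keep the documented replication defaults while tightening password generation:

```ini
[percona]
# Replication defaults as documented above.
replication_strategy = MysqlGTIDReplication
replication_user = slave_user
# Illustrative override: the default is ${mysql.default_password_length}.
default_password_length = 24
# Leave automatic root creation disabled (the default).
root_on_create = False
```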
Description of PostgreSQL database configuration options
Configuration option = Default value Description
[postgresql]  
backup_incremental_strategy = {'PgBaseBackup': 'PgBaseBackupIncremental'} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.experimental.postgresql_impl (String) Namespace to load backup strategies from.
backup_strategy = PgBaseBackup (String) Default strategy to perform backups.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) No help text available for this option.
guest_log_exposed_logs = general (String) List of Guest Logs to expose for publishing.
guest_log_long_query_time = 0 (Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the ‘general’ log. A value of ‘0’ logs all statements, while ‘-1’ turns off statement logging. Will be replaced by configuration group option: log_min_duration_statement
icmp = False (Boolean) Whether to permit ICMP.
ignore_dbs = os_admin, postgres (List) No help text available for this option.
ignore_users = os_admin, postgres, root (List) No help text available for this option.
mount_point = /var/lib/postgresql (String) Filesystem path for mounting volumes if volume support is enabled.
postgresql_port = 5432 (Port number) The TCP port the server listens on.
replication_namespace = trove.guestagent.strategies.replication.experimental.postgresql_impl (String) Namespace to load replication strategies from.
replication_strategy = PostgresqlReplicationStreaming (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.postgresql_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.postgresql.service.PostgreSQLRootController (String) Root controller implementation for postgresql.
root_on_create = False (Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field.
tcp_ports = 5432 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
wal_archive_location = /mnt/wal_archive (String) Filesystem path storing WAL archive files when WAL-shipping based backups or replication is enabled.
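Pulling several of the [postgresql] options above together, a configuration that uses the default port and PgBaseBackup-based backups might look like the following sketch (all values are the documented defaults):

```ini
[postgresql]
# Listen on the default PostgreSQL port; tcp_ports is only applied
# when trove_security_groups_support is True.
postgresql_port = 5432
tcp_ports = 5432
# PgBaseBackup writes WAL archives here for backups and replication.
backup_strategy = PgBaseBackup
wal_archive_location = /mnt/wal_archive
```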
Description of Redis database configuration options
Configuration option = Default value Description
[redis]  
api_strategy = trove.common.strategies.cluster.experimental.redis.api.RedisAPIStrategy (String) Class that implements datastore-specific API logic.
backup_incremental_strategy = {} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.experimental.redis_impl (String) Namespace to load backup strategies from.
backup_strategy = RedisBackup (String) Default strategy to perform backups.
cluster_support = True (Boolean) Enable clusters to be created and managed.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = (String) List of Guest Logs to expose for publishing.
guestagent_strategy = trove.common.strategies.cluster.experimental.redis.guestagent.RedisGuestAgentStrategy (String) Class that implements datastore-specific Guest Agent API logic.
icmp = False (Boolean) Whether to permit ICMP.
mount_point = /var/lib/redis (String) Filesystem path for mounting volumes if volume support is enabled.
replication_namespace = trove.guestagent.strategies.replication.experimental.redis_sync (String) Namespace to load replication strategies from.
replication_strategy = RedisSyncReplication (String) Default strategy for replication.
restore_namespace = trove.guestagent.strategies.restore.experimental.redis_impl (String) Namespace to load restore strategies from.
root_controller = trove.extensions.common.service.DefaultRootController (String) Root controller implementation for redis.
taskmanager_strategy = trove.common.strategies.cluster.experimental.redis.taskmanager.RedisTaskManagerStrategy (String) Class that implements datastore-specific task manager logic.
tcp_ports = 6379, 16379 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
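As a sketch using the [redis] defaults documented above, a cluster-capable Redis datastore section could read:

```ini
[redis]
cluster_support = True
backup_strategy = RedisBackup
# Client port and cluster-bus port, opened in the security group
# only if trove_security_groups_support is True.
tcp_ports = 6379, 16379
volume_support = True
mount_point = /var/lib/redis
```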
Description of Vertica database configuration options
Configuration option = Default value Description
[vertica]  
api_strategy = trove.common.strategies.cluster.experimental.vertica.api.VerticaAPIStrategy (String) Class that implements datastore-specific API logic.
backup_incremental_strategy = {} (Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental, the runner will use the default full backup.
backup_namespace = None (String) Namespace to load backup strategies from.
backup_strategy = None (String) Default strategy to perform backups.
cluster_member_count = 3 (Integer) Number of members in Vertica cluster.
cluster_support = True (Boolean) Enable clusters to be created and managed.
default_password_length = 36 (Integer) Character length of generated passwords.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
guest_log_exposed_logs = (String) List of Guest Logs to expose for publishing.
guestagent_strategy = trove.common.strategies.cluster.experimental.vertica.guestagent.VerticaGuestAgentStrategy (String) Class that implements datastore-specific Guest Agent API logic.
icmp = False (Boolean) Whether to permit ICMP.
min_ksafety = 0 (Integer) Minimum k-safety setting permitted for Vertica clusters.
mount_point = /var/lib/vertica (String) Filesystem path for mounting volumes if volume support is enabled.
readahead_size = 2048 (Integer) Size (in MB) to be set as readahead_size for the data volume.
replication_strategy = None (String) Default strategy for replication.
restore_namespace = None (String) Namespace to load restore strategies from.
root_controller = trove.extensions.vertica.service.VerticaRootController (String) Root controller implementation for Vertica.
taskmanager_strategy = trove.common.strategies.cluster.experimental.vertica.taskmanager.VerticaTaskManagerStrategy (String) Class that implements datastore-specific task manager logic.
tcp_ports = 5433, 5434, 22, 5444, 5450, 4803 (List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
udp_ports = 5433, 4803, 4804, 6453 (List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True).
volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
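The cluster-related [vertica] options above combine as in this minimal sketch (values are the documented defaults):

```ini
[vertica]
# A three-member cluster with the minimum permitted k-safety.
cluster_member_count = 3
min_ksafety = 0
# Readahead size in MB for the data volume.
readahead_size = 2048
mount_point = /var/lib/vertica
```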

Database log files

The corresponding log file of each Database service is stored in the /var/log/trove/ directory of the host on which each service runs.

Log files used by Database services
Log filename Service that logs to the file
trove-api.log Database service API Service
trove-conductor.log Database service Conductor Service
trove-guestagent.log Database service guestagent Service
trove-taskmanager.log Database service taskmanager Service

New, updated, and deprecated options in Newton for Database service

New options
Option = default value (Type) Help string
[cassandra] default_password_length = 36 (IntOpt) Character length of generated passwords.
[cassandra] icmp = False (BoolOpt) Whether to permit ICMP.
[cassandra] system_log_level = INFO (StrOpt) Cassandra log verbosity.
[couchbase] default_password_length = 24 (IntOpt) Character length of generated passwords.
[couchbase] icmp = False (BoolOpt) Whether to permit ICMP.
[couchdb] default_password_length = 36 (IntOpt) Character length of generated passwords.
[couchdb] icmp = False (BoolOpt) Whether to permit ICMP.
[db2] default_password_length = 36 (IntOpt) Character length of generated passwords.
[db2] icmp = False (BoolOpt) Whether to permit ICMP.
[mariadb] default_password_length = ${mysql.default_password_length} (IntOpt) Character length of generated passwords.
[mariadb] icmp = False (BoolOpt) Whether to permit ICMP.
[mongodb] default_password_length = 36 (IntOpt) Character length of generated passwords.
[mongodb] icmp = False (BoolOpt) Whether to permit ICMP.
[mysql] default_password_length = 36 (IntOpt) Character length of generated passwords.
[mysql] icmp = False (BoolOpt) Whether to permit ICMP.
[percona] default_password_length = ${mysql.default_password_length} (IntOpt) Character length of generated passwords.
[percona] icmp = False (BoolOpt) Whether to permit ICMP.
[postgresql] default_password_length = 36 (IntOpt) Character length of generated passwords.
[postgresql] icmp = False (BoolOpt) Whether to permit ICMP.
[postgresql] replication_namespace = trove.guestagent.strategies.replication.experimental.postgresql_impl (StrOpt) Namespace to load replication strategies from.
[postgresql] replication_strategy = PostgresqlReplicationStreaming (StrOpt) Default strategy for replication.
[postgresql] wal_archive_location = /mnt/wal_archive (StrOpt) Filesystem path storing WAL archive files when WAL-shipping based backups or replication is enabled.
[pxc] default_password_length = ${mysql.default_password_length} (IntOpt) Character length of generated passwords.
[pxc] icmp = False (BoolOpt) Whether to permit ICMP.
[redis] default_password_length = 36 (IntOpt) Character length of generated passwords.
[redis] icmp = False (BoolOpt) Whether to permit ICMP.
[vertica] default_password_length = 36 (IntOpt) Character length of generated passwords.
[vertica] icmp = False (BoolOpt) Whether to permit ICMP.
New default values
Option Previous default value New default value
[DEFAULT] agent_call_high_timeout 60 600
[DEFAULT] agent_call_low_timeout 5 15
[DEFAULT] dns_auth_url   http://0.0.0.0
[DEFAULT] dns_endpoint_url 0.0.0.0 http://0.0.0.0
[DEFAULT] dns_hostname   localhost
[DEFAULT] dns_management_base_url   http://0.0.0.0
[DEFAULT] max_accepted_volume_size 5 10
[DEFAULT] max_instances_per_tenant 5 10
[DEFAULT] max_volumes_per_tenant 20 40
[DEFAULT] module_types ping ping, new_relic_license
[DEFAULT] resize_time_out 600 900
[DEFAULT] state_change_wait_time 180 600
[DEFAULT] usage_timeout 900 1800
[cassandra] guest_log_exposed_logs   system
[db2] backup_strategy DB2Backup DB2OfflineBackup
[mariadb] backup_incremental_strategy {'InnoBackupEx': 'InnoBackupExIncremental'} {'MariaDBInnoBackupEx': 'MariaDBInnoBackupExIncremental'}
[mariadb] backup_namespace trove.guestagent.strategies.backup.mysql_impl trove.guestagent.strategies.backup.experimental.mariadb_impl
[mariadb] backup_strategy InnoBackupEx MariaDBInnoBackupEx
[mariadb] restore_namespace trove.guestagent.strategies.restore.mysql_impl trove.guestagent.strategies.restore.experimental.mariadb_impl
[mongodb] root_controller trove.extensions.common.service.DefaultRootController trove.extensions.mongodb.service.MongoDBRootController
[mongodb] tcp_ports 2500, 27017 2500, 27017, 27019
[postgresql] backup_incremental_strategy {} {'PgBaseBackup': 'PgBaseBackupIncremental'}
[postgresql] backup_strategy PgDump PgBaseBackup
[postgresql] ignore_dbs postgres os_admin, postgres
[postgresql] root_controller trove.extensions.common.service.DefaultRootController trove.extensions.postgresql.service.PostgreSQLRootController
Deprecated options
Deprecated option New Option
[DEFAULT] default_password_length [couchbase] default_password_length
[DEFAULT] default_password_length [redis] default_password_length
[DEFAULT] default_password_length [cassandra] default_password_length
[DEFAULT] default_password_length [mysql] default_password_length
[DEFAULT] default_password_length [mariadb] default_password_length
[DEFAULT] default_password_length [postgresql] default_password_length
[DEFAULT] default_password_length [vertica] default_password_length
[DEFAULT] default_password_length [pxc] default_password_length
[DEFAULT] default_password_length [percona] default_password_length
[DEFAULT] default_password_length [mongodb] default_password_length
[DEFAULT] default_password_length [db2] default_password_length
[DEFAULT] default_password_length [couchdb] default_password_length
[DEFAULT] use_syslog None

The Database service provides scalable and reliable cloud Database-as-a-Service functionality for both relational and non-relational database engines.

The following tables provide a comprehensive list of the Database service configuration options.

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_roles = admin (List) Roles to add to an admin user.
api_paste_config = api-paste.ini (String) File name for the paste.deploy config for trove-api.
bind_host = 0.0.0.0 (IP) IP address the API server will listen on.
bind_port = 8779 (Port number) Port the API server will listen on.
black_list_regex = None (String) Exclude IP addresses that match this regular expression.
db_api_implementation = trove.db.sqlalchemy.api (String) API Implementation for Trove database access.
hostname_require_valid_ip = True (Boolean) Require user hostnames to be valid IP addresses.
http_delete_rate = 200 (Integer) Maximum number of HTTP ‘DELETE’ requests (per minute).
http_get_rate = 200 (Integer) Maximum number of HTTP ‘GET’ requests (per minute).
http_mgmt_post_rate = 200 (Integer) Maximum number of management HTTP ‘POST’ requests (per minute).
http_post_rate = 200 (Integer) Maximum number of HTTP ‘POST’ requests (per minute).
http_put_rate = 200 (Integer) Maximum number of HTTP ‘PUT’ requests (per minute).
injected_config_location = /etc/trove/conf.d (String) Path to folder on the Guest where config files will be injected during instance creation.
instances_page_size = 20 (Integer) Page size for listing instances.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
os_region_name = RegionOne (String) Region name of this node. Used when searching catalog.
region = LOCAL_DEV (String) The region in which this service is located.
tcp_keepidle = 600 (Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
trove_api_workers = None (Integer) Number of workers for the API service. The default will be the number of CPUs available.
trove_auth_url = http://0.0.0.0:5000/v2.0 (URI) Trove authentication URL.
trove_conductor_workers = None (Integer) Number of workers for the Conductor service. The default will be the number of CPUs available.
trove_security_group_name_prefix = SecGroup (String) Prefix to use when creating Security Groups.
trove_security_group_rule_cidr = 0.0.0.0/0 (String) CIDR to use when creating Security Group Rules.
trove_security_groups_support = True (Boolean) Whether Trove should add Security Groups on create.
users_page_size = 20 (Integer) Page size for listing users.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
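Using the API options documented above, a basic trove-api listener with the default per-minute rate limits might be configured as follows (the worker count is an illustrative override; the default of None uses the number of CPUs):

```ini
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 8779
# Cap each HTTP verb at 200 requests per minute (the defaults).
http_get_rate = 200
http_post_rate = 200
http_put_rate = 200
http_delete_rate = 200
# Illustrative override; default None = number of CPUs.
trove_api_workers = 4
```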
Description of backup configuration options
Configuration option = Default value Description
[DEFAULT]  
backup_aes_cbc_key = default_aes_cbc_key (String) Default OpenSSL aes_cbc key.
backup_chunk_size = 65536 (Integer) Chunk size (in bytes) to stream to the Swift container. This should be in multiples of 128 bytes, since this is the size of an md5 digest block allowing the process to update the file checksum during streaming. See: http://stackoverflow.com/questions/1131220/
backup_runner = trove.guestagent.backup.backup_types.InnoBackupEx (String) Runner to use for backups.
backup_runner_options = {} (Dict) Additional options to be passed to the backup runner.
backup_segment_max_size = 2147483648 (Integer) Maximum size (in bytes) of each segment of the backup file.
backup_swift_container = database_backups (String) Swift container to put backups in.
backup_use_gzip_compression = True (Boolean) Compress backups using gzip.
backup_use_openssl_encryption = True (Boolean) Encrypt backups using OpenSSL.
backup_use_snet = False (Boolean) Send backup files over snet.
backups_page_size = 20 (Integer) Page size for listing backups.
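A backup configuration combining the options above could look like this sketch; the AES key shown is the documented default and should be replaced in any real deployment:

```ini
[DEFAULT]
backup_swift_container = database_backups
# 64 KiB chunks (a multiple of 128 bytes) let the md5 checksum be
# updated while streaming to Swift.
backup_chunk_size = 65536
backup_use_gzip_compression = True
backup_use_openssl_encryption = True
# Default key; change this in production.
backup_aes_cbc_key = default_aes_cbc_key
```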
Description of clients configuration options
Configuration option = Default value Description
[DEFAULT]  
remote_cinder_client = trove.common.remote.cinder_client (String) Client to send Cinder calls to.
remote_dns_client = trove.common.remote.dns_client (String) Client to send DNS calls to.
remote_guest_client = trove.common.remote.guest_client (String) Client to send Guest Agent calls to.
remote_heat_client = trove.common.remote.heat_client (String) Client to send Heat calls to.
remote_neutron_client = trove.common.remote.neutron_client (String) Client to send Neutron calls to.
remote_nova_client = trove.common.remote.nova_client (String) Client to send Nova calls to.
remote_swift_client = trove.common.remote.swift_client (String) Client to send Swift calls to.
Description of cluster configuration options
Configuration option = Default value Description
[DEFAULT]  
cluster_delete_time_out = 180 (Integer) Maximum time (in seconds) to wait for a cluster delete.
cluster_usage_timeout = 36000 (Integer) Maximum time (in seconds) to wait for a cluster to become active.
clusters_page_size = 20 (Integer) Page size for listing clusters.
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
configurations_page_size = 20 (Integer) Page size for listing configurations.
databases_page_size = 20 (Integer) Page size for listing databases.
default_datastore = None (String) The default datastore id or name to use if one is not provided by the user. If the default value is None, the field becomes required in the instance create request.
default_neutron_networks = (List) List of IDs for management networks which should be attached to the instance regardless of what NICs are specified in the create API call.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
expected_filetype_suffixes = json (List) Filetype endings not to be reattached to an ID by the utils method correct_id_with_req.
format_options = -m 5 (String) Options to use when formatting a volume.
host = 0.0.0.0 (IP) Host to listen for RPC messages.
module_aes_cbc_key = module_aes_cbc_key (String) OpenSSL aes_cbc key for module encryption.
module_types = ping, new_relic_license (List) A list of module types supported. A module type corresponds to the name of a ModuleDriver.
modules_page_size = 20 (Integer) Page size for listing modules.
network_label_regex = ^private$ (String) Regular expression to match Trove network labels.
notification_service_id = {'mongodb': 'c8c907af-7375-456f-b929-b637ff9209ee', 'percona': 'fd1723f5-68d2-409c-994f-a4a197892a17', 'mysql': '2f3ff068-2bfb-4f70-9a9d-a6bb65bc084b', 'pxc': '75a628c3-f81b-4ffb-b10a-4087c26bc854', 'db2': 'e040cd37-263d-4869-aaa6-c62aa97523b5', 'cassandra': '459a230d-4e97-4344-9067-2a54a310b0ed', 'mariadb': '7a4f82cc-10d2-4bc6-aadc-d9aacc2a3cb5', 'postgresql': 'ac277e0d-4f21-40aa-b347-1ea31e571720', 'couchbase': 'fa62fe68-74d9-4779-a24e-36f19602c415', 'couchdb': 'f0a9ab7b-66f7-4352-93d7-071521d44c7c', 'redis': 'b216ffc5-1947-456c-a4cf-70f94c05f7d0', 'vertica': 'a8d805ae-a3b2-c4fd-gb23-b62cee5201ae'} (Dict) Unique ID to tag notification events.
num_tries = 3 (Integer) Number of times to check if a volume exists.
pybasedir = /usr/lib/python/site-packages/trove/trove (String) Directory where the Trove python module is installed.
pydev_path = None (String) Set path to pydevd library, used if pydevd is not found in python sys.path.
quota_notification_interval = 3600 (Integer) Seconds to wait between pushing events.
report_interval = 30 (Integer) The interval (in seconds) which periodic tasks are run.
sql_query_logging = False (Boolean) Allow insecure logging while executing queries through SQLAlchemy.
taskmanager_queue = taskmanager (String) Message queue name the Taskmanager will listen to.
template_path = /etc/trove/templates/ (String) Path which leads to datastore templates.
timeout_wait_for_service = 120 (Integer) Maximum time (in seconds) to wait for a service to become alive.
usage_timeout = 1800 (Integer) Maximum time (in seconds) to wait for a Guest to become active.
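Some of the common options above are frequently set explicitly; in this sketch, the datastore name "mysql" is hypothetical and must match a datastore registered in your deployment:

```ini
[DEFAULT]
# Hypothetical datastore name; must exist in your catalog.
default_datastore = mysql
network_label_regex = ^private$
usage_timeout = 1800
```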
Description of Compute configuration options
Configuration option = Default value Description
[DEFAULT]  
ip_regex = None (String) List IP addresses that match this regular expression.
nova_client_version = 2.12 (String) The version of the compute service client.
nova_compute_endpoint_type = publicURL (String) Service endpoint type to use when searching catalog.
nova_compute_service_type = compute (String) Service type to use when searching catalog.
nova_compute_url = None (URI) URL without the tenant segment.
root_grant = ALL (List) Permissions to grant to the ‘root’ user.
root_grant_option = True (Boolean) Assign the ‘root’ user GRANT permissions.
Description of logging configuration options
Configuration option = Default value Description
[DEFAULT]  
backlog = 4096 (Integer) Number of backlog requests to configure the socket with.
pydev_debug = disabled (String) Enable or disable pydev remote debugging. If value is ‘auto’ tries to connect to remote debugger server, but in case of error continues running with debugging disabled.
pydev_debug_host = None (String) Pydev debug server host (localhost by default).
pydev_debug_port = 5678 (Port number) Pydev debug server port (5678 by default).
[profiler]  
connection_string = messaging://

(String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging.

Examples of possible values:

  • messaging://: use oslo_messaging driver for sending notifications.
enabled = False

(Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature).

Possible values:

  • True: Enables the feature
  • False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty.
hmac_keys = SECRET_KEY

(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project.

Both the “enabled” flag and the “hmac_keys” config option must be set to enable profiling. Also, to generate correct profiling information across all services, at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate a trace containing information from all possible resources.

trace_sqlalchemy = False

(Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced).

Possible values:

  • True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it.
  • False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.
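Taken together, the [profiler] options above can be combined as in this sketch; SECRET_KEY is the placeholder from the table, and both enabled and hmac_keys must be set for profiling to work:

```ini
[profiler]
enabled = True
# Placeholder key from the table above; use your own random value(s).
hmac_keys = SECRET_KEY
# Default notifier backend (oslo_messaging).
connection_string = messaging://
# Also trace SQL queries (off by default).
trace_sqlalchemy = True
```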
Description of DNS configuration options
Configuration option = Default value Description
[DEFAULT]  
dns_account_id = (String) Tenant ID for DNSaaS.
dns_auth_url = http://0.0.0.0 (URI) Authentication URL for DNSaaS.
dns_domain_id = (String) Domain ID used for adding DNS entries.
dns_domain_name = (String) Domain name used for adding DNS entries.
dns_driver = trove.dns.driver.DnsDriver (String) Driver for DNSaaS.
dns_endpoint_url = http://0.0.0.0 (URI) Endpoint URL for DNSaaS.
dns_hostname = localhost (Hostname) Hostname used for adding DNS entries.
dns_instance_entry_factory = trove.dns.driver.DnsInstanceEntryFactory (String) Factory for adding DNS entries.
dns_management_base_url = http://0.0.0.0 (URI) Management URL for DNSaaS.
dns_passkey = (String) Passkey for DNSaaS.
dns_region = (String) Region name for DNSaaS.
dns_service_type = (String) Service Type for DNSaaS.
dns_time_out = 120 (Integer) Maximum time (in seconds) to wait for a DNS entry add.
dns_ttl = 300 (Integer) Time (in seconds) before a refresh of DNS information occurs.
dns_username = (String) Username for DNSaaS.
trove_dns_support = False (Boolean) Whether Trove should add DNS entries on create (using Designate DNSaaS).
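Enabling DNSaaS support means turning on trove_dns_support and filling in the endpoint options above; this sketch keeps the documented defaults and leaves deployment-specific values (account, domain, credentials) unset:

```ini
[DEFAULT]
trove_dns_support = True
dns_driver = trove.dns.driver.DnsDriver
dns_hostname = localhost
# Wait up to 120 s for a DNS entry add; refresh records every 300 s.
dns_time_out = 120
dns_ttl = 300
```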
Description of guest agent configuration options
Configuration option = Default value Description
[DEFAULT]  
agent_call_high_timeout = 600 (Integer) Maximum time (in seconds) to wait for Guest Agent ‘slow’ requests (such as restarting the database).
agent_call_low_timeout = 15 (Integer) Maximum time (in seconds) to wait for Guest Agent ‘quick’ requests (such as retrieving a list of users or databases).
agent_heartbeat_expiry = 60 (Integer) Time (in seconds) after which a guest is considered unreachable.
agent_heartbeat_time = 10 (Integer) Maximum time (in seconds) for the Guest Agent to reply to a heartbeat request.
agent_replication_snapshot_timeout = 36000 (Integer) Maximum time (in seconds) to wait for taking a Guest Agent replication snapshot.
guest_config = /etc/trove/trove-guestagent.conf (String) Path to the Guest Agent config file to be injected during instance creation.
guest_id = None (String) ID of the Guest Instance.
guest_info = guest_info.conf (String) The guest info filename found in the injected config location. If a full path is specified then it will be used as the path to the guest info file.
guest_log_container_name = database_logs (String) Name of container that stores guest log components.
guest_log_expiry = 2592000 (Integer) Expiry (in seconds) of objects in guest log container.
guest_log_limit = 1000000 (Integer) Maximum size of a chunk saved in guest log container.
mount_options = defaults,noatime (String) Options to use when mounting a volume.
storage_namespace = trove.common.strategies.storage.swift (String) Namespace to load the default storage strategy from.
storage_strategy = SwiftStorage (String) Default strategy to store backups.
usage_sleep_time = 5 (Integer) Time to sleep during the check for an active Guest.
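The guest agent timing options above interact: a guest is declared unreachable once agent_heartbeat_expiry elapses without a heartbeat, while heartbeats are expected within agent_heartbeat_time. A sketch using the documented defaults:

```ini
[DEFAULT]
guest_config = /etc/trove/trove-guestagent.conf
guest_info = guest_info.conf
# Heartbeat expected within 10 s; guest unreachable after 60 s.
agent_heartbeat_time = 10
agent_heartbeat_expiry = 60
# 'Slow' calls (e.g. database restart) may take up to 600 s.
agent_call_high_timeout = 600
```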
Description of Orchestration module configuration options
Configuration option = Default value Description
[DEFAULT]  
heat_endpoint_type = publicURL (String) Service endpoint type to use when searching catalog.
heat_service_type = orchestration (String) Service type to use when searching catalog.
heat_time_out = 60 (Integer) Maximum time (in seconds) to wait for a Heat request to complete.
heat_url = None (URI) URL without the tenant segment.
Description of network configuration options
Configuration option = Default value Description
[DEFAULT]  
network_driver = trove.network.nova.NovaNetwork (String) Describes the actual network manager used for the management of network attributes (security groups, floating IPs, etc.).
neutron_endpoint_type = publicURL (String) Service endpoint type to use when searching catalog.
neutron_service_type = network (String) Service type to use when searching catalog.
neutron_url = None (URI) URL without the tenant segment.
Description of nova configuration options
Configuration option = Default value Description
[DEFAULT]  
nova_proxy_admin_pass = (String) Admin password used to connect to Nova.
nova_proxy_admin_tenant_id = (String) Admin tenant ID used to connect to Nova.
nova_proxy_admin_tenant_name = (String) Admin tenant name used to connect to Nova.
nova_proxy_admin_user = (String) Admin username used to connect to Nova.
Description of quota configuration options
Configuration option = Default value Description
[DEFAULT]  
max_accepted_volume_size = 10 (Integer) Default maximum volume size (in GB) for an instance.
max_backups_per_tenant = 50 (Integer) Default maximum number of backups created by a tenant.
max_instances_per_tenant = 10 (Integer) Default maximum number of instances per tenant.
max_volumes_per_tenant = 40 (Integer) Default maximum volume capacity (in GB) spanning across all Trove volumes per tenant.
quota_driver = trove.quota.quota.DbQuotaDriver (String) Default driver to use for quota checks.
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of swift configuration options
Configuration option = Default value Description
[DEFAULT]  
swift_endpoint_type = publicURL (String) Service endpoint type to use when searching catalog.
swift_service_type = object-store (String) Service type to use when searching catalog.
swift_url = None (URI) URL ending in AUTH_.
Description of taskmanager configuration options
Configuration option = Default value Description
[DEFAULT]  
cloudinit_location = /etc/trove/cloudinit (String) Path to folder with cloudinit scripts.
datastore_manager = None (String) Manager class in the Guest Agent, set up by the Taskmanager on instance provision.
datastore_registry_ext = {} (Dict) Extension for default datastore managers. Allows the use of custom managers for each of the datastores supported by Trove.
exists_notification_interval = 3600 (Integer) Seconds to wait between pushing events.
exists_notification_transformer = None (String) Transformer for exists notifications.
reboot_time_out = 120 (Integer) Maximum time (in seconds) to wait for a server reboot.
resize_time_out = 900 (Integer) Maximum time (in seconds) to wait for a server resize.
restore_usage_timeout = 36000 (Integer) Maximum time (in seconds) to wait for a Guest instance restored from a backup to become active.
revert_time_out = 600 (Integer) Maximum time (in seconds) to wait for a server resize revert.
server_delete_time_out = 60 (Integer) Maximum time (in seconds) to wait for a server delete.
state_change_poll_time = 3 (Integer) Interval between state change poll requests (seconds).
state_change_wait_time = 600 (Integer) Maximum time (in seconds) to wait for a state change.
update_status_on_fail = True (Boolean) Set the service and instance task statuses to ERROR when an instance fails to become active within the configured usage_timeout.
usage_sleep_time = 5 (Integer) Time to sleep during the check for an active Guest.
use_heat = False (Boolean) Use Heat for provisioning.
use_nova_server_config_drive = True (Boolean) Use config drive for file injection when booting instance.
use_nova_server_volume = False (Boolean) Whether to provision a Cinder volume for the Nova instance.
verify_swift_checksum_on_restore = True (Boolean) Enable verification of Swift checksum before starting restore. Makes sure the checksum of original backup matches the checksum of the Swift backup file.
Description of upgrades configuration options
Configuration option = Default value Description
[upgrade_levels]  
conductor = icehouse (String) Set a version cap for messages sent to conductor services
guestagent = icehouse (String) Set a version cap for messages sent to guestagent services
taskmanager = icehouse (String) Set a version cap for messages sent to taskmanager services
Description of volume configuration options
Configuration option = Default value Description
[DEFAULT]  
block_device_mapping = vdb (String) Block device to map onto the created instance.
cinder_endpoint_type = publicURL (String) Service endpoint type to use when searching catalog.
cinder_service_type = volumev2 (String) Service type to use when searching catalog.
cinder_url = None (URI) URL without the tenant segment.
cinder_volume_type = None (String) Volume type to use when provisioning a Cinder volume.
device_path = /dev/vdb (String) Device path for volume if volume support is enabled.
trove_volume_support = True (Boolean) Whether to provision a Cinder volume for datadir.
volume_format_timeout = 120 (Integer) Maximum time (in seconds) to wait for a volume format.
volume_fstype = ext3 (String) File system type used to format a volume.
volume_time_out = 60 (Integer) Maximum time (in seconds) to wait for a volume attach.
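As an illustration, a deployment that provisions Cinder-backed volumes for instances might combine several of the options above in trove.conf (the values shown are illustrative, not recommendations):

    [DEFAULT]
    # Provision a Cinder volume for each instance's datadir
    trove_volume_support = True
    cinder_service_type = volumev2
    cinder_endpoint_type = publicURL
    # Format the volume as ext3 and allow two minutes for the format
    volume_fstype = ext3
    volume_format_timeout = 120
    device_path = /dev/vdb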

Identity service

Identity API configuration

Configuration options

The Identity API can be configured by changing the following options:

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_endpoint = None (String) The base admin endpoint URL for Keystone that is advertised to clients (NOTE: this does NOT affect how Keystone listens for connections). Defaults to the base host URL of the request. For example, if keystone receives a request to http://server:35357/v3/users, then this option will automatically be treated as http://server:35357. You should only need to set this option if either the value of the base URL contains a path that keystone does not automatically infer (/prefix/v3), or if the endpoint should be found on a different host.
admin_token = None (String) Using this feature is NOT recommended. Instead, use the keystone-manage bootstrap command. The value of this option is treated as a “shared secret” that can be used to bootstrap Keystone through the API. This “token” does not represent a user (it has no identity), and carries no explicit authorization (it effectively bypasses most authorization checks). If set to None, the value is ignored and the admin_token middleware is effectively disabled. However, to completely disable admin_token in production (highly recommended, as it presents a security risk), remove AdminTokenAuthMiddleware (the admin_token_auth filter) from your paste application pipelines (for example, in keystone-paste.ini).
domain_id_immutable = True (Boolean) DEPRECATED: Set this to false if you want to enable the ability for user, group and project entities to be moved between domains by updating their domain_id attribute. Allowing such movement is not recommended if the scope of a domain admin is being restricted by use of an appropriate policy file (see etc/policy.v3cloudsample.json as an example). This feature is deprecated and will be removed in a future release, in favor of strictly immutable domain IDs. The option to set domain_id_immutable to false has been deprecated in the M release and will be removed in the O release.
list_limit = None (Integer) The maximum number of entities that will be returned in a collection. This global limit may be then overridden for a specific driver, by specifying a list_limit in the appropriate section (for example, [assignment]). No limit is set by default. In larger deployments, it is recommended that you set this to a reasonable number to prevent operations like listing all users and projects from placing an unnecessary load on the system.
max_param_size = 64 (Integer) Limit the sizes of user & project ID/names.
max_project_tree_depth = 5 (Integer) Maximum depth of the project hierarchy, excluding the project acting as a domain at the top of the hierarchy. WARNING: Setting it to a large value may adversely impact performance.
max_token_size = 8192 (Integer) Similar to [DEFAULT] max_param_size, but provides an exception for token values. With PKI / PKIZ tokens, this needs to be set close to 8192 (any higher, and other HTTP implementations may break), depending on the size of your service catalog and other factors. With Fernet tokens, this can be set as low as 255. With UUID tokens, this should be set to 32.
member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab (String) Similar to the [DEFAULT] member_role_name option, this represents the default role ID used to associate users with their default projects in the v2 API. This will be used as the explicit role where one is not specified by the v2 API. You do not need to set this value unless you want keystone to use an existing role with a different ID, other than the arbitrarily defined _member_ role (in which case, you should set [DEFAULT] member_role_name as well).
member_role_name = _member_ (String) This is the role name used in combination with the [DEFAULT] member_role_id option; see that option for more detail. You do not need to set this option unless you want keystone to use an existing role (in which case, you should set [DEFAULT] member_role_id as well).
public_endpoint = None (String) The base public endpoint URL for Keystone that is advertised to clients (NOTE: this does NOT affect how Keystone listens for connections). Defaults to the base host URL of the request. For example, if keystone receives a request to http://server:5000/v3/users, then this option will automatically be treated as http://server:5000. You should only need to set this option if either the value of the base URL contains a path that keystone does not automatically infer (/prefix/v3), or if the endpoint should be found on a different host.
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO (String) DEPRECATED: The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. This option has been deprecated in the N release and will be removed in the P release. Use oslo.middleware.http_proxy_to_wsgi configuration instead.
strict_password_check = False (Boolean) If set to true, strict password length checking is performed for password manipulation. If a password exceeds the maximum length, the operation will fail with an HTTP 403 Forbidden error. If set to false, passwords are automatically truncated to the maximum length.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
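For example, a keystone.conf fragment that advertises explicit endpoints and caps collection sizes might look like the following (the host names and the limit value are placeholders, not recommendations):

    [DEFAULT]
    public_endpoint = https://keystone.example.com:5000
    admin_endpoint = https://keystone.example.com:35357
    list_limit = 100

    [oslo_middleware]
    enable_proxy_headers_parsing = True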

Token provider

OpenStack Identity supports customizable token providers. This is specified in the [token] section of the configuration file. The token provider controls the token construction, validation, and revocation operations.

You can register your own token provider by configuring the following property:

Note

More commonly, you can use this option to change the token provider to one of the built-in providers. Alternatively, you can use it to configure your own token provider.

  • provider - token provider driver. Defaults to uuid. Implemented by keystone.token.providers.uuid.Provider. This is the entry point for the token provider in the keystone.token.provider namespace.

Each token format uses different technologies to achieve various performance, scaling, and architectural requirements. The Identity service includes fernet, pkiz, pki, and uuid token providers.

Below is the detailed list of the token formats:

UUID
uuid tokens must be persisted (using the back end specified in the [token] driver option), but do not require any extra configuration or setup.
PKI and PKIZ
pki and pkiz tokens can be validated offline, without making HTTP calls to keystone. However, this format requires that certificates be installed and distributed to facilitate signing tokens and later validating those signatures.
Fernet
fernet tokens do not need to be persisted at all, but require that you run keystone-manage fernet_setup (also see the keystone-manage fernet_rotate command).
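For example, to switch to the Fernet provider, set the option in the [token] section (a minimal sketch; the provider name maps to an entry point in the keystone.token.provider namespace):

    [token]
    provider = fernet

Then run keystone-manage fernet_setup to create the key repository, as noted above.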

Warning

UUID, PKI, PKIZ, and Fernet tokens are all bearer tokens. They must be protected from unnecessary disclosure to prevent unauthorized access.

Federated Identity

You can use federation for the Identity service (keystone) in two ways:

  • Supporting keystone as an SP: consuming identity assertions issued by an external Identity Provider, such as SAML assertions or OpenID Connect claims.

  • Supporting keystone as an IdP: fulfilling authentication requests on behalf of Service Providers.

    Note

    It is also possible to have one keystone act as an SP that consumes Identity from another keystone acting as an IdP.

There is currently support for two major federation protocols: SAML v2.0 and OpenID Connect.

[Figure: Keystone federation (_images/keystone-federation.png)]

To enable federation:

  1. Run keystone under Apache. See Configure the Apache HTTP server for more information.

    Note

    Other application servers, such as nginx, have support for federation extensions that may work but are not tested by the community.

  2. Configure Apache to use a federation capable module. We recommend Shibboleth, see the Shibboleth documentation for more information.

    Note

    Another option is mod_auth_mellon; see the module's GitHub repository for more information.

  3. Configure federation in keystone.

Note

The external IdP is responsible for authenticating users and communicates the result of authentication to keystone using authentication assertions. Keystone maps these values to keystone user groups and assignments created in keystone.

Supporting keystone as an SP

To have keystone as an SP, you will need to configure keystone to accept assertions from external IdPs. Examples of external IdPs are:

  • ADFS
  • FreeIPA
  • Tivoli Access Manager
  • Keystone
Configuring federation in keystone
  1. Configure authentication drivers in keystone.conf by adding the authentication methods to the [auth] section in keystone.conf. Ensure the names are the same as the protocol names added via the Identity API v3.

    For example:

    [auth]
    methods = external,password,token,mapped,openid
    

    Note

    mapped and openid are the federation specific drivers. The other names in the example are not related to federation.

  2. Create local keystone groups and assign roles.

    Important

    Keystone requires group-based role assignments to authorize federated users. The federation mapping engine maps federated users into local user groups, which are the actors in keystone’s role assignments.

  3. Create an IdP object in keystone. The object must represent the IdP you will use to authenticate end users:

    PUT /OS-FEDERATION/identity_providers/{idp_id}
    

    More configuration information for IdPs can be found at http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#register-an-identity-provider.

  4. Add mapping rules:

    PUT /OS-FEDERATION/mappings/{mapping_id}
    

    More configuration information for mapping rules can be found at http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#create-a-mapping.

    Note

    The only keystone API objects that support mapping are groups and users.

  5. Add a protocol object and specify the mapping ID you want to use with the combination of the IdP and protocol:

    PUT /OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}
    

    More configuration information for protocols can be found at http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#add-a-protocol-and-attribute-mapping-to-an-identity-provider.

Performing federated authentication
  1. Authenticate externally and generate an unscoped token in keystone:

    Note

    Unlike other authentication methods in keystone, the user does not issue an HTTP POST request with authentication data in the request body. To start federated authentication, a user must access a dedicated protected URL that includes the identifiers of the IdP and the protocol. The URL has the format: /v3/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}/auth.

    GET/POST /OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth
    
  2. Determine accessible resources. By using the previously returned token, the user can issue requests to list the projects and domains that are accessible.

    • List projects a federated user can access: GET /OS-FEDERATION/projects
    • List domains a federated user can access: GET /OS-FEDERATION/domains
    GET /OS-FEDERATION/projects
    
  3. Get a scoped token. A federated user can request a scoped token using the unscoped token. A project or domain can be specified by either ID or name. An ID is sufficient to uniquely identify a project or domain.

    POST /auth/tokens
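    As a sketch, the scoping request in step 3 might look like the following (the token ID and project ID are placeholders):

    $ curl -s -X POST http://keystone.example.com:5000/v3/auth/tokens \
      -H "Content-Type: application/json" \
      -d '{"auth": {"identity": {"methods": ["token"],
                                 "token": {"id": "<unscoped-token-id>"}},
                    "scope": {"project": {"id": "<project-id>"}}}}'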
    
Supporting keystone as an IdP

When acting as an IdP, the primary role of keystone is to issue assertions about users owned by keystone. This is done using PySAML2.

Configuring federation in keystone

There are certain settings in keystone.conf that must be set up, prior to attempting to federate multiple keystone deployments.

  1. Within keystone.conf, assign values to the [saml] related fields, for example:

    [saml]
    certfile=/etc/keystone/ssl/certs/ca.pem
    keyfile=/etc/keystone/ssl/private/cakey.pem
    idp_entity_id=https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
    idp_sso_endpoint=https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
    idp_metadata_path=/etc/keystone/saml2_idp_metadata.xml
    
  2. We recommend setting the following Organization configuration options. Ensure these values contain no special characters that may cause problems as part of a URL:

    idp_organization_name=example_company
    idp_organization_display_name=Example Corp.
    idp_organization_url=example.com
    
  3. As with the Organization options, the Contact options are not necessary, but it is advisable to set these values:

    idp_contact_company=example_company
    idp_contact_name=John
    idp_contact_surname=Smith
    idp_contact_email=jsmith@example.com
    idp_contact_telephone=555-55-5555
    idp_contact_type=technical
    
Generate metadata

Metadata must be exchanged to create a trust between the IdP and the SP.

  1. To create metadata for your keystone IdP, run the keystone-manage command and pipe the output to a file. For example:

    $ keystone-manage saml_idp_metadata > /etc/keystone/saml2_idp_metadata.xml
    

    Note

    The file location must match the value of the idp_metadata_path configuration option assigned previously.

Create an SP

To set up keystone as a Service Provider properly, you need to understand which protocols the external IdPs support. For example, keystone as an SP can allow identities to federate in from an ADFS IdP, but it must be configured to understand the SAML v2.0 protocol, because ADFS issues assertions using SAML v2.0. Some examples of federated protocols include:

  • SAML v2.0
  • OpenID Connect

The following instructions are an example of how you can configure keystone as an SP.

  1. Create a new SP with an ID of BETA.

  2. Create an sp_url of http://beta.example.com/Shibboleth.sso/SAML2/ECP.

  3. Create an auth_url of http://beta.example.com:5000/v3/OS-FEDERATION/identity_providers/beta/protocols/saml2/auth.

    Note

    Use the sp_url when creating a SAML assertion for BETA and signed by the current keystone IdP. Use the auth_url when retrieving the token for BETA once the SAML assertion is sent.

  4. Set the enabled field to true. It is set to false by default.

  5. Your output should reflect the following example:

    $ curl -s -X PUT \
    -H "X-Auth-Token: $OS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"service_provider": {"auth_url": "http://beta.example.com:5000/v3/OS-FEDERATION/identity_providers/beta/protocols/saml2/auth", "sp_url": "http://beta.example.com/Shibboleth.sso/SAML2/ECP", "enabled": true}}' \
    http://localhost:5000/v3/OS-FEDERATION/service_providers/BETA | python -mjson.tool
    
keystone-to-keystone

One keystone acting as an IdP for another keystone is known as keystone-to-keystone (k2k) federation: one keystone deployment acts as the SP and another keystone acts as the IdP. An IdP issues assertions about the identities it owns using a protocol.

Mapping rules

Mapping adds a set of rules to map federation attributes to keystone users or groups. An IdP has exactly one mapping specified per protocol.

A mapping is a translation between assertions provided by an IdP and the permissions and roles applied by an SP. Given an assertion from an IdP, an SP applies a mapping to translate attributes from the IdP to known roles. A mapping is typically owned by an SP.

Mapping objects can be used multiple times by different combinations of IdP and protocol.

A rule hierarchy is as follows:

{
     "rules": [
        {
            "local": [
               {
                    "<user> or <group>"
                }
            ],
            "remote": [
                {
                    "<condition>"
                }
            ]
        }
    ]
}
  • rules: top-level list of rules.
  • local: a rule containing information on what local attributes will be mapped.
  • remote: a rule containing information on what remote attributes will be mapped.
  • condition: contains information on conditions that allow a rule; it can only be set in a remote rule.

For more information on mapping rules, see http://docs.openstack.org/developer/keystone/federation/federated_identity.html#mapping-rules.

Mapping creation

Mapping creation starts with the communication between the IdP and SP. The IdP usually provides a set of assertions that their users have in their assertion document. The SP will have to map those assertions to known groups and roles. For example:

Identity Provider 1:
  name: jsmith
  groups: hacker
  other: <assertion information>
The Service Provider may have 3 groups:
  Admin Group
  Developer Group
  User Group

The mapping created by the Service Provider might look like:
  Local:
  Group: Developer Group
Remote:
  Groups: hacker

The Developer Group may have a role assignment on the Developer Project. When jsmith authenticates against IdP 1, it presents that assertion to the SP. The SP maps the jsmith user to the Developer Group because the assertion says jsmith is a member of the hacker group.
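Expressed in the rule syntax shown above, that mapping might look like the following sketch (the domain ID and the remote attribute name groups are assumptions; the actual attribute name depends on what the IdP asserts):

{
    "rules": [
        {
            "local": [
                {
                    "group": {
                        "domain": {"id": "default"},
                        "name": "Developer Group"
                    }
                }
            ],
            "remote": [
                {
                    "type": "groups",
                    "any_one_of": ["hacker"]
                }
            ]
        }
    ]
}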

Mapping examples

A bare bones mapping is sufficient if you would like all federated users to have the same authorization in the SP cloud. However, mapping is quite powerful and flexible. You can map different remote users into different user groups in keystone, limited only by the number of assertions your IdP makes about each user.

A mapping is composed of a list of rules, and each rule is further composed of a list of remote attributes and a list of local attributes. If a rule is matched, all of the local attributes are applied in the SP. For a rule to match, all of the remote attributes it defines must match.

In the base case, a federated user simply needs an assertion containing an email address to be identified in the SP cloud. To achieve that, only one rule is needed that requires the presence of one remote attribute:

{
    "rules": [
        {
            "remote": [
                {
                    "type": "Email"
                }
            ],
            "local": [
                {
                    "user": {
                        "name": "{0}"
                    }
                }
            ]
        }
    ]
}

However, that is not particularly useful as the federated user would receive no authorization. To rectify it, you can map all federated users with email addresses into a federated-users group in the default domain. All federated users will then be able to consume whatever role assignments that user group has already received in keystone:

Note

In this example, there is only one rule requiring one remote attribute.

{
    "rules": [
        {
            "remote": [
                {
                    "type": "Email"
                }
            ],
            "local": [
                {
                    "user": {
                        "name": "{0}"
                    }
                },
                {
                    "group": {
                        "domain": {
                            "id": "0cd5e9"
                        },
                        "name": "federated-users"
                    }
                }
            ]
        }
    ]
}

This example can be expanded by adding a second rule that conveys additional authorization to only a subset of federated users. Federated users with a title attribute that matches either Manager or Supervisor are granted the hypothetical observer role, which would allow them to perform any read-only API call in the cloud:

{
    "rules": [
        {
            "remote": [
                {
                    "type": "Email"
                }
            ],
            "local": [
                {
                    "user": {
                        "name": "{0}"
                    }
                },
                {
                    "group": {
                        "domain": {
                            "id": "default"
                        },
                        "name": "federated-users"
                    }
                }
            ]
        },
        {
            "remote": [
                {
                    "type": "Title",
                    "any_one_of": [".*Manager$", "Supervisor"],
                    "regex": "true"
                }
            ],
            "local": [
                {
                    "group": {
                        "domain": {
                            "id": "default"
                        },
                        "name": "observers"
                    }
                }
            ]
        }
    ]
}

Note

any_one_of and regex in the rule above map federated users into the observers group when a user’s Title assertion matches any of the regular expressions specified in the any_one_of attribute.

Keystone also supports the following:

  • not_any_of: matches any assertion that does not include one of the specified values.
  • blacklist: matches all assertions of the specified type except those included in the specified value.
  • whitelist: does not match any assertion except those listed in the specified value.
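To make the matching semantics concrete, here is a simplified Python sketch of how a rule engine could evaluate a mapping like the two-rule example above against a set of assertion attributes. This is an illustration of the documented behavior, not keystone's actual mapping engine, and it only handles the plain-presence and any_one_of cases:

```python
import re

def _req_matches(req, assertion):
    """Check one remote requirement against the assertion attributes."""
    values = assertion.get(req["type"], [])
    if not values:
        return False  # the attribute must be present at all
    patterns = req.get("any_one_of")
    if patterns is None:
        return True  # a bare {"type": ...} requirement only needs presence
    if req.get("regex") == "true":
        return any(re.search(p, v) for p in patterns for v in values)
    return any(v in patterns for v in values)

def _substitute(obj, value):
    """Replace the {0} placeholder with the first matched remote value."""
    if isinstance(obj, str):
        return obj.replace("{0}", value)
    if isinstance(obj, dict):
        return {k: _substitute(v, value) for k, v in obj.items()}
    return obj

def apply_mapping(rules, assertion):
    """Collect local attributes from every rule whose remote part fully matches."""
    result = []
    for rule in rules:
        if all(_req_matches(r, assertion) for r in rule["remote"]):
            first_value = assertion[rule["remote"][0]["type"]][0]
            for local in rule["local"]:
                result.append(_substitute(local, first_value))
    return result

# The two-rule mapping from the example above, as Python data:
rules = [
    {"remote": [{"type": "Email"}],
     "local": [{"user": {"name": "{0}"}},
               {"group": {"domain": {"id": "default"},
                          "name": "federated-users"}}]},
    {"remote": [{"type": "Title",
                 "any_one_of": [".*Manager$", "Supervisor"],
                 "regex": "true"}],
     "local": [{"group": {"domain": {"id": "default"},
                          "name": "observers"}}]},
]

assertion = {"Email": ["jsmith@example.com"], "Title": ["Deputy Manager"]}
print(apply_mapping(rules, assertion))
# [{'user': {'name': 'jsmith@example.com'}},
#  {'group': {'domain': {'id': 'default'}, 'name': 'federated-users'}},
#  {'group': {'domain': {'id': 'default'}, 'name': 'observers'}}]
```

A user asserting only an email address would match the first rule and land in federated-users; a user whose Title matches one of the patterns would additionally land in observers.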

Additional configuration options for Identity service

The Identity service is configured in the /etc/keystone/keystone.conf file.

The following tables provide a comprehensive list of the Identity service options.

Description of assignment configuration options
Configuration option = Default value Description
[assignment]  
driver = None (String) Entrypoint for the assignment backend driver in the keystone.assignment namespace. Only an SQL driver is supplied. If an assignment driver is not specified, the identity driver will choose the assignment driver (driver selection based on [identity]/driver option is deprecated and will be removed in the “O” release).
prohibited_implied_role = admin (List) A list of role names which are prohibited from being an implied role.
Description of authorization configuration options
Configuration option = Default value Description
[auth]  
external = None (String) Entrypoint for the external (REMOTE_USER) auth plugin module in the keystone.auth.external namespace. Supplied drivers are DefaultDomain and Domain. The default driver is DefaultDomain.
methods = external, password, token, oauth1 (List) Allowed authentication methods.
oauth1 = None (String) Entrypoint for the oAuth1.0 auth plugin module in the keystone.auth.oauth1 namespace.
password = None (String) Entrypoint for the password auth plugin module in the keystone.auth.password namespace.
token = None (String) Entrypoint for the token auth plugin module in the keystone.auth.token namespace.
Description of CA and SSL configuration options
Configuration option = Default value Description
[eventlet_server_ssl]  
ca_certs = /etc/keystone/ssl/certs/ca.pem (String) DEPRECATED: Path of the CA cert file for SSL.
cert_required = False (Boolean) DEPRECATED: Require client certificate.
certfile = /etc/keystone/ssl/certs/keystone.pem (String) DEPRECATED: Path of the certfile for SSL. For non-production environments, you may be interested in using keystone-manage ssl_setup to generate self-signed certificates.
enable = False (Boolean) DEPRECATED: Toggle for SSL support on the Keystone eventlet servers.
keyfile = /etc/keystone/ssl/private/keystonekey.pem (String) DEPRECATED: Path of the keyfile for SSL.
[signing]  
ca_certs = /etc/keystone/ssl/certs/ca.pem (String) DEPRECATED: Path of the CA for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
ca_key = /etc/keystone/ssl/private/cakey.pem (String) DEPRECATED: Path of the CA key for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com (String) DEPRECATED: Certificate subject (auto generated certificate) for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
certfile = /etc/keystone/ssl/certs/signing_cert.pem (String) DEPRECATED: Path of the certfile for token signing. For non-production environments, you may be interested in using keystone-manage pki_setup to generate self-signed certificates. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
key_size = 2048 (Integer) DEPRECATED: Key size (in bits) for token signing cert (auto generated certificate). PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
keyfile = /etc/keystone/ssl/private/signing_key.pem (String) DEPRECATED: Path of the keyfile for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
valid_days = 3650 (Integer) DEPRECATED: Days the token signing cert is valid for (auto generated certificate). PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
[ssl]  
ca_key = /etc/keystone/ssl/private/cakey.pem (String) Path of the CA key file for SSL.
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=localhost (String) SSL certificate subject (auto generated certificate).
key_size = 1024 (Integer) SSL key length (in bits) (auto generated certificate).
valid_days = 3650 (Integer) Days the certificate is valid for once signed (auto generated certificate).
Description of catalog configuration options
Configuration option = Default value Description
[catalog]  
cache_time = None (Integer) Time to cache catalog data (in seconds). This has no effect unless global and catalog caching are enabled.
caching = True (Boolean) Toggle for catalog caching. This has no effect unless global caching is enabled.
driver = sql (String) Entrypoint for the catalog backend driver in the keystone.catalog namespace. Supplied drivers are kvs, sql, templated, and endpoint_filter.sql
list_limit = None (Integer) Maximum number of entities that will be returned in a catalog collection.
template_file = default_catalog.templates (String) Catalog template file name for use with the template catalog backend.
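The options above can be combined in keystone.conf. A minimal sketch (illustrative values only) that enables the SQL catalog backend with caching:

```ini
[catalog]
# Use the SQL-backed catalog driver (one of kvs, sql, templated,
# endpoint_filter.sql).
driver = sql
# Cache catalog data for five minutes; this takes effect only when
# global caching is also enabled.
caching = true
cache_time = 300
```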
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
insecure_debug = False (Boolean) If set to true, then the server will return information in HTTP responses that may allow an unauthenticated or authenticated user to get more information than normal, such as additional details about why authentication failed. This may be useful for debugging but is insecure.

Description of credential configuration options
Configuration option = Default value Description
[credential]  
driver = sql (String) Entrypoint for the credential backend driver in the keystone.credential namespace.
Description of logging configuration options
Configuration option = Default value Description
[audit]  
namespace = openstack (String) Namespace prefix for generated IDs.
Description of domain configuration options
Configuration option = Default value Description
[domain_config]  
cache_time = 300 (Integer) TTL (in seconds) to cache domain config data. This has no effect unless domain config caching is enabled.
caching = True (Boolean) Toggle for domain config caching. This has no effect unless global caching is enabled.
driver = sql (String) Entrypoint for the domain config backend driver in the keystone.resource.domain_config namespace.
Description of federation configuration options
Configuration option = Default value Description
[federation]  
assertion_prefix = (String) Value to be used when filtering assertion parameters from the environment.
driver = sql (String) Entrypoint for the federation backend driver in the keystone.federation namespace.
federated_domain_name = Federated (String) A domain name that is reserved to allow federated ephemeral users to have a domain concept. Note that an admin will not be able to create a domain with this name or update an existing domain to this name. You are not advised to change this value unless you really have to.
remote_id_attribute = None (String) Value to be used to obtain the entity ID of the Identity Provider from the environment (e.g. if using the mod_shib plugin this value is Shib-Identity-Provider).
sso_callback_template = /etc/keystone/sso_callback_template.html (String) Location of Single Sign-On callback handler, will return a token to a trusted dashboard host.
trusted_dashboard = [] (Multi-valued) A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of the trusted_dashboard list. This configuration option may be repeated for multiple values. For example: trusted_dashboard=http://acme.com/auth/websso trusted_dashboard=http://beta.com/auth/websso
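Because trusted_dashboard is multi-valued, the option is simply repeated once per host. A minimal [federation] sketch using the example hosts from the description above:

```ini
[federation]
# Repeat trusted_dashboard once per trusted host; a Single Sign-On
# request is only honored when its origin host appears in this list.
trusted_dashboard = http://acme.com/auth/websso
trusted_dashboard = http://beta.com/auth/websso
# Example value for deployments using the mod_shib Apache plugin.
remote_id_attribute = Shib-Identity-Provider
```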
Description of Fernet tokens configuration options
Configuration option = Default value Description
[fernet_tokens]  
key_repository = /etc/keystone/fernet-keys/ (String) Directory containing Fernet token keys.
max_active_keys = 3 (Integer) This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key, one primary key, and one secondary key. Increasing this value means that additional secondary keys will be kept in the rotation.
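For example, to keep two secondary keys in rotation instead of one, increase max_active_keys; the repository itself is created and rotated with the keystone-manage fernet_setup and keystone-manage fernet_rotate commands:

```ini
[fernet_tokens]
key_repository = /etc/keystone/fernet-keys/
# 1 staged + 1 primary + 2 secondary keys held in rotation.
max_active_keys = 4
```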
Description of identity configuration options
Configuration option = Default value Description
[identity]  
cache_time = 600 (Integer) Time to cache identity data (in seconds). This has no effect unless global and identity caching are enabled.
caching = True (Boolean) Toggle for identity caching. This has no effect unless global caching is enabled.
default_domain_id = default (String) This references the domain to use for all Identity API v2 requests (which are not aware of domains). A domain with this ID will be created for you by keystone-manage db_sync in migration 008. The domain referenced by this ID cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API. There is nothing special about this domain, other than the fact that it must exist in order to maintain support for your v2 clients.
domain_config_dir = /etc/keystone/domains (String) Path for Keystone to locate the domain specific identity configuration files if domain_specific_drivers_enabled is set to true.
domain_configurations_from_database = False (Boolean) Extract the domain specific configuration options from the resource backend where they have been stored with the domain data. This feature is disabled by default (in which case the domain specific options will be loaded from files in the domain configuration directory); set to true to enable.
domain_specific_drivers_enabled = False (Boolean) A subset (or all) of domains can have their own identity driver, each with their own partial configuration options, stored in either the resource backend or in a file in a domain configuration directory (depending on the setting of domain_configurations_from_database). Only values specific to the domain need to be specified in this manner. This feature is disabled by default; set to true to enable.
driver = sql (String) Entrypoint for the identity backend driver in the keystone.identity namespace. Supplied drivers are ldap and sql.
list_limit = None (Integer) Maximum number of entities that will be returned in an identity collection.
max_password_length = 4096 (Integer) Maximum supported length for user passwords; decrease to improve performance.
Description of KVS configuration options
Configuration option = Default value Description
[kvs]  
backends = (List) Extra dogpile.cache backend modules to register with the dogpile.cache library.
config_prefix = keystone.kvs (String) Prefix for building the configuration dictionary for the KVS region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.
default_lock_timeout = 5 (Integer) Default lock timeout (in seconds) for distributed locking.
enable_key_mangler = True (Boolean) Toggle to disable using a key-mangling function to ensure fixed-length keys. This is toggle-able for debugging purposes; it is highly recommended to always leave this set to true.
Description of LDAP configuration options
Configuration option = Default value Description
[ldap]  
alias_dereferencing = default (String) The LDAP dereferencing option for queries. The “default” option falls back to using default dereferencing configured by your ldap.conf.
allow_subtree_delete = False (Boolean) Delete subtrees using the subtree delete control. Only enable this option if your LDAP server supports subtree deletion.
auth_pool_connection_lifetime = 60 (Integer) End user auth connection lifetime in seconds.
auth_pool_size = 100 (Integer) End user auth connection pool size.
chase_referrals = None (Boolean) Override the system’s default referral chasing behavior for queries.
debug_level = None (Integer) Sets the LDAP debugging level for LDAP calls. A value of 0 means that debugging is not enabled. This value is a bitmask, consult your LDAP documentation for possible values.
dumb_member = cn=dumb,dc=nonexistent (String) DN of the “dummy member” to use when “use_dumb_member” is enabled.
group_additional_attribute_mapping = (List) Additional attribute mappings for groups. Attribute mapping format is <ldap_attr>:<user_attr>, where ldap_attr is the attribute in the LDAP entry and user_attr is the Identity API attribute.
group_allow_create = True (Boolean) DEPRECATED: Allow group creation in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release.
group_allow_delete = True (Boolean) DEPRECATED: Allow group deletion in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release.
group_allow_update = True (Boolean) DEPRECATED: Allow group update in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release.
group_attribute_ignore = (List) List of attributes stripped off the group on update.
group_desc_attribute = description (String) LDAP attribute mapped to group description.
group_filter = None (String) LDAP search filter for groups.
group_id_attribute = cn (String) LDAP attribute mapped to group id.
group_member_attribute = member (String) LDAP attribute mapped to show group membership.
group_members_are_ids = False (Boolean) If the members of the group objectclass are user IDs rather than DNs, set this to true. This is the case when using posixGroup as the group objectclass and OpenDirectory.
group_name_attribute = ou (String) LDAP attribute mapped to group name.
group_objectclass = groupOfNames (String) LDAP objectclass for groups.
group_tree_dn = None (String) Search base for groups. Defaults to the suffix value.
page_size = 0 (Integer) Maximum results per page; a value of zero (“0”) disables paging.
password = None (String) Password for the BindDN to query the LDAP server.
pool_connection_lifetime = 600 (Integer) Connection lifetime in seconds.
pool_connection_timeout = -1 (Integer) Connector timeout in seconds. Value -1 indicates indefinite wait for response.
pool_retry_delay = 0.1 (Floating point) Time span in seconds to wait between two reconnect trials.
pool_retry_max = 3 (Integer) Maximum count of reconnect trials.
pool_size = 10 (Integer) Connection pool size.
query_scope = one (String) The LDAP scope for queries: “one” represents oneLevel/singleLevel and “sub” represents subtree/wholeSubtree options.
suffix = cn=example,cn=com (String) LDAP server suffix.
tls_cacertdir = None (String) CA certificate directory path for communicating with LDAP servers.
tls_cacertfile = None (String) CA certificate file path for communicating with LDAP servers.
tls_req_cert = demand (String) Specifies what checks to perform on client certificates in an incoming TLS session.
url = ldap://localhost (String) URL(s) for connecting to the LDAP server. Multiple LDAP URLs may be specified as a comma separated string. The first URL to successfully bind is used for the connection.
use_auth_pool = True (Boolean) Enable LDAP connection pooling for end user authentication. If use_pool is disabled, then this setting is meaningless and is not used at all.
use_dumb_member = False (Boolean) If true, will add a dummy member to groups. This is required if the objectclass for groups requires the “member” attribute.
use_pool = True (Boolean) Enable LDAP connection pooling.
use_tls = False (Boolean) Enable TLS for communicating with LDAP servers.
user = None (String) User BindDN to query the LDAP server.
user_additional_attribute_mapping = (List) List of additional LDAP attributes used for mapping additional attribute mappings for users. Attribute mapping format is <ldap_attr>:<user_attr>, where ldap_attr is the attribute in the LDAP entry and user_attr is the Identity API attribute.
user_allow_create = True (Boolean) DEPRECATED: Allow user creation in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release.
user_allow_delete = True (Boolean) DEPRECATED: Allow user deletion in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release.
user_allow_update = True (Boolean) DEPRECATED: Allow user updates in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release.
user_attribute_ignore = default_project_id (List) List of attributes stripped off the user on update.
user_default_project_id_attribute = None (String) LDAP attribute mapped to default_project_id for users.
user_description_attribute = description (String) LDAP attribute mapped to user description.
user_enabled_attribute = enabled (String) LDAP attribute mapped to user enabled flag.
user_enabled_default = True (String) Default value to enable users. This should match an appropriate int value if the LDAP server uses non-boolean (bitmask) values to indicate if a user is enabled or disabled. If this is not set to “True” the typical value is “512”. This is typically used when “user_enabled_attribute = userAccountControl”.
user_enabled_emulation = False (Boolean) If true, Keystone uses an alternative method to determine if a user is enabled or not by checking if they are a member of the “user_enabled_emulation_dn” group.
user_enabled_emulation_dn = None (String) DN of the group entry to hold enabled users when using enabled emulation.
user_enabled_emulation_use_group_config = False (Boolean) Use the “group_member_attribute” and “group_objectclass” settings to determine membership in the emulated enabled group.
user_enabled_invert = False (Boolean) Invert the meaning of the boolean enabled values. Some LDAP servers use a boolean lock attribute where “true” means an account is disabled. Setting “user_enabled_invert = true” will allow these lock attributes to be used. This setting will have no effect if “user_enabled_mask” or “user_enabled_emulation” settings are in use.
user_enabled_mask = 0 (Integer) Bitmask integer to indicate the bit that the enabled value is stored in if the LDAP server represents “enabled” as a bit on an integer rather than a boolean. A value of “0” indicates the mask is not used. If this is not set to “0” the typical value is “2”. This is typically used when “user_enabled_attribute = userAccountControl”.
user_filter = None (String) LDAP search filter for users.
user_id_attribute = cn (String) LDAP attribute mapped to user id. WARNING: must not be a multivalued attribute.
user_mail_attribute = mail (String) LDAP attribute mapped to user email.
user_name_attribute = sn (String) LDAP attribute mapped to user name.
user_objectclass = inetOrgPerson (String) LDAP objectclass for users.
user_pass_attribute = userPassword (String) LDAP attribute mapped to password.
user_tree_dn = None (String) Search base for users. Defaults to the suffix value.
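Putting a few of these options together, a minimal read-only [ldap] sketch might look like the following; the server URL, bind credentials, and DNs are placeholder values for an assumed directory layout:

```ini
[ldap]
url = ldap://ldap.example.com
user = cn=admin,dc=example,dc=com
password = LDAP_PASSWORD
suffix = dc=example,dc=com

user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
user_id_attribute = cn
user_name_attribute = sn

group_tree_dn = ou=Groups,dc=example,dc=com
group_objectclass = groupOfNames

# Connection pooling as described above.
use_pool = true
pool_size = 10
pool_connection_lifetime = 600
```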
Description of mapping configuration options
Configuration option = Default value Description
[identity_mapping]  
backward_compatible_ids = True (Boolean) The format of user and group IDs changed in Juno for backends that do not generate UUIDs (e.g. LDAP), with keystone providing a hash mapping to the underlying attribute in LDAP. By default this mapping is disabled, which ensures that existing IDs will not change. Even when the mapping is enabled by using domain specific drivers, any users and groups from the default domain being handled by LDAP will still not be mapped to ensure their IDs remain backward compatible. Setting this value to False will enable the mapping for even the default LDAP driver. It is only safe to do this if you do not already have assignments for users and groups from the default LDAP domain, and it is acceptable for Keystone to provide the different IDs to clients than it did previously. Typically this means that the only time you can set this value to False is when configuring a fresh installation.
driver = sql (String) Entrypoint for the identity mapping backend driver in the keystone.identity.id_mapping namespace.
generator = sha256 (String) Entrypoint for the public ID generator for user and group entities in the keystone.identity.id_generator namespace. The Keystone identity mapper only supports generators that produce no more than 64 characters.
Description of memcache configuration options
Configuration option = Default value Description
[memcache]  
servers = localhost:11211 (List) Memcache servers in the format of “host:port”.
socket_timeout = 3 (Integer) Timeout in seconds for every call to a server. This is used by the key value store system (e.g. token pooled memcached persistence backend).
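Because servers is a list option, multiple memcached hosts are given as a comma-separated string of “host:port” entries (the addresses below are illustrative):

```ini
[memcache]
servers = 192.0.2.10:11211,192.0.2.11:11211
socket_timeout = 3
```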
Description of OAuth configuration options
Configuration option = Default value Description
[oauth1]  
access_token_duration = 86400 (Integer) Duration (in seconds) for the OAuth Access Token.
driver = sql (String) Entrypoint for the OAuth backend driver in the keystone.oauth1 namespace.
request_token_duration = 28800 (Integer) Duration (in seconds) for the OAuth Request Token.
Description of os_inherit configuration options
Configuration option = Default value Description
[os_inherit]  
enabled = True (Boolean) DEPRECATED: role-assignment inheritance to projects from owning domain or from projects higher in the hierarchy can be optionally disabled. In the future, this option will be removed and the hierarchy will be always enabled. The option to enable the OS-INHERIT extension has been deprecated in the M release and will be removed in the O release. The OS-INHERIT extension will be enabled by default.
Description of policy configuration options
Configuration option = Default value Description
[policy]  
driver = sql (String) Entrypoint for the policy backend driver in the keystone.policy namespace. Supplied drivers are rules and sql.
list_limit = None (Integer) Maximum number of entities that will be returned in a policy collection.
Description of revoke configuration options
Configuration option = Default value Description
[revoke]  
cache_time = 3600 (Integer) Time to cache the revocation list and the revocation events (in seconds). This has no effect unless global and token caching are enabled.
caching = True (Boolean) Toggle for revocation event caching. This has no effect unless global caching is enabled.
driver = sql (String) Entrypoint for an implementation of the backend for persisting revocation events in the keystone.revoke namespace. Supplied drivers are kvs and sql.
expiration_buffer = 1800 (Integer) This value (calculated in seconds) is added to token expiration before a revocation event may be removed from the backend.
Description of role configuration options
Configuration option = Default value Description
[role]  
cache_time = None (Integer) TTL (in seconds) to cache role data. This has no effect unless global caching is enabled.
caching = True (Boolean) Toggle for role caching. This has no effect unless global caching is enabled.
driver = None (String) Entrypoint for the role backend driver in the keystone.role namespace. Supplied drivers are ldap and sql.
list_limit = None (Integer) Maximum number of entities that will be returned in a role collection.
Description of SAML configuration options
Configuration option = Default value Description
[saml]  
assertion_expiration_time = 3600 (Integer) Default TTL, in seconds, for any generated SAML assertion created by Keystone.
certfile = /etc/keystone/ssl/certs/signing_cert.pem (String) Path of the certfile for SAML signing. For non-production environments, you may be interested in using keystone-manage pki_setup to generate self-signed certificates. Note, the path cannot contain a comma.
idp_contact_company = None (String) Company of contact person.
idp_contact_email = None (String) Email address of contact person.
idp_contact_name = None (String) Given name of contact person.
idp_contact_surname = None (String) Surname of contact person.
idp_contact_telephone = None (String) Telephone number of contact person.
idp_contact_type = other (String) The contact type describing the main point of contact for the identity provider.
idp_entity_id = None (String) Entity ID value for unique Identity Provider identification. Usually FQDN is set with a suffix. A value is required to generate IDP Metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
idp_lang = en (String) Language used by the organization.
idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml (String) Path to the Identity Provider Metadata file. This file should be generated with the keystone-manage saml_idp_metadata command.
idp_organization_display_name = None (String) Organization name to be displayed.
idp_organization_name = None (String) Organization name the installation belongs to.
idp_organization_url = None (String) URL of the organization.
idp_sso_endpoint = None (String) Identity Provider Single-Sign-On service value, required in the Identity Provider’s metadata. A value is required to generate IDP Metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
keyfile = /etc/keystone/ssl/private/signing_key.pem (String) Path of the keyfile for SAML signing. Note, the path cannot contain a comma.
relay_state_prefix = ss:mem: (String) The prefix to use for the RelayState SAML attribute, used when generating ECP wrapped assertions.
xmlsec1_binary = xmlsec1 (String) Binary to be called for XML signing. Install the appropriate package, specify absolute path or adjust your PATH environment variable if the binary cannot be found.
Description of security configuration options
Configuration option = Default value Description
[DEFAULT]  
crypt_strength = 10000 (Integer) The value passed as the keyword “rounds” to passlib’s encrypt method. This option represents a trade off between security and performance. Higher values lead to slower performance, but higher security. Changing this option will only affect newly created passwords as existing password hashes already have a fixed number of rounds applied, so it is safe to tune this option in a running cluster. For more information, see https://pythonhosted.org/passlib/password_hash_api.html#choosing-the-right-rounds-value
Description of token configuration options
Configuration option = Default value Description
[token]  
allow_rescope_scoped_token = True (Boolean) Allow rescoping of scoped tokens. Setting allow_rescope_scoped_token to false prevents a user from exchanging a scoped token for any other token.
bind = (List) External auth mechanisms that should add bind information to token, e.g., kerberos,x509.
cache_time = None (Integer) Time to cache tokens (in seconds). This has no effect unless global and token caching are enabled.
caching = True (Boolean) Toggle for token system caching. This has no effect unless global caching is enabled.
driver = sql (String) Entrypoint for the token persistence backend driver in the keystone.token.persistence namespace. Supplied drivers are kvs, memcache, memcache_pool, and sql.
enforce_token_bind = permissive (String) Enforcement policy on tokens presented to Keystone with bind information. One of disabled, permissive, strict, required or a specifically required bind mode, e.g., kerberos or x509 to require binding to that authentication.
expiration = 3600 (Integer) Amount of time a token should remain valid (in seconds).
hash_algorithm = md5 (String) DEPRECATED: The hash algorithm to use for PKI tokens. This can be set to any algorithm that hashlib supports. WARNING: Before changing this value, the auth_token middleware must be configured with the hash_algorithms, otherwise token revocation will not be processed correctly. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended.
infer_roles = True (Boolean) Add roles to token that are not explicitly added, but that are linked implicitly to other roles.
provider = uuid (String) Controls the token construction, validation, and revocation operations. Entrypoint in the keystone.token.provider namespace. Core providers are [fernet|pkiz|pki|uuid].
revoke_by_id = True (Boolean) Revoke token by token identifier. Setting revoke_by_id to true enables various forms of enumerating tokens, e.g. list tokens for user. These enumerations are processed to determine the list of tokens to revoke. Only disable if you are switching to using the Revoke extension with a backend other than KVS, which stores events in memory.
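For example, switching the token provider from the default uuid to fernet only requires changing provider; a Fernet key repository, configured in [fernet_tokens], must already exist:

```ini
[token]
# One of fernet, pkiz, pki, uuid; the PKI providers are deprecated.
provider = fernet
# Tokens remain valid for one hour.
expiration = 3600
caching = true
```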
Description of Tokenless Authorization configuration options
Configuration option = Default value Description
[tokenless_auth]  
issuer_attribute = SSL_CLIENT_I_DN (String) The issuer attribute that is served as an IdP ID for the X.509 tokenless authorization, used along with the protocol to look up its corresponding mapping. It is the environment variable in the WSGI environment that references the issuer of the client certificate.
protocol = x509 (String) The protocol name for the X.509 tokenless authorization, used together with the issuer_attribute option to look up its corresponding mapping.
trusted_issuer = [] (Multi-valued) The list of trusted issuers to further filter the certificates that are allowed to participate in the X.509 tokenless authorization. If the option is absent then no certificates will be allowed. The naming format for the attributes of a Distinguished Name(DN) must be separated by a comma and contain no spaces. This configuration option may be repeated for multiple values. For example: trusted_issuer=CN=john,OU=keystone,O=openstack trusted_issuer=CN=mary,OU=eng,O=abc
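As with other multi-valued options, trusted_issuer is repeated once per issuer; the DN values below are the illustrative ones from the description:

```ini
[tokenless_auth]
# DN attributes are comma separated and contain no spaces.
trusted_issuer = CN=john,OU=keystone,O=openstack
trusted_issuer = CN=mary,OU=eng,O=abc
protocol = x509
issuer_attribute = SSL_CLIENT_I_DN
```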
Description of trust configuration options
Configuration option = Default value Description
[trust]  
allow_redelegation = False (Boolean) Enable redelegation feature.
driver = sql (String) Entrypoint for the trust backend driver in the keystone.trust namespace.
enabled = True (Boolean) Delegation and impersonation features can be optionally disabled.
max_redelegation_count = 3 (Integer) Maximum depth of trust redelegation.
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations.
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Domain-specific Identity drivers

The Identity service supports domain-specific Identity drivers installed on an SQL or LDAP back end, and supports domain-specific Identity configuration options, which are stored in domain-specific configuration files. See the Admin guide Identity Management Chapter for more information.
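For example, domain-specific files are enabled in keystone.conf, and one file is then created per domain; keystone looks for files named keystone.DOMAIN_NAME.conf in the configured directory (example_domain below is a placeholder domain name):

```ini
# /etc/keystone/keystone.conf
[identity]
domain_specific_drivers_enabled = true
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.example_domain.conf
# Only the options that differ for this domain are specified here.
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
```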

Identity service sample configuration files

You can find the files described in this section in the /etc/keystone directory.

keystone.conf

Use the keystone.conf file to configure most Identity service options:

[DEFAULT]

#
# From keystone
#

# Using this feature is *NOT* recommended. Instead, use the `keystone-manage
# bootstrap` command. The value of this option is treated as a "shared secret"
# that can be used to bootstrap Keystone through the API. This "token" does not
# represent a user (it has no identity), and carries no explicit authorization
# (it effectively bypasses most authorization checks). If set to `None`, the
# value is ignored and the `admin_token` middleware is effectively disabled.
# However, to completely disable `admin_token` in production (highly
# recommended, as it presents a security risk), remove
# `AdminTokenAuthMiddleware` (the `admin_token_auth` filter) from your paste
# application pipelines (for example, in `keystone-paste.ini`). (string value)
#admin_token = <None>

# The base public endpoint URL for Keystone that is advertised to clients
# (NOTE: this does NOT affect how Keystone listens for connections). Defaults
# to the base host URL of the request. For example, if keystone receives a
# request to `http://server:5000/v3/users`, then this option will be
# automatically treated as `http://server:5000`. You should only need to set
# this option if either the value of the base URL contains a path that keystone does
# not automatically infer (`/prefix/v3`), or if the endpoint should be found on
# a different host. (string value)
#public_endpoint = <None>

# The base admin endpoint URL for Keystone that is advertised to clients (NOTE:
# this does NOT affect how Keystone listens for connections). Defaults to the
# base host URL of the request. For example, if keystone receives a request to
# `http://server:35357/v3/users`, then this option will be automatically
# treated as `http://server:35357`. You should only need to set this option if
# either the value of the base URL contains a path that keystone does not
# automatically infer (`/prefix/v3`), or if the endpoint should be found on a
# different host. (string value)
#admin_endpoint = <None>

# Maximum depth of the project hierarchy, excluding the project acting as a
# domain at the top of the hierarchy. WARNING: Setting it to a large value may
# adversely impact performance. (integer value)
#max_project_tree_depth = 5

# Limit the sizes of user & project ID/names. (integer value)
#max_param_size = 64

# Similar to `[DEFAULT] max_param_size`, but provides an exception for token
# values. With PKI / PKIZ tokens, this needs to be set close to 8192 (any
# higher, and other HTTP implementations may break), depending on the size of
# your service catalog and other factors. With Fernet tokens, this can be set
# as low as 255. With UUID tokens, this should be set to 32. (integer value)
#max_token_size = 8192

# Similar to the `[DEFAULT] member_role_name` option, this represents the
# default role ID used to associate users with their default projects in the v2
# API. This will be used as the explicit role where one is not specified by the
# v2 API. You do not need to set this value unless you want keystone to use an
# existing role with a different ID, other than the arbitrarily defined
# `_member_` role (in which case, you should set `[DEFAULT] member_role_name`
# as well). (string value)
#member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab

# This is the role name used in combination with the `[DEFAULT] member_role_id`
# option; see that option for more detail. You do not need to set this option
# unless you want keystone to use an existing role (in which case, you should
# set `[DEFAULT] member_role_id` as well). (string value)
#member_role_name = _member_

# The value passed as the keyword "rounds" to passlib's encrypt method. This
# option represents a trade off between security and performance. Higher values
# lead to slower performance, but higher security. Changing this option will
# only affect newly created passwords as existing password hashes already have
# a fixed number of rounds applied, so it is safe to tune this option in a
# running cluster. For more information, see
# https://pythonhosted.org/passlib/password_hash_api.html#choosing-the-right-
# rounds-value (integer value)
# Minimum value: 1000
# Maximum value: 100000
#crypt_strength = 10000

# The maximum number of entities that will be returned in a collection. This
# global limit may be then overridden for a specific driver, by specifying a
# list_limit in the appropriate section (for example, `[assignment]`). No limit
# is set by default. In larger deployments, it is recommended that you set this
# to a reasonable number to prevent operations like listing all users and
# projects from placing an unnecessary load on the system. (integer value)
#list_limit = <None>

# DEPRECATED: Set this to false if you want to enable the ability for user,
# group and project entities to be moved between domains by updating their
# `domain_id` attribute. Allowing such movement is not recommended if the scope
# of a domain admin is being restricted by use of an appropriate policy file
# (see `etc/policy.v3cloudsample.json` as an example). This feature is
# deprecated and will be removed in a future release, in favor of strictly
# immutable domain IDs. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: The option to set domain_id_immutable to false has been deprecated in
# the M release and will be removed in the O release.
#domain_id_immutable = true

# If set to true, strict password length checking is performed for password
# manipulation. If a password exceeds the maximum length, the operation will
# fail with an HTTP 403 Forbidden error. If set to false, passwords are
# automatically truncated to the maximum length. (boolean value)
#strict_password_check = false

# DEPRECATED: The HTTP header used to determine the scheme for the original
# request, even if it was removed by an SSL terminating proxy. (string value)
# This option is deprecated for removal since N.
# Its value may be silently ignored in the future.
# Reason: This option has been deprecated in the N release and will be removed
# in the P release. Use oslo.middleware.http_proxy_to_wsgi configuration
# instead.
#secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO

# If set to true, then the server will return information in HTTP responses
# that may allow an unauthenticated or authenticated user to get more
# information than normal, such as additional details about why authentication
# failed. This may be useful for debugging but is insecure. (boolean value)
#insecure_debug = false

# Default `publisher_id` for outgoing notifications. If left undefined,
# Keystone will default to using the server's host name. (string value)
#default_publisher_id = <None>

# Define the notification format for identity service events. A `basic`
# notification only has information about the resource being operated on. A
# `cadf` notification has the same information, as well as information about
# the initiator of the event. The `cadf` option is entirely backwards
# compatible with the `basic` option, but is fully CADF-compliant, and is
# recommended for auditing use cases. (string value)
# Allowed values: basic, cadf
#notification_format = basic

# If left undefined, keystone will emit notifications for all types of events.
# You can reduce the number of notifications keystone emits by using this
# option to enumerate notification topics that should be suppressed. Values are
# expected to be in the form `identity.<resource_type>.<operation>`. This field
# can be set multiple times in order to opt-out of multiple notification
# topics. For example: notification_opt_out=identity.user.create
# notification_opt_out=identity.authenticate.success (multi valued)
#notification_opt_out =

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler will open a new log file at the specified
# path instantaneously. It makes sense only if the log_file option is specified
# and a Linux platform is used. This option is ignored if log_config_append is
# set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
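
# As a hypothetical example, to silence SQLAlchemy logging further while
# raising keystonemiddleware to INFO, the list could be overridden as:
#
#   default_log_levels = sqlalchemy=ERROR,keystonemiddleware=INFO
#
# Note that setting this option replaces the entire default list, so any
# package omitted here falls back to the global logging level.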

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (a value < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue and
# messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false

# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
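
# The URL follows the usual oslo.messaging form
# transport://user:pass@host1:port[,hostN:portN]/virtual_host. As an
# illustrative example (hostname and credentials are placeholders):
#
#   transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/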

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = keystone


[assignment]

#
# From keystone
#

# Entry point for the assignment backend driver (where role assignments are
# stored) in the `keystone.assignment` namespace. Only a SQL driver is supplied
# by keystone itself. If an assignment driver is not specified, the identity
# driver will choose the assignment driver based on the deprecated
# `[identity]/driver` option (the behavior will be removed in the "O" release).
# Unless you are writing proprietary drivers for keystone, you do not need to
# set this option. (string value)
#driver = <None>

# A list of role names which are prohibited from being an implied role. (list
# value)
#prohibited_implied_role = admin


[auth]

#
# From keystone
#

# Allowed authentication methods. (list value)
#methods = external,password,token,oauth1

# Entry point for the password auth plugin module in the
# `keystone.auth.password` namespace. You do not need to set this unless you
# are overriding keystone's own password authentication plugin. (string value)
#password = <None>

# Entry point for the token auth plugin module in the `keystone.auth.token`
# namespace. You do not need to set this unless you are overriding keystone's
# own token authentication plugin. (string value)
#token = <None>

# Entry point for the external (`REMOTE_USER`) auth plugin module in the
# `keystone.auth.external` namespace. Supplied drivers are `DefaultDomain` and
# `Domain`. The default driver is `DefaultDomain`, which assumes that all users
# identified by the username specified to keystone in the `REMOTE_USER`
# variable exist within the context of the default domain. The `Domain` option
# expects an additional environment variable to be presented to keystone,
# `REMOTE_DOMAIN`, containing the domain name of the `REMOTE_USER` (if
# `REMOTE_DOMAIN` is not set, then the default domain will be used instead).
# You do not need to set this unless you are taking advantage of "external
# authentication", where the application server (such as Apache) is handling
# authentication instead of keystone. (string value)
#external = <None>

# Entry point for the OAuth 1.0a auth plugin module in the
# `keystone.auth.oauth1` namespace. You do not need to set this unless you are
# overriding keystone's own `oauth1` authentication plugin. (string value)
#oauth1 = <None>


[cache]

#
# From oslo.cache
#

# Prefix for building the configuration dictionary for the cache region. This
# should not need to be changed unless there is another dogpile.cache region
# with the same configuration name. (string value)
#config_prefix = cache.oslo

# Default TTL, in seconds, for any cached item in the dogpile.cache region.
# This applies to any cached method that doesn't have an explicit cache
# expiration time defined for it. (integer value)
#expiration_time = 600

# Dogpile.cache backend module. It is recommended that Memcache or Redis
# (dogpile.cache.redis) be used in production deployments. For eventlet-based
# or highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool)
# is recommended. For low thread servers, dogpile.cache.memcached is
# recommended. Test environments with a single instance of the server can use
# the dogpile.cache.memory backend. (string value)
#backend = dogpile.cache.null
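
# As a sketch of the recommended production setup described above (the server
# name is a placeholder), memcached-backed caching could be enabled with:
#
#   [cache]
#   enabled = true
#   backend = oslo_cache.memcache_pool
#   memcache_servers = controller:11211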

# Arguments supplied to the backend module. Specify this option once per
# argument to be passed to the dogpile.cache backend. Example format:
# "<argname>:<value>". (multi valued)
#backend_argument =

# Proxy classes to import that will affect the way the dogpile.cache backend
# functions. See the dogpile.cache documentation on changing-backend-behavior.
# (list value)
#proxies =

# Global toggle for caching. (boolean value)
#enabled = true

# Extra debugging from the cache backend (cache keys, get/set/delete/etc
# calls). This is only really useful if you need to see the specific cache-
# backend get/set/delete calls with the keys/values. Typically this should be
# left set to false. (boolean value)
#debug_cache_backend = false

# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (list value)
#memcache_servers = localhost:11211

# Number of seconds memcached server is considered dead before it is tried
# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
# (integer value)
#memcache_dead_retry = 300

# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (integer value)
#memcache_socket_timeout = 3

# Max total number of open connections to every memcached server.
# (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_maxsize = 10

# Number of seconds a connection to memcached is held unused in the pool before
# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_unused_timeout = 60

# Number of seconds that an operation will wait to get a memcache client
# connection. (integer value)
#memcache_pool_connection_get_timeout = 10


[catalog]

#
# From keystone
#

# Absolute path to the file used for the templated catalog backend. This option
# is only used if the `[catalog] driver` is set to `templated`. (string value)
#template_file = default_catalog.templates

# Entry point for the catalog driver in the `keystone.catalog` namespace.
# Keystone provides a `sql` option (which supports basic CRUD operations
# through SQL), a `templated` option (which loads the catalog from a templated
# catalog file on disk), and an `endpoint_filter.sql` option (which supports
# arbitrary service catalogs per project). (string value)
#driver = sql
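
# For example, to load a static catalog from a template file instead of SQL
# (the path is illustrative):
#
#   [catalog]
#   driver = templated
#   template_file = /etc/keystone/default_catalog.templates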

# Toggle for catalog caching. This has no effect unless global caching is
# enabled. In a typical deployment, there is no reason to disable this.
# (boolean value)
#caching = true

# Time to cache catalog data (in seconds). This has no effect unless global and
# catalog caching are both enabled. Catalog data (services, endpoints, etc.)
# typically does not change frequently, and so a longer duration than the
# global default may be desirable. (integer value)
#cache_time = <None>

# Maximum number of entities that will be returned in a catalog collection.
# There is typically no reason to set this, as it would be unusual for a
# deployment to have enough services or endpoints to exceed a reasonable limit.
# (integer value)
#list_limit = <None>


[cors]

#
# From oslo.middleware
#

# Indicate whether this resource may be shared with the domain received in the
# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-Domain-Id,X-Domain-Name
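
# For instance, to allow a dashboard served from a single origin (the
# hostname is a placeholder) to call this API with credentials:
#
#   [cors]
#   allowed_origin = https://horizon.example.com
#   allow_credentials = true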


[cors.subdomain]

#
# From oslo.middleware
#

# Indicate whether this resource may be shared with the domain received in the
# requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-Domain-Id,X-Domain-Name


[credential]

#
# From keystone
#

# Entry point for the credential backend driver in the `keystone.credential`
# namespace. Keystone only provides a `sql` driver, so there's no reason to
# change this unless you are providing a custom entry point. (string value)
#driver = sql

# Entry point for credential encryption and decryption operations in the
# `keystone.credential.provider` namespace. Keystone only provides a `fernet`
# driver, so there's no reason to change this unless you are providing a custom
# entry point to encrypt and decrypt credentials. (string value)
#provider = fernet

# Directory containing Fernet keys used to encrypt and decrypt credentials
# stored in the credential backend. Fernet keys used to encrypt credentials
# have no relationship to Fernet keys used to encrypt Fernet tokens. Both sets
# of keys should be managed separately and require different rotation policies.
# Do not share this repository with the repository used to manage keys for
# Fernet tokens. (string value)
#key_repository = /etc/keystone/credential-keys/


[database]

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
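
# A typical MySQL connection string looks like the following (host, database
# name, and password are placeholders):
#
#   connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone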

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20


[domain_config]

#
# From keystone
#

# Entry point for the domain-specific configuration driver in the
# `keystone.resource.domain_config` namespace. Only a `sql` option is provided
# by keystone, so there is no reason to set this unless you are providing a
# custom entry point. (string value)
#driver = sql

# Toggle for caching of the domain-specific configuration backend. This has no
# effect unless global caching is enabled. There is normally no reason to
# disable this. (boolean value)
#caching = true

# Time-to-live (TTL, in seconds) to cache domain-specific configuration data.
# This has no effect unless `[domain_config] caching` is enabled. (integer
# value)
#cache_time = 300


[endpoint_filter]

#
# From keystone
#

# Entry point for the endpoint filter driver in the `keystone.endpoint_filter`
# namespace. Only a `sql` option is provided by keystone, so there is no reason
# to set this unless you are providing a custom entry point. (string value)
#driver = sql

# This controls keystone's behavior if the configured endpoint filters do not
# result in any endpoints for a user + project pair (and therefore a
# potentially empty service catalog). If set to true, keystone will return the
# entire service catalog. If set to false, keystone will return an empty
# service catalog. (boolean value)
#return_all_endpoints_if_no_filter = true


[endpoint_policy]

#
# From keystone
#

# DEPRECATED: Enable endpoint-policy functionality, which allows policies to be
# associated with either specific endpoints, or endpoints of a given service
# type. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: The option to enable the OS-ENDPOINT-POLICY API extension has been
# deprecated in the M release and will be removed in the O release. The OS-
# ENDPOINT-POLICY API extension will be enabled by default.
#enabled = true

# Entry point for the endpoint policy driver in the `keystone.endpoint_policy`
# namespace. Only a `sql` driver is provided by keystone, so there is no reason
# to set this unless you are providing a custom entry point. (string value)
#driver = sql


[eventlet_server]

#
# From keystone
#

# DEPRECATED: The IP address of the network interface for the public service to
# listen on. (string value)
# Deprecated group/name - [DEFAULT]/bind_host
# Deprecated group/name - [DEFAULT]/public_bind_host
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#public_bind_host = 0.0.0.0

# DEPRECATED: The port number for the public service to listen on. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/public_port
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#public_port = 5000

# DEPRECATED: The IP address of the network interface for the admin service to
# listen on. (string value)
# Deprecated group/name - [DEFAULT]/bind_host
# Deprecated group/name - [DEFAULT]/admin_bind_host
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#admin_bind_host = 0.0.0.0

# DEPRECATED: The port number for the admin service to listen on. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/admin_port
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#admin_port = 35357


[federation]

#
# From keystone
#

# Entry point for the federation backend driver in the `keystone.federation`
# namespace. Keystone only provides a `sql` driver, so there is no reason to
# set this option unless you are providing a custom entry point. (string value)
#driver = sql

# Prefix to use when filtering environment variable names for federated
# assertions. Matched variables are passed into the federated mapping engine.
# (string value)
#assertion_prefix =

# Value to be used to obtain the entity ID of the Identity Provider from the
# environment. For `mod_shib`, this would be `Shib-Identity-Provider`. For
# `mod_auth_openidc`, this could be `HTTP_OIDC_ISS`. For `mod_auth_mellon`,
# this could be `MELLON_IDP`. (string value)
#remote_id_attribute = <None>

# An arbitrary domain name that is reserved to allow federated ephemeral users
# to have a domain concept. Note that an admin will not be able to create a
# domain with this name or update an existing domain to this name. You are not
# advised to change this value unless you really have to. (string value)
#federated_domain_name = Federated

# A list of trusted dashboard hosts. Before accepting a Single Sign-On request
# to return a token, the origin host must be a member of this list. This
# configuration option may be repeated for multiple values. You must set this
# in order to use web-based SSO flows. For example:
# trusted_dashboard=https://acme.example.com/auth/websso
# trusted_dashboard=https://beta.example.com/auth/websso (multi valued)
#trusted_dashboard =

# Absolute path to an HTML file used as a Single Sign-On callback handler. This
# page is expected to redirect the user from keystone back to a trusted
# dashboard host, by form encoding a token in a POST request. Keystone's
# default value should be sufficient for most deployments. (string value)
#sso_callback_template = /etc/keystone/sso_callback_template.html

# Toggle for federation caching. This has no effect unless global caching is
# enabled. There is typically no reason to disable this. (boolean value)
#caching = true


[fernet_tokens]

#
# From keystone
#

# Directory containing Fernet token keys. This directory must exist before
# using `keystone-manage fernet_setup` for the first time, must be writable by
# the user running `keystone-manage fernet_setup` or `keystone-manage
# fernet_rotate`, and of course must be readable by keystone's server process.
# The repository may contain keys in one of three states: a single staged key
# (always index 0) used for token validation, a single primary key (always the
# highest index) used for token creation and validation, and any number of
# secondary keys (all other index values) used for token validation. With
# multiple keystone nodes, each node must share the same key repository
# contents, with the exception of the staged key (index 0). It is safe to run
# `keystone-manage fernet_rotate` once on any one node to promote a staged key
# (index 0) to be the new primary (incremented from the previous highest
# index), and produce a new staged key (a new key with index 0); the resulting
# repository can then be atomically replicated to other nodes without any risk
# of race conditions (for example, it is safe to run `keystone-manage
# fernet_rotate` on host A, wait any amount of time, create a tarball of the
# directory on host A, unpack it on host B to a temporary location, and
# atomically move (`mv`) the directory into place on host B). Running
# `keystone-manage fernet_rotate` *twice* on a key repository without syncing
# other nodes will result in tokens that cannot be validated by all nodes.
# (string value)
#key_repository = /etc/keystone/fernet-keys/
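
# The multi-node replication described above can be sketched as shell
# commands (hypothetical hostnames and paths; a sketch of the workflow, not
# a tested procedure):
#   host-a$ keystone-manage fernet_rotate
#   host-a$ tar -C /etc/keystone -cf fernet-keys.tar fernet-keys
#   # ...copy fernet-keys.tar to host B, unpack it to a temporary location,
#   # then swap it into place (move the old directory aside so `mv` succeeds):
#   host-b$ tar -C /tmp -xf fernet-keys.tar
#   host-b$ mv /etc/keystone/fernet-keys /etc/keystone/fernet-keys.old
#   host-b$ mv /tmp/fernet-keys /etc/keystone/fernet-keys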

# This controls how many keys are held in rotation by `keystone-manage
# fernet_rotate` before they are discarded. The default value of 3 means that
# keystone will maintain one staged key (always index 0), one primary key (the
# highest numerical index), and one secondary key (every other index).
# Increasing this value means that additional secondary keys will be kept in
# the rotation. (integer value)
# Minimum value: 1
#max_active_keys = 3
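
# For illustration (hypothetical key indexes): with `max_active_keys = 3`, a
# repository might hold keys 0 (staged), 1 (secondary), and 2 (primary). The
# next `keystone-manage fernet_rotate` promotes key 0 to become key 3 (the
# new primary), creates a new staged key 0, and discards key 1 to stay within
# the limit.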


[identity]

#
# From keystone
#

# This references the domain to use for all Identity API v2 requests (which are
# not aware of domains). A domain with this ID can optionally be created for
# you by `keystone-manage bootstrap`. The domain referenced by this ID cannot
# be deleted on the v3 API, to prevent accidentally breaking the v2 API. There
# is nothing special about this domain, other than the fact that it must exist
# in order to maintain support for your v2 clients. There is typically no
# reason to change this value. (string value)
#default_domain_id = default

# A subset (or all) of domains can have their own identity driver, each with
# their own partial configuration options, stored in either the resource
# backend or in a file in a domain configuration directory (depending on the
# setting of `[identity] domain_configurations_from_database`). Only values
# specific to the domain need to be specified in this manner. This feature is
# disabled by default, but may be enabled by default in a future release; set
# to true to enable. (boolean value)
#domain_specific_drivers_enabled = false

# By default, domain-specific configuration data is read from files in the
# directory identified by `[identity] domain_config_dir`. Enabling this
# configuration option allows you to instead manage domain-specific
# configurations through the API, which are then persisted in the backend
# (typically, a SQL database), rather than using configuration files on disk.
# (boolean value)
#domain_configurations_from_database = false

# Absolute path where keystone should locate domain-specific `[identity]`
# configuration files. This option has no effect unless `[identity]
# domain_specific_drivers_enabled` is set to true. There is typically no reason
# to change this value. (string value)
#domain_config_dir = /etc/keystone/domains
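
# For example (a sketch, assuming the default directory and a hypothetical
# domain named `acme`), the domain-specific options are placed in a file
# named `keystone.<domain_name>.conf`, such as
# `/etc/keystone/domains/keystone.acme.conf`, containing only the overrides
# for that domain:
#   [identity]
#   driver = ldap
#   [ldap]
#   url = ldap://ldap.acme.example.com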

# Entry point for the identity backend driver in the `keystone.identity`
# namespace. Keystone provides a `sql` and `ldap` driver. This option is also
# used as the default driver selection (along with the other configuration
# variables in this section) in the event that `[identity]
# domain_specific_drivers_enabled` is enabled, but no applicable domain-
# specific configuration is defined for the domain in question. Unless your
# deployment primarily relies on `ldap` AND is not using domain-specific
# configuration, you should typically leave this set to `sql`. (string value)
#driver = sql

# Toggle for identity caching. This has no effect unless global caching is
# enabled. There is typically no reason to disable this. (boolean value)
#caching = true

# Time to cache identity data (in seconds). This has no effect unless global
# and identity caching are enabled. (integer value)
#cache_time = 600

# Maximum allowed length for user passwords. Decrease this value to improve
# performance. Changing this value does not affect existing passwords. (integer
# value)
# Maximum value: 4096
#max_password_length = 4096

# Maximum number of entities that will be returned in an identity collection.
# (integer value)
#list_limit = <None>


[identity_mapping]

#
# From keystone
#

# Entry point for the identity mapping backend driver in the
# `keystone.identity.id_mapping` namespace. Keystone only provides a `sql`
# driver, so there is no reason to change this unless you are providing a
# custom entry point. (string value)
#driver = sql

# Entry point for the public ID generator for user and group entities in the
# `keystone.identity.id_generator` namespace. The Keystone identity mapper only
# supports generators that produce 64 bytes or less. Keystone only provides a
# `sha256` entry point, so there is no reason to change this value unless
# you're providing a custom entry point. (string value)
#generator = sha256

# The format of user and group IDs changed in Juno for backends that do not
# generate UUIDs (for example, LDAP), with keystone providing a hash mapping to
# the underlying attribute in LDAP. By default this mapping is disabled, which
# ensures that existing IDs will not change. Even when the mapping is enabled
# by using domain-specific drivers (`[identity]
# domain_specific_drivers_enabled`), any users and groups from the default
# domain being handled by LDAP will still not be mapped to ensure their IDs
# remain backward compatible. Setting this value to false will enable the new
# mapping for all backends, including the default LDAP driver. It is only
# guaranteed to be safe to enable the new mapping if you do not already have
# assignments for users and groups from the default LDAP domain, and you
# consider it acceptable for keystone to provide different IDs to clients
# than it did previously (existing IDs in the API will suddenly change).
# Typically, the only time you can set this value to false is when configuring
# a fresh installation, in which case false is the recommended value. (boolean
# value)
#backward_compatible_ids = true


[kvs]

#
# From keystone
#

# Extra `dogpile.cache` backend modules to register with the `dogpile.cache`
# library. It is not necessary to set this value unless you are providing a
# custom KVS backend beyond what `dogpile.cache` already supports. (list value)
#backends =

# Prefix for building the configuration dictionary for the KVS region. This
# should not need to be changed unless there is another `dogpile.cache` region
# with the same configuration name. (string value)
#config_prefix = keystone.kvs

# Set to false to disable using a key-mangling function, which ensures fixed-
# length keys are used in the KVS store. This is configurable for debugging
# purposes, and it is therefore highly recommended to always leave this set to
# true. (boolean value)
#enable_key_mangler = true

# Number of seconds after acquiring a distributed lock that the backend should
# consider the lock to be expired. This option should be tuned relative to the
# longest amount of time that it takes to perform a successful operation. If
# this value is set too low, then a cluster will end up performing work
# redundantly. If this value is set too high, then a cluster will not be able
# to efficiently recover and retry after a failed operation. A non-zero value
# is recommended if the backend supports lock timeouts, as zero prevents locks
# from expiring altogether. (integer value)
# Minimum value: 0
#default_lock_timeout = 5


[ldap]

#
# From keystone
#

# URL(s) for connecting to the LDAP server. Multiple LDAP URLs may be specified
# as a comma separated string. The first URL to successfully bind is used for
# the connection. (string value)
#url = ldap://localhost
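
# For example, to fall back to a secondary directory server when the first
# cannot be reached (hypothetical hostnames):
# url = ldap://ldap1.example.com,ldap://ldap2.example.com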

# The user name of the administrator bind DN to use when querying the LDAP
# server, if your LDAP server requires it. (string value)
#user = <None>

# The password of the administrator bind DN to use when querying the LDAP
# server, if your LDAP server requires it. (string value)
#password = <None>

# The default LDAP server suffix to use, if a DN is not defined via either
# `[ldap] user_tree_dn` or `[ldap] group_tree_dn`. (string value)
#suffix = cn=example,cn=com

# DEPRECATED: If true, keystone will add a dummy member based on the `[ldap]
# dumb_member` option when creating new groups. This is required if the object
# class for groups requires the `member` attribute. This option is only used
# for write operations. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#use_dumb_member = false

# DEPRECATED: DN of the "dummy member" to use when `[ldap] use_dumb_member` is
# enabled. This option is only used for write operations. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#dumb_member = cn=dumb,dc=nonexistent

# DEPRECATED: Delete subtrees using the subtree delete control. Only enable
# this option if your LDAP server supports subtree deletion. This option is
# only used for write operations. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#allow_subtree_delete = false

# The search scope which defines how deep to search within the search base. A
# value of `one` (representing `oneLevel` or `singleLevel`) indicates a search
# of objects immediately below the base object, but does not include the
# base object itself. A value of `sub` (representing `subtree` or
# `wholeSubtree`) indicates a search of both the base object itself and the
# entire subtree below it. (string value)
# Allowed values: one, sub
#query_scope = one

# Defines the maximum number of results per page that keystone should request
# from the LDAP server when listing objects. A value of zero (`0`) disables
# paging. (integer value)
# Minimum value: 0
#page_size = 0

# The LDAP dereferencing option to use for queries involving aliases. A value
# of `default` falls back to using default dereferencing behavior configured by
# your `ldap.conf`. A value of `never` prevents aliases from being dereferenced
# at all. A value of `searching` dereferences aliases only after name
# resolution. A value of `finding` dereferences aliases only during name
# resolution. A value of `always` dereferences aliases in all cases. (string
# value)
# Allowed values: never, searching, always, finding, default
#alias_dereferencing = default

# Sets the LDAP debugging level for LDAP calls. A value of 0 means that
# debugging is not enabled. This value is a bitmask, consult your LDAP
# documentation for possible values. (integer value)
# Minimum value: -1
#debug_level = <None>

# Sets keystone's referral chasing behavior across directory partitions. If
# left unset, the system's default behavior will be used. (boolean value)
#chase_referrals = <None>

# The search base to use for users. Defaults to the `[ldap] suffix` value.
# (string value)
#user_tree_dn = <None>

# The LDAP search filter to use for users. (string value)
#user_filter = <None>

# The LDAP object class to use for users. (string value)
#user_objectclass = inetOrgPerson

# The LDAP attribute mapped to user IDs in keystone. This must NOT be a
# multivalued attribute. User IDs are expected to be globally unique across
# keystone domains and URL-safe. (string value)
#user_id_attribute = cn

# The LDAP attribute mapped to user names in keystone. User names are expected
# to be unique only within a keystone domain and are not expected to be URL-
# safe. (string value)
#user_name_attribute = sn

# The LDAP attribute mapped to user descriptions in keystone. (string value)
#user_description_attribute = description

# The LDAP attribute mapped to user emails in keystone. (string value)
#user_mail_attribute = mail

# The LDAP attribute mapped to user passwords in keystone. (string value)
#user_pass_attribute = userPassword

# The LDAP attribute mapped to the user enabled attribute in keystone. If
# setting this option to `userAccountControl`, then you may be interested in
# setting `[ldap] user_enabled_mask` and `[ldap] user_enabled_default` as well.
# (string value)
#user_enabled_attribute = enabled

# Logically negate the boolean value of the enabled attribute obtained from the
# LDAP server. Some LDAP servers use a boolean lock attribute where "true"
# means an account is disabled. Setting `[ldap] user_enabled_invert = true`
# will allow these lock attributes to be used. This option will have no effect
# if either the `[ldap] user_enabled_mask` or `[ldap] user_enabled_emulation`
# options are in use. (boolean value)
#user_enabled_invert = false

# Bitmask integer to select which bit indicates the enabled value if the LDAP
# server represents "enabled" as a bit on an integer rather than as a discrete
# boolean. A value of `0` indicates that the mask is not used. If this is not
# set to `0`, the typical value is `2`. This is typically used when `[ldap]
# user_enabled_attribute = userAccountControl`. Setting this option causes
# keystone to ignore the value of `[ldap] user_enabled_invert`. (integer value)
# Minimum value: 0
#user_enabled_mask = 0

# The default value to enable users. This should match an appropriate integer
# value if the LDAP server uses non-boolean (bitmask) values to indicate if a
# user is enabled or disabled. If this is not set to `True`, then the typical
# value is `512`. This is typically used when `[ldap] user_enabled_attribute =
# userAccountControl`. (string value)
#user_enabled_default = True
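
# For example, a typical Active Directory deployment would combine the three
# options above (values taken from the descriptions; verify against your
# directory):
# user_enabled_attribute = userAccountControl
# user_enabled_mask = 2
# user_enabled_default = 512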

# DEPRECATED: List of user attributes to ignore on create and update. This is
# only used for write operations. (list value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_attribute_ignore = default_project_id

# The LDAP attribute mapped to a user's default_project_id in keystone. This is
# most commonly used when keystone has write access to LDAP. (string value)
#user_default_project_id_attribute = <None>

# DEPRECATED: If enabled, keystone is allowed to create users in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_allow_create = true

# DEPRECATED: If enabled, keystone is allowed to update users in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_allow_update = true

# DEPRECATED: If enabled, keystone is allowed to delete users in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_allow_delete = true

# If enabled, keystone uses an alternative method to determine if a user is
# enabled or not by checking if they are a member of the group defined by the
# `[ldap] user_enabled_emulation_dn` option. Enabling this option causes
# keystone to ignore the value of `[ldap] user_enabled_invert`. (boolean value)
#user_enabled_emulation = false

# DN of the group entry to hold enabled users when using enabled emulation.
# Setting this option has no effect unless `[ldap] user_enabled_emulation` is
# also enabled. (string value)
#user_enabled_emulation_dn = <None>

# Use the `[ldap] group_member_attribute` and `[ldap] group_objectclass`
# settings to determine membership in the emulated enabled group. Enabling this
# option has no effect unless `[ldap] user_enabled_emulation` is also enabled.
# (boolean value)
#user_enabled_emulation_use_group_config = false

# A list of LDAP attribute to keystone user attribute pairs used for mapping
# additional attributes to users in keystone. The expected format is
# `<ldap_attr>:<user_attr>`, where `ldap_attr` is the attribute in the LDAP
# object and `user_attr` is the attribute which should appear in the identity
# API. (list value)
#user_additional_attribute_mapping =
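
# For example, to expose the LDAP `displayName` attribute as a `display_name`
# attribute in the identity API (an illustrative mapping, not a default):
# user_additional_attribute_mapping = displayName:display_name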

# The search base to use for groups. Defaults to the `[ldap] suffix` value.
# (string value)
#group_tree_dn = <None>

# The LDAP search filter to use for groups. (string value)
#group_filter = <None>

# The LDAP object class to use for groups. If setting this option to
# `posixGroup`, you may also be interested in enabling the `[ldap]
# group_members_are_ids` option. (string value)
#group_objectclass = groupOfNames

# The LDAP attribute mapped to group IDs in keystone. This must NOT be a
# multivalued attribute. Group IDs are expected to be globally unique across
# keystone domains and URL-safe. (string value)
#group_id_attribute = cn

# The LDAP attribute mapped to group names in keystone. Group names are
# expected to be unique only within a keystone domain and are not expected to
# be URL-safe. (string value)
#group_name_attribute = ou

# The LDAP attribute used to indicate that a user is a member of the group.
# (string value)
#group_member_attribute = member

# Enable this option if the members of the group object class are keystone user
# IDs rather than LDAP DNs. This is the case when using `posixGroup` as the
# group object class in Open Directory. (boolean value)
#group_members_are_ids = false
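
# For example, a `posixGroup`-based directory (as noted above) would typically
# also use the `memberUid` attribute, which holds user IDs rather than DNs:
# group_objectclass = posixGroup
# group_member_attribute = memberUid
# group_members_are_ids = true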

# The LDAP attribute mapped to group descriptions in keystone. (string value)
#group_desc_attribute = description

# DEPRECATED: List of group attributes to ignore on create and update. This is
# only used for write operations. (list value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_attribute_ignore =

# DEPRECATED: If enabled, keystone is allowed to create groups in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_allow_create = true

# DEPRECATED: If enabled, keystone is allowed to update groups in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_allow_update = true

# DEPRECATED: If enabled, keystone is allowed to delete groups in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_allow_delete = true

# A list of LDAP attribute to keystone group attribute pairs used for mapping
# additional attributes to groups in keystone. The expected format is
# `<ldap_attr>:<group_attr>`, where `ldap_attr` is the attribute in the LDAP
# object and `group_attr` is the attribute which should appear in the identity
# API. (list value)
#group_additional_attribute_mapping =

# An absolute path to a CA certificate file to use when communicating with LDAP
# servers. This option will take precedence over `[ldap] tls_cacertdir`, so
# there is no reason to set both. (string value)
#tls_cacertfile = <None>

# An absolute path to a CA certificate directory to use when communicating with
# LDAP servers. There is no reason to set this option if you've also set
# `[ldap] tls_cacertfile`. (string value)
#tls_cacertdir = <None>

# Enable TLS when communicating with LDAP servers. You should also set the
# `[ldap] tls_cacertfile` and `[ldap] tls_cacertdir` options when using this
# option. Do not set this option if you are using LDAP over SSL (LDAPS) instead
# of TLS. (boolean value)
#use_tls = false
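
# For example, to enable TLS with a CA bundle file (hypothetical path):
# use_tls = true
# tls_cacertfile = /etc/ssl/certs/ca-bundle.pem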

# Specifies which checks to perform against client certificates on incoming TLS
# sessions. If set to `demand`, then a certificate will always be requested and
# required from the LDAP server. If set to `allow`, then a certificate will
# always be requested but not required from the LDAP server. If set to `never`,
# then a certificate will never be requested. (string value)
# Allowed values: demand, never, allow
#tls_req_cert = demand

# Enable LDAP connection pooling for queries to the LDAP server. There is
# typically no reason to disable this. (boolean value)
#use_pool = true

# The size of the LDAP connection pool. This option has no effect unless
# `[ldap] use_pool` is also enabled. (integer value)
# Minimum value: 1
#pool_size = 10

# The maximum number of times to attempt reconnecting to the LDAP server before
# aborting. A value of zero prevents retries. This option has no effect unless
# `[ldap] use_pool` is also enabled. (integer value)
# Minimum value: 0
#pool_retry_max = 3

# The number of seconds to wait before attempting to reconnect to the LDAP
# server. This option has no effect unless `[ldap] use_pool` is also enabled.
# (floating point value)
#pool_retry_delay = 0.1

# The connection timeout to use with the LDAP server. A value of `-1` means
# that connections will never timeout. This option has no effect unless `[ldap]
# use_pool` is also enabled. (integer value)
# Minimum value: -1
#pool_connection_timeout = -1

# The maximum connection lifetime to the LDAP server in seconds. When this
# lifetime is exceeded, the connection will be unbound and removed from the
# connection pool. This option has no effect unless `[ldap] use_pool` is also
# enabled. (integer value)
# Minimum value: 1
#pool_connection_lifetime = 600

# Enable LDAP connection pooling for end user authentication. There is
# typically no reason to disable this. (boolean value)
#use_auth_pool = true

# The size of the connection pool to use for end user authentication. This
# option has no effect unless `[ldap] use_auth_pool` is also enabled. (integer
# value)
# Minimum value: 1
#auth_pool_size = 100

# The maximum end user authentication connection lifetime to the LDAP server in
# seconds. When this lifetime is exceeded, the connection will be unbound and
# removed from the connection pool. This option has no effect unless `[ldap]
# use_auth_pool` is also enabled. (integer value)
# Minimum value: 1
#auth_pool_connection_lifetime = 60


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations. (integer value)
#socket_timeout = 10000


[memcache]

#
# From keystone
#

# Comma-separated list of memcached servers in the format of
# `host:port,host:port` that keystone should use for the `memcache` token
# persistence provider and other memcache-backed KVS drivers. This
# configuration value is NOT used for intermediary caching between keystone and
# other backends, such as SQL and LDAP (for that, see the `[cache]` section).
# Multiple keystone servers in the same deployment should use the same set of
# memcached servers to ensure that data (such as UUID tokens) created by one
# node is available to the others. (list value)
#servers = localhost:11211
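
# For example, two keystone nodes would both point at the same pair of
# memcached servers (hypothetical addresses):
# servers = 192.168.0.10:11211,192.168.0.11:11211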

# Number of seconds a memcached server is considered dead before it is tried
# again. This is used by the key value store system (including the `memcache`
# and `memcache_pool` options for the `[token] driver` persistence backend).
# (integer value)
#dead_retry = 300

# Timeout in seconds for every call to a server. This is used by the key value
# store system (including the `memcache` and `memcache_pool` options for the
# `[token] driver` persistence backend). (integer value)
#socket_timeout = 3

# Max total number of open connections to every memcached server. This is used
# by the key value store system (including the `memcache` and `memcache_pool`
# options for the `[token] driver` persistence backend). (integer value)
#pool_maxsize = 10

# Number of seconds a connection to memcached is held unused in the pool before
# it is closed. This is used by the key value store system (including the
# `memcache` and `memcache_pool` options for the `[token] driver` persistence
# backend). (integer value)
#pool_unused_timeout = 60

# Number of seconds that an operation will wait to get a memcache client
# connection. This is used by the key value store system (including the
# `memcache` and `memcache_pool` options for the `[token] driver` persistence
# backend). (integer value)
#pool_connection_get_timeout = 10


[oauth1]

#
# From keystone
#

# Entry point for the OAuth backend driver in the `keystone.oauth1` namespace.
# Typically, there is no reason to set this option unless you are providing a
# custom entry point. (string value)
#driver = sql

# Number of seconds for the OAuth Request Token to remain valid after being
# created. This is the amount of time the user has to authorize the token.
# Setting this option to zero means that request tokens will last forever.
# (integer value)
# Minimum value: 0
#request_token_duration = 28800

# Number of seconds for the OAuth Access Token to remain valid after being
# created. This is the amount of time the consumer has to interact with the
# service provider (which is typically keystone). Setting this option to zero
# means that access tokens will last forever. (integer value)
# Minimum value: 0
#access_token_duration = 86400


[os_inherit]

#
# From keystone
#

# DEPRECATED: This allows domain-based role assignments to be inherited to
# projects owned by that domain, or from parent projects to child projects.
# (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: The option to disable the OS-INHERIT functionality has been
# deprecated in the Mitaka release and will be removed in the Ocata release.
# Starting in the Ocata release, OS-INHERIT functionality will always be
# enabled.
#enabled = true


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false

# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =

# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>

# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =

# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =

# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The deadline for an rpc reply message delivery. Only used when caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support
# routing, otherwise use routable addressing (string value)
#addressing_mode = dynamic

# Address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive

# Address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast

# Address prefix used when sending to any server in a group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
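
For example, to emit notifications over the messaging bus, a deployment typically sets the `messagingv2` driver and, optionally, a dedicated transport. This is a minimal sketch; the credentials and host name are placeholders, not defaults:

```ini
[oslo_messaging_notifications]
# Send notifications using the 2.0 message format.
driver = messagingv2
# Optional: use a separate broker for notifications. If omitted, the RPC
# transport configuration is reused. RABBIT_PASS and "controller" are
# illustrative placeholders.
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
topics = notifications
```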


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
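
The SSL-related options above are typically set together when connecting to an SSL-enabled RabbitMQ broker. The following sketch shows one plausible combination; the certificate paths are illustrative, and `TLSv1_2` is only valid on distributions whose SSL library supports it (see the `kombu_ssl_version` description above):

```ini
[oslo_messaging_rabbit]
rabbit_use_ssl = true
kombu_ssl_version = TLSv1_2
kombu_ssl_ca_certs = /etc/ssl/certs/rabbit-ca.pem
kombu_ssl_certfile = /etc/ssl/certs/rabbit-client.pem
kombu_ssl_keyfile = /etc/ssl/private/rabbit-client.key
```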

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false

# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
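
The deprecated `rabbit_host`, `rabbit_port`, `rabbit_hosts`, `rabbit_userid`, `rabbit_password`, and `rabbit_virtual_host` options above are all replaced by a single `[DEFAULT]/transport_url`. As a sketch, the equivalent of the default values shown would be the following; for an HA cluster, multiple `host:port` pairs can be comma-separated in the URL:

```ini
[DEFAULT]
# Replaces rabbit_userid/rabbit_password/rabbit_host/rabbit_port/
# rabbit_virtual_host with a single URL.
transport_url = rabbit://guest:guest@localhost:5672/
```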

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the RabbitMQ broker is considered down if the
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold to check the
# heartbeat. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max = <None>

# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>

# How often to send heartbeats for consumers' connections (integer value)
#heartbeat_interval = 3

# Enable SSL (boolean value)
#ssl = <None>

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>

# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
# value)
#tcp_user_timeout = 0.25

# Set the delay for reconnecting to a host that has a connection error.
# (floating point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single

# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Maximum number of unacknowledged messages which RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Maximum number of unacknowledged messages which RabbitMQ can send to the rpc
# listener. (integer value)
#rpc_listener_prefetch_count = 100

# Maximum number of unacknowledged messages which RabbitMQ can send to the rpc
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# reply. -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual number of retry attempts is
# not 0, the RPC request could be processed more than once (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an Ethernet interface, or an
# IP address. The "host" option should point or resolve to this address.
# (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false


[oslo_middleware]

#
# From oslo.middleware
#

# The maximum body size for each request, in bytes. (integer value)
# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
# Deprecated group/name - [DEFAULT]/max_request_body_size
#max_request_body_size = 114688

# DEPRECATED: The HTTP Header that will be used to determine what the original
# request protocol scheme was, even if it was hidden by a SSL termination
# proxy. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#secure_proxy_ssl_header = X-Forwarded-Proto

# Whether the application is behind a proxy or not. This determines if the
# middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d


[paste_deploy]

#
# From keystone
#

# Name of (or absolute path to) the Paste Deploy configuration file that
# composes middleware and the keystone application itself into actual WSGI
# entry points. See http://pythonpaste.org/deploy/ for additional documentation
# on the file's format. (string value)
#config_file = keystone-paste.ini


[policy]

#
# From keystone
#

# Entry point for the policy backend driver in the `keystone.policy` namespace.
# Supplied drivers are `rules` (which does not support any CRUD operations for
# the v3 policy API) and `sql`. Typically, there is no reason to set this
# option unless you are providing a custom entry point. (string value)
#driver = sql

# Maximum number of entities that will be returned in a policy collection.
# (integer value)
#list_limit = <None>


[profiler]

#
# From osprofiler
#

#
# Enables profiling for all services on this node. Default value is False
# (fully disables the profiling feature).
#
# Possible values:
#
# * True: Enables the feature.
# * False: Disables the feature. The profiling cannot be started via this
#   project's operations. If the profiling is triggered by another project,
#   this project's part will be empty.
#  (boolean value)
# Deprecated group/name - [profiler]/profiler_enabled
#enabled = false

#
# Enables SQL requests profiling in services. Default value is False (SQL
# requests won't be traced).
#
# Possible values:
#
# * True: Enables SQL requests profiling. Each SQL query will be part of the
#   trace and can then be analyzed for how much time was spent on it.
# * False: Disables SQL requests profiling. The spent time is only shown at a
#   higher level of operations. Single SQL queries cannot be analyzed this
#   way.
#  (boolean value)
#trace_sqlalchemy = false

#
# Secret key(s) to use for encrypting context data for performance profiling.
# This string value should have the following format:
# <key1>[,<key2>,...<keyn>],
# where each key is some random string. A user who triggers the profiling via
# the REST API has to set one of these keys in the headers of the REST API call
# to include profiling results of this node for this particular project.
#
# Both the "enabled" flag and the "hmac_keys" config option should be set to
# enable profiling. Also, to generate correct profiling information across all
# services, at least one key needs to be consistent between OpenStack projects.
# This ensures it can be used from the client side to generate the trace,
# containing information from all possible resources. (string value)
#hmac_keys = SECRET_KEY

#
# Connection string for a notifier backend. Default value is messaging:// which
# sets the notifier to oslo_messaging.
#
# Examples of possible values:
#
# * messaging://: use oslo_messaging driver for sending notifications.
#  (string value)
#connection_string = messaging://
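
Putting the options above together, a sketch of a minimal profiling setup looks like the following. `SECRET_KEY` is a placeholder; the same key must be configured in every service whose operations should appear in the trace:

```ini
[profiler]
enabled = true
# Also trace individual SQL queries (optional).
trace_sqlalchemy = true
# Shared secret; must match across participating OpenStack services.
hmac_keys = SECRET_KEY
connection_string = messaging://
```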


[resource]

#
# From keystone
#

# Entry point for the resource driver in the `keystone.resource` namespace.
# Only a `sql` driver is supplied by keystone. If a resource driver is not
# specified, the assignment driver will choose the resource driver to maintain
# backwards compatibility with older configuration files. (string value)
#driver = <None>

# Toggle for resource caching. This has no effect unless global caching is
# enabled. (boolean value)
# Deprecated group/name - [assignment]/caching
#caching = true

# Time to cache resource data in seconds. This has no effect unless global
# caching is enabled. (integer value)
# Deprecated group/name - [assignment]/cache_time
#cache_time = <None>

# Maximum number of entities that will be returned in a resource collection.
# (integer value)
# Deprecated group/name - [assignment]/list_limit
#list_limit = <None>

# Name of the domain that owns the `admin_project_name`. If left unset, then
# there is no admin project. `[resource] admin_project_name` must also be set
# to use this option. (string value)
#admin_project_domain_name = <None>

# This is a special project which represents cloud-level administrator
# privileges across services. Tokens scoped to this project will contain a true
# `is_admin_project` attribute to indicate to policy systems that the role
# assignments on that specific project should apply equally across every
# project. If left unset, then there is no admin project, and thus no explicit
# means of cross-project role assignments. `[resource]
# admin_project_domain_name` must also be set to use this option. (string
# value)
#admin_project_name = <None>

# This controls whether the names of projects are restricted from containing
# URL-reserved characters. If set to `new`, attempts to create or update a
# project with a URL-unsafe name will fail. If set to `strict`, attempts to
# scope a token with a URL-unsafe project name will fail, thereby forcing all
# project names to be updated to be URL-safe. (string value)
# Allowed values: off, new, strict
#project_name_url_safe = off

# This controls whether the names of domains are restricted from containing
# URL-reserved characters. If set to `new`, attempts to create or update a
# domain with a URL-unsafe name will fail. If set to `strict`, attempts to
# scope a token with a URL-unsafe domain name will fail, thereby forcing all
# domain names to be updated to be URL-safe. (string value)
# Allowed values: off, new, strict
#domain_name_url_safe = off


[revoke]

#
# From keystone
#

# Entry point for the token revocation backend driver in the `keystone.revoke`
# namespace. Keystone only provides a `sql` driver, so there is no reason to
# set this option unless you are providing a custom entry point. (string value)
#driver = sql

# The number of seconds after a token has expired before a corresponding
# revocation event may be purged from the backend. (integer value)
# Minimum value: 0
#expiration_buffer = 1800

# Toggle for revocation event caching. This has no effect unless global caching
# is enabled. (boolean value)
#caching = true

# Time to cache the revocation list and the revocation events (in seconds).
# This has no effect unless global and `[revoke] caching` are both enabled.
# (integer value)
# Deprecated group/name - [token]/revocation_cache_time
#cache_time = 3600


[role]

#
# From keystone
#

# Entry point for the role backend driver in the `keystone.role` namespace.
# Keystone only provides a `sql` driver, so there's no reason to change this
# unless you are providing a custom entry point. (string value)
#driver = <None>

# Toggle for role caching. This has no effect unless global caching is enabled.
# In a typical deployment, there is no reason to disable this. (boolean value)
#caching = true

# Time to cache role data, in seconds. This has no effect unless both global
# caching and `[role] caching` are enabled. (integer value)
#cache_time = <None>

# Maximum number of entities that will be returned in a role collection. This
# may be useful to tune if you have a large number of discrete roles in your
# deployment. (integer value)
#list_limit = <None>


[saml]

#
# From keystone
#

# Determines the lifetime for any SAML assertions generated by keystone, using
# `NotOnOrAfter` attributes. (integer value)
#assertion_expiration_time = 3600

# Name of, or absolute path to, the binary to be used for XML signing. Although
# only the XML Security Library (`xmlsec1`) is supported, it may have a non-
# standard name or path on your system. If keystone cannot find the binary
# itself, you may need to install the appropriate package, use this option to
# specify an absolute path, or adjust keystone's PATH environment variable.
# (string value)
#xmlsec1_binary = xmlsec1

# Absolute path to the public certificate file to use for SAML signing. The
# value cannot contain a comma (`,`). (string value)
#certfile = /etc/keystone/ssl/certs/signing_cert.pem

# Absolute path to the private key file to use for SAML signing. The value
# cannot contain a comma (`,`). (string value)
#keyfile = /etc/keystone/ssl/private/signing_key.pem

# This is the unique entity identifier of the identity provider (keystone) to
# use when generating SAML assertions. This value is required to generate
# identity provider metadata and must be a URI (a URL is recommended). For
# example: `https://keystone.example.com/v3/OS-FEDERATION/saml2/idp`. (uri
# value)
#idp_entity_id = <None>

# This is the single sign-on (SSO) service location of the identity provider
# which accepts HTTP POST requests. A value is required to generate identity
# provider metadata. For example: `https://keystone.example.com/v3/OS-
# FEDERATION/saml2/sso`. (uri value)
#idp_sso_endpoint = <None>
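
As a sketch, a keystone identity provider configuration combining the options above might look like this, using the example URIs from the option descriptions (keystone.example.com is a placeholder host):

```ini
[saml]
idp_entity_id = https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
idp_sso_endpoint = https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
certfile = /etc/keystone/ssl/certs/signing_cert.pem
keyfile = /etc/keystone/ssl/private/signing_key.pem
```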

# This is the language used by the identity provider's organization. (string
# value)
#idp_lang = en

# This is the name of the identity provider's organization. (string value)
#idp_organization_name = SAML Identity Provider

# This is the name of the identity provider's organization to be displayed.
# (string value)
#idp_organization_display_name = OpenStack SAML Identity Provider

# This is the URL of the identity provider's organization. The URL referenced
# here should be useful to humans. (uri value)
#idp_organization_url = https://example.com/

# This is the company name of the identity provider's contact person. (string
# value)
#idp_contact_company = Example, Inc.

# This is the given name of the identity provider's contact person. (string
# value)
#idp_contact_name = SAML Identity Provider Support

# This is the surname of the identity provider's contact person. (string value)
#idp_contact_surname = Support

# This is the email address of the identity provider's contact person. (string
# value)
#idp_contact_email = support@example.com

# This is the telephone number of the identity provider's contact person.
# (string value)
#idp_contact_telephone = +1 800 555 0100

# This is the type of contact that best describes the identity provider's
# contact person. (string value)
# Allowed values: technical, support, administrative, billing, other
#idp_contact_type = other

# Absolute path to the identity provider metadata file. This file should be
# generated with the `keystone-manage saml_idp_metadata` command. There is
# typically no reason to change this value. (string value)
#idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml

# The prefix of the RelayState SAML attribute to use when generating enhanced
# client and proxy (ECP) assertions. In a typical deployment, there is no
# reason to change this value. (string value)
#relay_state_prefix = ss:mem:


[security_compliance]

#
# From keystone
#

# The maximum number of days a user can go without authenticating before being
# considered "inactive" and automatically disabled (locked). This feature is
# disabled by default; set any value to enable it. This feature depends on the
# `sql` backend for the `[identity] driver`. When a user exceeds this threshold
# and is considered "inactive", the user's `enabled` attribute in the HTTP API
# may not match the value of the user's `enabled` column in the user table.
# (integer value)
# Minimum value: 1
#disable_user_account_days_inactive = <None>

# The maximum number of times that a user can fail to authenticate before the
# user account is locked for the number of seconds specified by
# `[security_compliance] lockout_duration`. This feature is disabled by
# default. If this feature is enabled and `[security_compliance]
# lockout_duration` is not set, then users may be locked out indefinitely until
# the user is explicitly enabled via the API. This feature depends on the `sql`
# backend for the `[identity] driver`. (integer value)
# Minimum value: 1
#lockout_failure_attempts = <None>

# The number of seconds a user account will be locked when the maximum number
# of failed authentication attempts (as specified by `[security_compliance]
# lockout_failure_attempts`) is exceeded. Setting this option will have no
# effect unless you also set `[security_compliance] lockout_failure_attempts`
# to a non-zero value. This feature depends on the `sql` backend for the
# `[identity] driver`. (integer value)
# Minimum value: 1
#lockout_duration = 1800

# The number of days for which a password will be considered valid before
# requiring it to be changed. This feature is disabled by default. If enabled,
# new password changes will have an expiration date, however existing passwords
# would not be impacted. This feature depends on the `sql` backend for the
# `[identity] driver`. (integer value)
# Minimum value: 1
#password_expires_days = <None>

# Comma separated list of user IDs to be ignored when checking if a password is
# expired. Passwords for users in this list will not expire. This feature will
# only be enabled if `[security_compliance] password_expires_days` is set.
# (list value)
#password_expires_ignore_user_ids =

# This controls the number of previous user password iterations to keep in
# history, in order to enforce that newly created passwords are unique. Setting
# the value to one (the default) disables this feature. Thus, to enable this
# feature, values must be greater than 1. This feature depends on the `sql`
# backend for the `[identity] driver`. (integer value)
# Minimum value: 1
#unique_last_password_count = 1

# The number of days that a password must be used before the user can change
# it. This prevents users from changing their passwords immediately in order to
# wipe out their password history and reuse an old password. This feature does
# not prevent administrators from manually resetting passwords. It is disabled
# by default and allows for immediate password changes. This feature depends on
# the `sql` backend for the `[identity] driver`. Note: If
# `[security_compliance] password_expires_days` is set, then the value for this
# option should be less than the `password_expires_days`. (integer value)
# Minimum value: 0
#minimum_password_age = 0

# The regular expression used to validate password strength requirements. By
# default, the regular expression will match any password. The following is an
# example of a pattern which requires at least 1 letter, 1 digit, and have a
# minimum length of 7 characters: ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$ This feature
# depends on the `sql` backend for the `[identity] driver`. (string value)
#password_regex = <None>

# Describe your password regular expression here in human-readable language. If
# a password fails to match the regular expression, the contents of this
# configuration variable will be returned to users to explain why their
# requested password was insufficient. (string value)
#password_regex_description = <None>
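
A sketch of a hardened `[security_compliance]` configuration combining the options above follows. All values are illustrative choices, not recommendations; the regular expression is the example given in the `password_regex` description:

```ini
[security_compliance]
# Lock an account for 30 minutes after 6 failed authentication attempts.
lockout_failure_attempts = 6
lockout_duration = 1800
# Expire passwords after 90 days and remember the last 5 passwords.
password_expires_days = 90
unique_last_password_count = 5
minimum_password_age = 1
password_regex = ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$
password_regex_description = Passwords must be at least 7 characters long and contain at least one letter and one digit.
```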


[shadow_users]

#
# From keystone
#

# Entry point for the shadow users backend driver in the
# `keystone.identity.shadow_users` namespace. This driver is used for
# persisting local user references to externally-managed identities (via
# federation, LDAP, etc). Keystone only provides a `sql` driver, so there is no
# reason to change this option unless you are providing a custom entry point.
# (string value)
#driver = sql


[signing]

#
# From keystone
#

# DEPRECATED: Absolute path to the public certificate file to use for signing
# PKI and PKIZ tokens. Set this together with `[signing] keyfile`. For non-
# production environments, you may be interested in using `keystone-manage
# pki_setup` to generate self-signed certificates. There is no reason to set
# this option unless you are using either a `pki` or `pkiz` `[token] provider`.
# (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#certfile = /etc/keystone/ssl/certs/signing_cert.pem

# DEPRECATED: Absolute path to the private key file to use for signing PKI and
# PKIZ tokens. Set this together with `[signing] certfile`. There is no reason
# to set this option unless you are using either a `pki` or `pkiz` `[token]
# provider`. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#keyfile = /etc/keystone/ssl/private/signing_key.pem

# DEPRECATED: Absolute path to the public certificate authority (CA) file to
# use when creating self-signed certificates with `keystone-manage pki_setup`.
# Set this together with `[signing] ca_key`. There is no reason to set this
# option unless you are using a `pki` or `pkiz` `[token] provider` value in a
# non-production environment. Use a `[signing] certfile` issued from a trusted
# certificate authority instead. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#ca_certs = /etc/keystone/ssl/certs/ca.pem

# DEPRECATED: Absolute path to the private certificate authority (CA) key file
# to use when creating self-signed certificates with `keystone-manage
# pki_setup`. Set this together with `[signing] ca_certs`. There is no reason
# to set this option unless you are using a `pki` or `pkiz` `[token] provider`
# value in a non-production environment. Use a `[signing] certfile` issued from
# a trusted certificate authority instead. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#ca_key = /etc/keystone/ssl/private/cakey.pem

# DEPRECATED: Key size (in bits) to use when generating a self-signed token
# signing certificate. There is no reason to set this option unless you are
# using a `pki` or `pkiz` `[token] provider` value in a non-production
# environment. Use a `[signing] certfile` issued from a trusted certificate
# authority instead. (integer value)
# Minimum value: 1024
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#key_size = 2048

# DEPRECATED: The validity period (in days) to use when generating a self-
# signed token signing certificate. There is no reason to set this option
# unless you are using a `pki` or `pkiz` `[token] provider` value in a non-
# production environment. Use a `[signing] certfile` issued from a trusted
# certificate authority instead. (integer value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#valid_days = 3650

# DEPRECATED: The certificate subject to use when generating a self-signed
# token signing certificate. There is no reason to set this option unless you
# are using a `pki` or `pkiz` `[token] provider` value in a non-production
# environment. Use a `[signing] certfile` issued from a trusted certificate
# authority instead. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com


[token]

#
# From keystone
#

# This is a list of external authentication mechanisms which should add token
# binding metadata to tokens, such as `kerberos` or `x509`. Binding metadata is
# enforced according to the `[token] enforce_token_bind` option. (list value)
#bind =

# This controls the token binding enforcement policy on tokens presented to
# keystone with token binding metadata (as specified by the `[token] bind`
# option). `disabled` completely bypasses token binding validation.
# `permissive` and `strict` do not require tokens to have binding metadata (but
# will validate it if present), whereas `required` will always require tokens
# to have binding metadata. `permissive` will allow unsupported binding metadata
# to pass through without validation (usually to be validated at another time
# by another component), whereas `strict` and `required` will demand that the
# included binding metadata be supported by keystone. (string value)
# Allowed values: disabled, permissive, strict, required
#enforce_token_bind = permissive

# The amount of time that a token should remain valid (in seconds). Drastically
# reducing this value may break "long-running" operations that require multiple
# services to coordinate, and will force users to authenticate with
# keystone more frequently. Drastically increasing this value will increase
# load on the `[token] driver`, as more tokens will be simultaneously valid.
# Keystone tokens are also bearer tokens, so a shorter duration will also
# reduce the potential security impact of a compromised token. (integer value)
# Minimum value: 0
# Maximum value: 9223372036854775807
#expiration = 3600

# Entry point for the token provider in the `keystone.token.provider`
# namespace. The token provider controls the token construction, validation,
# and revocation operations. Keystone includes `fernet`, `pkiz`, `pki`, and
# `uuid` token providers. `uuid` tokens must be persisted (using the backend
# specified in the `[token] driver` option), but do not require any extra
# configuration or setup. `fernet` tokens do not need to be persisted at all,
# but require that you run `keystone-manage fernet_setup` (also see the
# `keystone-manage fernet_rotate` command). `pki` and `pkiz` tokens can be
# validated offline, without making HTTP calls to keystone, but require that
# certificates be installed and distributed to facilitate signing tokens and
# later validating those signatures. (string value)
#provider = uuid

# Entry point for the token persistence backend driver in the
# `keystone.token.persistence` namespace. Keystone provides `kvs`, `memcache`,
# `memcache_pool`, and `sql` drivers. The `kvs` backend depends on the
# configuration in the `[kvs]` section. The `memcache` and `memcache_pool`
# options depend on the configuration in the `[memcache]` section. The `sql`
# option (default) depends on the options in your `[database]` section. If
# you're using the `fernet` `[token] provider`, this backend will not be
# utilized to persist tokens at all. (string value)
#driver = sql

# Toggle for caching token creation and validation data. This has no effect
# unless global caching is enabled. (boolean value)
#caching = true

# The number of seconds to cache token creation and validation data. This has
# no effect unless both global and `[token] caching` are enabled. (integer
# value)
# Minimum value: 0
# Maximum value: 9223372036854775807
#cache_time = <None>

# This toggles support for revoking individual tokens by the token identifier
# and thus various token enumeration operations (such as listing all tokens
# issued to a specific user). These operations are used to determine the list
# of tokens to consider revoked. Do not disable this option if you're using the
# `kvs` `[revoke] driver`. (boolean value)
#revoke_by_id = true

# This toggles whether scoped tokens may be re-scoped to a new project or
# domain, thereby preventing users from exchanging a scoped token (including
# those with a default project scope) for any other token. This forces users to
# either authenticate for unscoped tokens (and later exchange that unscoped
# token for tokens with a more specific scope) or to provide their credentials
# in every request for a scoped token to avoid re-scoping altogether. (boolean
# value)
#allow_rescope_scoped_token = true

# DEPRECATED: This controls the hash algorithm to use to uniquely identify PKI
# tokens without having to transmit the entire token to keystone (which may be
# several kilobytes). This can be set to any algorithm that hashlib supports.
# WARNING: Before changing this value, the `auth_token` middleware protecting
# all other services must be configured with the set of hash algorithms to
# expect from keystone (both your old and new value for this option), otherwise
# token revocation will not be processed correctly. (string value)
# Allowed values: md5, sha1, sha224, sha256, sha384, sha512
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#hash_algorithm = md5

# This controls whether roles should be included with tokens that are not
# directly assigned to the token's scope, but are instead linked implicitly to
# other role assignments. (boolean value)
#infer_roles = true

# Enable storing issued token data to token validation cache so that first
# token validation doesn't actually cause full validation cycle. (boolean
# value)
#cache_on_issue = false


[tokenless_auth]

#
# From keystone
#

# The list of distinguished names which identify trusted issuers of client
# certificates allowed to use X.509 tokenless authorization. If the option is
# absent then no certificates will be allowed. The components of a
# distinguished name (DN) value must be separated by commas and contain no
# spaces.
# Furthermore, because an individual DN may contain commas, this configuration
# option may be repeated multiple times to represent multiple values. For
# example, keystone.conf would include two consecutive lines in order to trust
# two different DNs, such as `trusted_issuer = CN=john,OU=keystone,O=openstack`
# and `trusted_issuer = CN=mary,OU=eng,O=abc`. (multi valued)
#trusted_issuer =

# The federated protocol ID used to represent X.509 tokenless authorization.
# This is used in combination with the value of `[tokenless_auth]
# issuer_attribute` to find a corresponding federated mapping. In a typical
# deployment, there is no reason to change this value. (string value)
#protocol = x509

# The name of the WSGI environment variable used to pass the issuer of the
# client certificate to keystone. This attribute is used as an identity
# provider ID for the X.509 tokenless authorization along with the protocol to
# look up its corresponding mapping. In a typical deployment, there is no
# reason to change this value. (string value)
#issuer_attribute = SSL_CLIENT_I_DN


[trust]

#
# From keystone
#

# Delegation and impersonation features using trusts can be optionally
# disabled. (boolean value)
#enabled = true

# Allows authorization to be redelegated from one user to another, effectively
# chaining trusts together. When disabled, the `remaining_uses` attribute of a
# trust is constrained to be zero. (boolean value)
#allow_redelegation = false

# Maximum number of times that authorization can be redelegated from one user
# to another in a chain of trusts. This number may be reduced further for a
# specific trust. (integer value)
#max_redelegation_count = 3

# Entry point for the trust backend driver in the `keystone.trust` namespace.
# Keystone only provides a `sql` driver, so there is no reason to change this
# unless you are providing a custom entry point. (string value)
#driver = sql
keystone-paste.ini

Use the keystone-paste.ini file to configure the Web Server Gateway Interface (WSGI) middleware pipeline for the Identity service:

# Keystone PasteDeploy configuration file.

[filter:debug]
use = egg:oslo.middleware#debug

[filter:request_id]
use = egg:oslo.middleware#request_id

[filter:build_auth_context]
use = egg:keystone#build_auth_context

[filter:token_auth]
use = egg:keystone#token_auth

[filter:admin_token_auth]
# This is deprecated in the M release and will be removed in the O release.
# Use `keystone-manage bootstrap` and remove this from the pipelines below.
use = egg:keystone#admin_token_auth

[filter:json_body]
use = egg:keystone#json_body

[filter:cors]
use = egg:oslo.middleware#cors
oslo_config_project = keystone

[filter:http_proxy_to_wsgi]
use = egg:oslo.middleware#http_proxy_to_wsgi

[filter:ec2_extension]
use = egg:keystone#ec2_extension

[filter:ec2_extension_v3]
use = egg:keystone#ec2_extension_v3

[filter:s3_extension]
use = egg:keystone#s3_extension

[filter:url_normalize]
use = egg:keystone#url_normalize

[filter:sizelimit]
use = egg:oslo.middleware#sizelimit

[filter:osprofiler]
use = egg:osprofiler#osprofiler

[app:public_service]
use = egg:keystone#public_service

[app:service_v3]
use = egg:keystone#service_v3

[app:admin_service]
use = egg:keystone#admin_service

[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension public_service

[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension s3_extension admin_service

[pipeline:api_v3]
# The last item in this pipeline must be service_v3 or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3

[app:public_version_service]
use = egg:keystone#public_version_service

[app:admin_version_service]
use = egg:keystone#admin_version_service

[pipeline:public_version_api]
pipeline = cors sizelimit osprofiler url_normalize public_version_service

[pipeline:admin_version_api]
pipeline = cors sizelimit osprofiler url_normalize admin_version_service

[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
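
The admin_token_auth filter above is deprecated, as its comment notes. Once the deployment has been bootstrapped with keystone-manage bootstrap, the filter can be dropped from each pipeline. As a sketch, the public_api pipeline would then read as follows (the admin_api and api_v3 pipelines are edited the same way):

```ini
[pipeline:public_api]
# The deprecated admin_token_auth filter has been removed from this pipeline.
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service
```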
logging.conf

You can specify a separate logging configuration file in the keystone.conf configuration file. For example, /etc/keystone/logging.conf.

For details, see the Python logging module documentation.

[loggers]
keys=root,access

[handlers]
keys=production,file,access_file,devel

[formatters]
keys=minimal,normal,debug


###########
# Loggers #
###########

[logger_root]
level=WARNING
handlers=file

[logger_access]
level=INFO
qualname=access
handlers=access_file


################
# Log Handlers #
################

[handler_production]
class=handlers.SysLogHandler
level=ERROR
formatter=normal
args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)

[handler_file]
class=handlers.WatchedFileHandler
level=WARNING
formatter=normal
args=('error.log',)

[handler_access_file]
class=handlers.WatchedFileHandler
level=INFO
formatter=minimal
args=('access.log',)

[handler_devel]
class=StreamHandler
level=NOTSET
formatter=debug
args=(sys.stdout,)


##################
# Log Formatters #
##################

[formatter_minimal]
format=%(message)s

[formatter_normal]
format=(%(name)s): %(asctime)s %(levelname)s %(message)s

[formatter_debug]
format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s
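
To have keystone load this file, you can point the oslo.log option log_config_append at it in keystone.conf; when this option is set, it overrides the other logging options in that file. A minimal sketch (the path is an example):

```ini
[DEFAULT]
# Load the logging configuration from an external file. When set, this
# takes precedence over the other log_* options in keystone.conf.
log_config_append = /etc/keystone/logging.conf
```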
policy.json

Use the policy.json file to define additional access controls that apply to the Identity service:

{
    "admin_required": "role:admin or is_admin:1",
    "service_role": "role:service",
    "service_or_admin": "rule:admin_required or rule:service_role",
    "owner" : "user_id:%(user_id)s",
    "admin_or_owner": "rule:admin_required or rule:owner",
    "token_subject": "user_id:%(target.token.user_id)s",
    "admin_or_token_subject": "rule:admin_required or rule:token_subject",
    "service_admin_or_token_subject": "rule:service_or_admin or rule:token_subject",

    "default": "rule:admin_required",

    "identity:get_region": "",
    "identity:list_regions": "",
    "identity:create_region": "rule:admin_required",
    "identity:update_region": "rule:admin_required",
    "identity:delete_region": "rule:admin_required",

    "identity:get_service": "rule:admin_required",
    "identity:list_services": "rule:admin_required",
    "identity:create_service": "rule:admin_required",
    "identity:update_service": "rule:admin_required",
    "identity:delete_service": "rule:admin_required",

    "identity:get_endpoint": "rule:admin_required",
    "identity:list_endpoints": "rule:admin_required",
    "identity:create_endpoint": "rule:admin_required",
    "identity:update_endpoint": "rule:admin_required",
    "identity:delete_endpoint": "rule:admin_required",

    "identity:get_domain": "rule:admin_required or token.project.domain.id:%(target.domain.id)s",
    "identity:list_domains": "rule:admin_required",
    "identity:create_domain": "rule:admin_required",
    "identity:update_domain": "rule:admin_required",
    "identity:delete_domain": "rule:admin_required",

    "identity:get_project": "rule:admin_required or project_id:%(target.project.id)s",
    "identity:list_projects": "rule:admin_required",
    "identity:list_user_projects": "rule:admin_or_owner",
    "identity:create_project": "rule:admin_required",
    "identity:update_project": "rule:admin_required",
    "identity:delete_project": "rule:admin_required",

    "identity:get_user": "rule:admin_or_owner",
    "identity:list_users": "rule:admin_required",
    "identity:create_user": "rule:admin_required",
    "identity:update_user": "rule:admin_required",
    "identity:delete_user": "rule:admin_required",
    "identity:change_password": "rule:admin_or_owner",

    "identity:get_group": "rule:admin_required",
    "identity:list_groups": "rule:admin_required",
    "identity:list_groups_for_user": "rule:admin_or_owner",
    "identity:create_group": "rule:admin_required",
    "identity:update_group": "rule:admin_required",
    "identity:delete_group": "rule:admin_required",
    "identity:list_users_in_group": "rule:admin_required",
    "identity:remove_user_from_group": "rule:admin_required",
    "identity:check_user_in_group": "rule:admin_required",
    "identity:add_user_to_group": "rule:admin_required",

    "identity:get_credential": "rule:admin_required",
    "identity:list_credentials": "rule:admin_required",
    "identity:create_credential": "rule:admin_required",
    "identity:update_credential": "rule:admin_required",
    "identity:delete_credential": "rule:admin_required",

    "identity:ec2_get_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",
    "identity:ec2_list_credentials": "rule:admin_or_owner",
    "identity:ec2_create_credential": "rule:admin_or_owner",
    "identity:ec2_delete_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",

    "identity:get_role": "rule:admin_required",
    "identity:list_roles": "rule:admin_required",
    "identity:create_role": "rule:admin_required",
    "identity:update_role": "rule:admin_required",
    "identity:delete_role": "rule:admin_required",
    "identity:get_domain_role": "rule:admin_required",
    "identity:list_domain_roles": "rule:admin_required",
    "identity:create_domain_role": "rule:admin_required",
    "identity:update_domain_role": "rule:admin_required",
    "identity:delete_domain_role": "rule:admin_required",

    "identity:get_implied_role": "rule:admin_required",
    "identity:list_implied_roles": "rule:admin_required",
    "identity:create_implied_role": "rule:admin_required",
    "identity:delete_implied_role": "rule:admin_required",
    "identity:list_role_inference_rules": "rule:admin_required",
    "identity:check_implied_role": "rule:admin_required",

    "identity:check_grant": "rule:admin_required",
    "identity:list_grants": "rule:admin_required",
    "identity:create_grant": "rule:admin_required",
    "identity:revoke_grant": "rule:admin_required",

    "identity:list_role_assignments": "rule:admin_required",
    "identity:list_role_assignments_for_tree": "rule:admin_required",

    "identity:get_policy": "rule:admin_required",
    "identity:list_policies": "rule:admin_required",
    "identity:create_policy": "rule:admin_required",
    "identity:update_policy": "rule:admin_required",
    "identity:delete_policy": "rule:admin_required",

    "identity:check_token": "rule:admin_or_token_subject",
    "identity:validate_token": "rule:service_admin_or_token_subject",
    "identity:validate_token_head": "rule:service_or_admin",
    "identity:revocation_list": "rule:service_or_admin",
    "identity:revoke_token": "rule:admin_or_token_subject",

    "identity:create_trust": "user_id:%(trust.trustor_user_id)s",
    "identity:list_trusts": "",
    "identity:list_roles_for_trust": "",
    "identity:get_role_for_trust": "",
    "identity:delete_trust": "",

    "identity:create_consumer": "rule:admin_required",
    "identity:get_consumer": "rule:admin_required",
    "identity:list_consumers": "rule:admin_required",
    "identity:delete_consumer": "rule:admin_required",
    "identity:update_consumer": "rule:admin_required",

    "identity:authorize_request_token": "rule:admin_required",
    "identity:list_access_token_roles": "rule:admin_required",
    "identity:get_access_token_role": "rule:admin_required",
    "identity:list_access_tokens": "rule:admin_required",
    "identity:get_access_token": "rule:admin_required",
    "identity:delete_access_token": "rule:admin_required",

    "identity:list_projects_for_endpoint": "rule:admin_required",
    "identity:add_endpoint_to_project": "rule:admin_required",
    "identity:check_endpoint_in_project": "rule:admin_required",
    "identity:list_endpoints_for_project": "rule:admin_required",
    "identity:remove_endpoint_from_project": "rule:admin_required",

    "identity:create_endpoint_group": "rule:admin_required",
    "identity:list_endpoint_groups": "rule:admin_required",
    "identity:get_endpoint_group": "rule:admin_required",
    "identity:update_endpoint_group": "rule:admin_required",
    "identity:delete_endpoint_group": "rule:admin_required",
    "identity:list_projects_associated_with_endpoint_group": "rule:admin_required",
    "identity:list_endpoints_associated_with_endpoint_group": "rule:admin_required",
    "identity:get_endpoint_group_in_project": "rule:admin_required",
    "identity:list_endpoint_groups_for_project": "rule:admin_required",
    "identity:add_endpoint_group_to_project": "rule:admin_required",
    "identity:remove_endpoint_group_from_project": "rule:admin_required",

    "identity:create_identity_provider": "rule:admin_required",
    "identity:list_identity_providers": "rule:admin_required",
    "identity:get_identity_providers": "rule:admin_required",
    "identity:update_identity_provider": "rule:admin_required",
    "identity:delete_identity_provider": "rule:admin_required",

    "identity:create_protocol": "rule:admin_required",
    "identity:update_protocol": "rule:admin_required",
    "identity:get_protocol": "rule:admin_required",
    "identity:list_protocols": "rule:admin_required",
    "identity:delete_protocol": "rule:admin_required",

    "identity:create_mapping": "rule:admin_required",
    "identity:get_mapping": "rule:admin_required",
    "identity:list_mappings": "rule:admin_required",
    "identity:delete_mapping": "rule:admin_required",
    "identity:update_mapping": "rule:admin_required",

    "identity:create_service_provider": "rule:admin_required",
    "identity:list_service_providers": "rule:admin_required",
    "identity:get_service_provider": "rule:admin_required",
    "identity:update_service_provider": "rule:admin_required",
    "identity:delete_service_provider": "rule:admin_required",

    "identity:get_auth_catalog": "",
    "identity:get_auth_projects": "",
    "identity:get_auth_domains": "",

    "identity:list_projects_for_user": "",
    "identity:list_domains_for_user": "",

    "identity:list_revoke_events": "",

    "identity:create_policy_association_for_endpoint": "rule:admin_required",
    "identity:check_policy_association_for_endpoint": "rule:admin_required",
    "identity:delete_policy_association_for_endpoint": "rule:admin_required",
    "identity:create_policy_association_for_service": "rule:admin_required",
    "identity:check_policy_association_for_service": "rule:admin_required",
    "identity:delete_policy_association_for_service": "rule:admin_required",
    "identity:create_policy_association_for_region_and_service": "rule:admin_required",
    "identity:check_policy_association_for_region_and_service": "rule:admin_required",
    "identity:delete_policy_association_for_region_and_service": "rule:admin_required",
    "identity:get_policy_for_endpoint": "rule:admin_required",
    "identity:list_endpoints_for_policy": "rule:admin_required",

    "identity:create_domain_config": "rule:admin_required",
    "identity:get_domain_config": "rule:admin_required",
    "identity:update_domain_config": "rule:admin_required",
    "identity:delete_domain_config": "rule:admin_required",
    "identity:get_domain_config_default": "rule:admin_required"
}
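
Rules can be tightened by editing these entries. As an illustrative sketch (not part of the default file), a deployment could restrict user lookups to administrators of the target user's domain by adding a custom rule. The target.user.domain_id attribute follows the pattern used in keystone's v3 cloud sample policy; verify it against your release before relying on it:

```json
{
    "domain_admin": "rule:admin_required and domain_id:%(target.user.domain_id)s",
    "identity:get_user": "rule:domain_admin or rule:owner"
}
```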

Caching layer

Identity supports a caching layer that sits above the configurable subsystems, such as token or assignment. The majority of the caching configuration options are set in the [cache] section, but each section that can be cached usually also has a caching option to toggle caching for that specific section. By default, caching is globally disabled. Options are as follows:

Description of cache configuration options

[memcache]

dead_retry = 300
    (Integer) Number of seconds a memcached server is considered dead before it is tried again. This is used by the key value store system (e.g. the token pooled memcached persistence backend).
pool_connection_get_timeout = 10
    (Integer) Number of seconds that an operation will wait to get a memcache client connection. This is used by the key value store system (e.g. the token pooled memcached persistence backend).
pool_maxsize = 10
    (Integer) Maximum total number of open connections to every memcached server. This is used by the key value store system (e.g. the token pooled memcached persistence backend).
pool_unused_timeout = 60
    (Integer) Number of seconds a connection to memcached is held unused in the pool before it is closed. This is used by the key value store system (e.g. the token pooled memcached persistence backend).
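
For example, a minimal keystone.conf fragment that turns on global caching with a memcached backend and enables token caching might look like the following (the memcached address is an example; option names are from oslo.cache):

```ini
[cache]
# Global cache toggle; per-section caching options have no effect
# while this is false.
enabled = true
backend = dogpile.cache.memcached
memcache_servers = localhost:11211

[token]
# Per-section toggle, effective only when [cache] enabled is true.
caching = true
```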

Current functional back ends are:

dogpile.cache.memcached
Memcached back end using the standard python-memcached library.
dogpile.cache.pylibmc
Memcached back end using the pylibmc library.
dogpile.cache.bmemcached
Memcached using the python-binary-memcached library.
dogpile.cache.redis
Redis back end.
dogpile.cache.dbm
Local DBM file back end.
dogpile.cache.memory
In-memory cache, not suitable for use outside of testing as it does not clean up its internal cache on cache expiration and does not share its cache between processes. This means that caching and cache invalidation will not be consistent or reliable.
dogpile.cache.mongo
MongoDB as caching back end.

New, updated, and deprecated options in Newton for Identity service

Deprecated options

Deprecated option        New option
[DEFAULT] use_syslog     None

This chapter details the Identity service configuration options. For installation prerequisites and step-by-step walkthroughs, see the Newton Installation Tutorials and Guides for your distribution and the OpenStack Administrator Guide.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Image service

Image API configuration

The Image service has two APIs: the user-facing API, and the registry API, which is for internal requests that require access to the database.

Both of the APIs currently have two major versions: v1 (SUPPORTED) and v2 (CURRENT). You can run either or both versions by setting appropriate values of enable_v1_api, enable_v2_api, enable_v1_registry, and enable_v2_registry. If the v2 API is used, running glance-registry is optional, as v2 of glance-api can connect directly to the database.
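
For example, a glance-api.conf fragment for a v2-only deployment without the registry service, using the enable_* options described below, might look like:

```ini
[DEFAULT]
# Serve only the v2 Images API; glance-api connects directly to the
# database, so glance-registry does not need to run.
enable_v1_api = false
enable_v1_registry = false
enable_v2_api = true
enable_v2_registry = false
```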

To assist you in formulating your deployment strategy for the Image APIs, the Glance team has published a statement concerning the status and development plans of the APIs: Using public Image API.

Configuration options

Tables of all the options used to configure the APIs, including enabling SSL and modifying WSGI settings, are found below.

Description of API configuration options

[DEFAULT]

admin_role = admin
    (String) Role used to identify an authenticated user as administrator. Provide a string value representing a Keystone role to identify an administrative user. Users with this role will be granted administrative privileges. The default value for this option is 'admin'.
    Possible values: a string value which is a valid Keystone role.
    Related options: none.

allow_anonymous_access = False
    (Boolean) Allow limited access to unauthenticated users. Assign a boolean to determine API access for unauthenticated users. When set to False, the API cannot be accessed by unauthenticated users. When set to True, unauthenticated users can access the API with read-only privileges. This, however, only applies when using ContextMiddleware.
    Possible values: True, False.
    Related options: none.

available_plugins =
    (List) A list of artifacts that are allowed, in the format name or name-version. An empty list means that any artifact can be loaded.

client_socket_timeout = 900
    (Integer) Timeout for client connections' socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever.
    Possible values: zero, or a positive integer.
    Related options: none.

enable_v1_api = True
    (Boolean) Deploy the v1 OpenStack Images API. When this option is set to True, the Glance service will respond to requests on registered endpoints conforming to the v1 OpenStack Images API.
    Notes:
    - If this option is enabled, then enable_v1_registry must also be set to True to enable mandatory usage of the Registry service with the v1 API.
    - If this option is disabled, then the enable_v1_registry option, which is enabled by default, is also recommended to be disabled.
    - This option is separate from enable_v2_api; both the v1 and v2 OpenStack Images API can be deployed independently of each other.
    - If deploying only the v2 Images API, this option, which is enabled by default, should be disabled.
    Possible values: True, False.
    Related options: enable_v1_registry, enable_v2_api.

enable_v1_registry = True
    (Boolean) Deploy the v1 API Registry service. When this option is set to True, the Registry service will be enabled in Glance for v1 API requests.
    Notes:
    - Use of the Registry is mandatory in the v1 API, so this option must be set to True if the enable_v1_api option is enabled.
    - If deploying only the v2 OpenStack Images API, this option, which is enabled by default, should be disabled.
    Possible values: True, False.
    Related options: enable_v1_api.

enable_v2_api = True
    (Boolean) Deploy the v2 OpenStack Images API. When this option is set to True, the Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API.
    Notes:
    - If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled.
    - This option is separate from enable_v1_api; both the v1 and v2 OpenStack Images API can be deployed independently of each other.
    - If deploying only the v1 Images API, this option, which is enabled by default, should be disabled.
    Possible values: True, False.
    Related options: enable_v2_registry, enable_v1_api.

enable_v2_registry = True
    (Boolean) Deploy the v2 API Registry service. When this option is set to True, the Registry service will be enabled in Glance for v2 API requests.
    Notes:
    - Use of the Registry is optional in the v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api.
    - If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled.
    Possible values: True, False.
    Related options: enable_v2_api, data_api.

http_keepalive = True
    (Boolean) Set the keep-alive option for HTTP over TCP. Provide a boolean value to determine the sending of keep-alive packets. If set to False, the server returns the header "Connection: close". If set to True, the server returns "Connection: Keep-Alive" in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to False if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client.
    Possible values: True, False.
    Related options: none.

image_size_cap = 1099511627776
    (Integer) Maximum size of an image a user can upload, in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB).
    Notes:
    - This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808).
    - This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures, while setting it to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and the storage capacity available.
    Possible values: any positive number less than or equal to 9223372036854775808.
load_enabled = True (Boolean) When false, no artifacts can be loaded regardless of available_plugins. When true, artifacts can be loaded.
location_strategy = location_order (String) Strategy to determine the preference order of image locations.$sentinal$This configuration option indicates the strategy to determine the order in which an image’s locations must be accessed to serve the image’s data. Glance then retrieves the image data from the first responsive active location it finds in this list.$sentinal$This option takes one of two possible values location_order and store_type. The default value is location_order, which suggests that image data be served by using locations in the order they are stored in Glance. The store_type value sets the image location preference based on the order in which the storage backends are listed as a comma separated list for the configuration option store_type_preference.$sentinal$Possible values: * location_order * store_type$sentinal$Related options: * store_type_preference
max_header_line = 16384 (Integer) Maximum line size of message headers.$sentinal$Provide an integer value representing a length to limit the size of message headers. The default value is 16384.$sentinal$NOTE: max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, it is to be kept in mind that larger values for max_header_line would flood the logs.$sentinal$Setting max_header_line to 0 sets no limit for the line size of message headers.$sentinal$Possible values: * 0 * Positive integer$sentinal$Related options: * None
max_request_id_length = 64 (Integer) Limit the request ID length.$sentinal$Provide an integer value to limit the length of the request ID to the specified length. The default value is 64. Users can change this to any integer value between 0 and 16384, keeping in mind that a larger value may flood the logs.$sentinal$Possible values: * Integer value between 0 and 16384$sentinal$Related options: * None
owner_is_tenant = True (Boolean) Set the image owner to tenant or the authenticated user.$sentinal$Assign a boolean value to determine the owner of an image. When set to True, the owner of the image is the tenant. When set to False, the owner of the image will be the authenticated user issuing the request. Setting it to False makes the image private to the associated user and sharing with other users within the same tenant (or “project”) requires explicit image sharing via image membership.$sentinal$Possible values: * True * False$sentinal$Related options: * None
public_endpoint = None (String) Public url endpoint to use for Glance/Glare versions response.$sentinal$This is the public url endpoint that will appear in the Glance/Glare “versions” response. If no value is specified, the endpoint that is displayed in the version’s response is that of the host running the API service. Change the endpoint to represent the proxy URL if the API service is running behind a proxy. If the service is running behind a load balancer, add the load balancer’s URL for this value.$sentinal$Possible values: * None * Proxy URL * Load balancer URL$sentinal$Related options: * None
secure_proxy_ssl_header = None (String) DEPRECATED: The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is “HTTP_X_FORWARDED_PROTO”. Use the http_proxy_to_wsgi middleware instead.
send_identity_headers = False (Boolean) Send headers received from identity when making requests to registry.$sentinal$Typically, Glance registry can be deployed in multiple flavors, which may or may not include authentication. For example, trusted-auth is a flavor that does not require the registry service to authenticate the requests it receives. However, the registry service may still need a user context to be populated to serve the requests. This can be achieved by the caller (the Glance API usually) passing through the headers it received from authenticating with identity for the same request. The typical headers sent are X-User-Id, X-Tenant-Id, X-Roles, X-Identity-Status and X-Service-Catalog.$sentinal$Provide a boolean value to determine whether to send the identity headers to provide tenant and user information along with the requests to registry service. By default, this option is set to False, which means that user and tenant information is not available readily. It must be obtained by authenticating. Hence, if this is set to False, flavor must be set to value that either includes authentication or authenticated user context.$sentinal$Possible values: * True * False$sentinal$Related options: * flavor
show_multiple_locations = False (Boolean) DEPRECATED: Show all image locations when returning an image.$sentinal$This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration opt location_strategy. The image locations are shown under the image property locations.$sentinal$NOTES: * Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! * If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False.$sentinal$Possible values: * True * False$sentinal$Related options: * show_image_direct_url * location_strategy This option will be removed in the Ocata release because the same functionality can be achieved with greater granularity by using policies. Please see the Newton release notes for more information.
tcp_keepidle = 600 (Integer) Set the wait time before a connection recheck.$sentinal$Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds.$sentinal$Setting tcp_keepidle helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment.$sentinal$Possible values: * Positive integer value representing time in seconds$sentinal$Related options: * None
use_user_token = True (Boolean) DEPRECATED: Whether to pass through the user token when making requests to the registry. To prevent failures with token expiration during large file uploads, it is recommended to set this parameter to False. If “use_user_token” is not in effect, then admin credentials can be specified. This option was considered harmful and has been deprecated in the Mitaka release. It will be removed in the Ocata release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support.
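Several of the [DEFAULT] options above interact. As an illustrative sketch (values chosen for the example, not prescriptive), a deployment that serves only the v2 Images API might use a glance-api.conf fragment like this:

[DEFAULT]
# Serve only the v2 Images API; v1 and its mandatory Registry are disabled.
enable_v1_api = False
enable_v1_registry = False
enable_v2_api = True
# Registry is optional in v2; leave it disabled unless data_api is set
# to glance.db.registry.api.
enable_v2_registry = False
# Keep TCP connections alive and recheck idle connections every 600 seconds.
http_keepalive = True
tcp_keepidle = 600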
[glance_store]  
default_store = file (String) The default scheme to use for storing images.$sentinal$Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store.$sentinal$NOTE: The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option.$sentinal$Possible values: * file * filesystem * http * https * swift * swift+http * swift+https * swift+config * rbd * sheepdog * cinder * vsphere$sentinal$Related Options: * stores
store_capabilities_update_min_interval = 0 (Integer) Minimum interval in seconds to execute updating dynamic storage capabilities based on current backend status.$sentinal$Provide an integer value representing time in seconds to set the minimum interval before an update of dynamic storage capabilities for a storage backend can be attempted. Setting store_capabilities_update_min_interval does not mean updates occur periodically based on the set interval. Rather, the update is performed only after this interval has elapsed, when an operation of the store is triggered.$sentinal$By default, this option is set to zero and is disabled. Provide an integer value greater than zero to enable this option.$sentinal$NOTE: For more information on store capabilities and their updates, please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo/store-capabilities.html$sentinal$For more information on setting up a particular store in your deployment and help with the usage of this feature, please contact the storage driver maintainers listed here: http://docs.openstack.org/developer/glance_store/drivers/index.html$sentinal$Possible values: * Zero * Positive integer$sentinal$Related Options: * None
stores = file, http (List) List of enabled Glance stores.$sentinal$Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http.$sentinal$Possible values: * A comma separated list that could include: * file * http * swift * rbd * sheepdog * cinder * vmware$sentinal$Related Options: * default_store
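As a minimal sketch of how stores and default_store work together (backend choice is illustrative), a deployment enabling the file and http stores and defaulting to file could use:

[glance_store]
# Enabled backends; default_store must use a scheme registered here.
stores = file, http
default_store = file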
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[paste_deploy]  
config_file = glance-api-paste.ini (String) Name of the paste configuration file.$sentinal$Provide a string value representing the name of the paste configuration file to use for configuring pipelines for server application deployments.$sentinal$NOTES: * Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. * The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors.$sentinal$If no value is specified for this option, the paste.ini file with the prefix of the corresponding Glance service’s configuration file name will be searched for in the known configuration directories. (For example, if this option is missing from or has no value set in glance-api.conf, the service will look for a file named glance-api-paste.ini.) If the paste configuration file is not found, the service will not start.$sentinal$Possible values: * A string value representing the name of the paste configuration file.$sentinal$Related Options: * flavor
flavor = keystone (String) Deployment flavor to use in the server application pipeline.$sentinal$Provide a string value representing the appropriate deployment flavor used in the server application pipeline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed.$sentinal$For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set flavor to keystone.$sentinal$Possible values: * String value representing a partial pipeline name.$sentinal$Related Options: * config_file
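Following the example in the option description, selecting the [pipeline:glance-api-keystone] pipeline from the default paste file looks like:

[paste_deploy]
# Partial pipeline name: [pipeline:glance-api-keystone] minus the service name.
flavor = keystone
# Optional; glance-api.conf defaults to glance-api-paste.ini anyway.
config_file = glance-api-paste.ini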
[store_type_location_strategy]  
store_type_preference = (List) Preference order of storage backends.$sentinal$Provide a comma separated list of store names in the order in which images should be retrieved from storage backends. These store names must be registered with the stores configuration option.$sentinal$NOTE: The store_type_preference configuration option is applied only if store_type is chosen as a value for the location_strategy configuration option. An empty list will not change the location order.$sentinal$Possible values: * Empty list * Comma separated list of registered store names. Legal values are: * file * http * rbd * swift * sheepdog * cinder * vmware$sentinal$Related options: * location_strategy * stores
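To tie this back to location_strategy in [DEFAULT], a sketch (the backend order is illustrative) of preferring RBD locations over filesystem locations could be:

[DEFAULT]
# Order image locations by backend type instead of stored order.
location_strategy = store_type

[store_type_location_strategy]
# Serve image data from rbd locations first, then file locations.
store_type_preference = rbd, file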
Description of CA and SSL configuration options
Configuration option = Default value Description
[DEFAULT]  
ca_file = /etc/ssl/cafile (String) Absolute path to the CA file.$sentinal$Provide a string value representing a valid absolute path to the Certificate Authority file to use for client authentication.$sentinal$A CA file typically contains necessary trusted certificates to use for the client authentication. This is essential to ensure that a secure connection is established to the server via the internet.$sentinal$Possible values: * Valid absolute path to the CA file$sentinal$Related options: * None
cert_file = /etc/ssl/certs (String) Absolute path to the certificate file.$sentinal$Provide a string value representing a valid absolute path to the certificate file which is required to start the API service securely.$sentinal$A certificate file typically is a public key container and includes the server’s public key, server name, server information and the signature which was a result of the verification process using the CA certificate. This is required for a secure connection establishment.$sentinal$Possible values: * Valid absolute path to the certificate file$sentinal$Related options: * None
key_file = /etc/ssl/key/key-file.pem (String) Absolute path to a private key file.$sentinal$Provide a string value representing a valid absolute path to a private key file which is required to establish the client-server connection.$sentinal$Possible values: * Absolute path to the private key file$sentinal$Related options: * None
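For example, enabling TLS for the API service with all three options set might look like the following (the paths are illustrative; substitute the files issued for your deployment):

[DEFAULT]
# CA bundle used for client authentication.
ca_file = /etc/ssl/cafile/ca.pem
# Server certificate and its private key.
cert_file = /etc/ssl/certs/glance-api.pem
key_file = /etc/ssl/key/key-file.pem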

Configure back ends

The Image service supports several back ends for storing virtual machine images:

  • Block Storage service (cinder)
  • A directory on a local file system
  • HTTP
  • Ceph RBD
  • Sheepdog
  • Object Storage service (swift)
  • VMware ESX

Note

You must use only raw image formats with the Ceph RBD back end.

The following tables detail the options available for each.

Description of cinder configuration options
Configuration option = Default value Description
[glance_store]  
cinder_api_insecure = False (Boolean) Allow insecure SSL requests to cinder.$sentinal$If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by the cinder_ca_certificates_file option.$sentinal$Possible values: * True * False$sentinal$Related options: * cinder_ca_certificates_file
cinder_ca_certificates_file = None (String) Location of a CA certificates file used for cinder client requests.$sentinal$The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification.$sentinal$Possible values: * Path to a ca certificates file$sentinal$Related options: * cinder_api_insecure
cinder_catalog_info = volumev2::publicURL (String) Information to match when looking for cinder in the service catalog.$sentinal$When the cinder_endpoint_template is not set and any of cinder_store_auth_address, cinder_store_user_name, cinder_store_project_name, cinder_store_password is not set, cinder store uses this information to look up the cinder endpoint from the service catalog in the current context. cinder_os_region_name, if set, is taken into consideration to fetch the appropriate endpoint.$sentinal$The service catalog can be listed by the openstack catalog list command.$sentinal$Possible values: * A string of the following form: <service_type>:<service_name>:<endpoint_type> At least service_type and endpoint_type should be specified. service_name can be omitted.$sentinal$Related options: * cinder_os_region_name * cinder_endpoint_template * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password
cinder_endpoint_template = None (String) Override service catalog lookup with template for cinder endpoint.$sentinal$When this option is set, this value is used to generate cinder endpoint, instead of looking up from the service catalog. This value is ignored if cinder_store_auth_address, cinder_store_user_name, cinder_store_project_name, and cinder_store_password are specified.$sentinal$If this configuration option is set, cinder_catalog_info will be ignored.$sentinal$Possible values: * URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s$sentinal$ Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password * cinder_catalog_info
cinder_http_retries = 3 (Integer) Number of cinderclient retries on failed http calls.$sentinal$When a call failed by any errors, cinderclient will retry the call up to the specified times after sleeping a few seconds.$sentinal$Possible values: * A positive integer$sentinal$Related options: * None
cinder_os_region_name = None (String) Region name to lookup cinder service from the service catalog.$sentinal$This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region.$sentinal$Possible values: * A string that is a valid region name.$sentinal$Related options: * cinder_catalog_info
cinder_state_transition_timeout = 300 (Integer) Time period, in seconds, to wait for a cinder volume transition to complete.$sentinal$When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume’s state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status is changed to an unexpected value (e.g. error), the image creation fails.$sentinal$Possible values: * A positive integer$sentinal$Related options: * None
cinder_store_auth_address = None (String) The address where the cinder authentication service is listening.$sentinal$When all of cinder_store_auth_address, cinder_store_user_name, cinder_store_project_name, and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance’s ACL.$sentinal$If either of these options are not set, the cinder endpoint is looked up from the service catalog, and current context’s user and project are used.$sentinal$Possible values: * A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 $sentinal$ Related options: * cinder_store_user_name * cinder_store_password * cinder_store_project_name
cinder_store_password = None (String) Password for the user authenticating against cinder.$sentinal$This must be used with all the following related options. If any of these are not specified, the user of the current context is used.$sentinal$Possible values: * A valid password for the user specified by cinder_store_user_name$sentinal$ Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name
cinder_store_project_name = None (String) Project name where the image volume is stored in cinder.$sentinal$If this configuration option is not set, the project in current context is used.$sentinal$This must be used with all the following related options. If any of these are not specified, the project of the current context is used.$sentinal$Possible values: * A valid project name$sentinal$Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_password
cinder_store_user_name = None (String) User name to authenticate against cinder.$sentinal$This must be used with all the following related options. If any of these are not specified, the user of the current context is used.$sentinal$Possible values: * A valid user name$sentinal$Related options: * cinder_store_auth_address * cinder_store_password * cinder_store_project_name
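As the option descriptions note, the four cinder_store_* options are all-or-nothing. A hedged sketch of hiding image volumes in a dedicated service project (the account names, password placeholder, and URL are illustrative, not defaults) might be:

[glance_store]
default_store = cinder
# All four options below must be set together; otherwise the current
# request context's credentials and the service catalog are used.
cinder_store_auth_address = http://openstack.example.org/identity/v2.0
cinder_store_user_name = glance
cinder_store_password = GLANCE_PASS
cinder_store_project_name = service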
Description of filesystem configuration options
Configuration option = Default value Description
[glance_store]  
filesystem_store_datadir = /var/lib/glance/images (String) Directory to which the filesystem backend store writes images.$sentinal$Upon start up, Glance creates the directory if it doesn’t already exist and verifies write access to the user under which glance-api runs. If the write access isn’t available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images.$sentinal$NOTE: This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf. If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images.$sentinal$Possible values: * A valid path to a directory$sentinal$Related options: * filesystem_store_datadirs * filesystem_store_file_perm
filesystem_store_datadirs = None (Multi-valued) List of directories and their priorities to which the filesystem backend store writes images.$sentinal$The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero.$sentinal$More information on configuring filesystem store with multiple store directories can be found at http://docs.openstack.org/developer/glance/configuring.html$sentinal$NOTE: This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf. If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images.$sentinal$Possible values: * List of strings of the following form: * <a valid directory path>:<optional integer priority>$sentinal$Related options: * filesystem_store_datadir * filesystem_store_file_perm
filesystem_store_file_perm = 0 (Integer) File access permissions for the image files.$sentinal$Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that need access can be made members of the group that owns the created files. Assigning a value less than or equal to zero for this configuration option signifies that no changes be made to the default permissions. This value will be decoded as an octal digit.$sentinal$For more information, please refer to the documentation at http://docs.openstack.org/developer/glance/configuring.html$sentinal$Possible values: * A valid file access permission * Zero * Any negative integer$sentinal$Related options: * None
filesystem_store_metadata_file = None (String) Filesystem store metadata file.$sentinal$The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint. The value for both keys should be a string.$sentinal$Possible values: * A valid path to the store metadata file$sentinal$Related options: * None
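Because filesystem_store_datadirs is multi-valued, it is repeated once per directory. A sketch with two prioritized directories (paths and priorities are illustrative):

[glance_store]
# Higher priority wins; on a tie, the directory with the most free
# space is used. An omitted priority defaults to zero.
filesystem_store_datadirs = /mnt/ssd/images:200
filesystem_store_datadirs = /mnt/hdd/images:100
# Octal 640: owner read/write, group read. A value <= 0 leaves the
# default permissions untouched.
filesystem_store_file_perm = 640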
Description of HTTP configuration options
Configuration option = Default value Description
[glance_store]  
http_proxy_information = {} (Dict) The http/https proxy information to be used to connect to the remote server.$sentinal$This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080.$sentinal$Possible values: * A comma separated list of scheme:proxy pairs as described above$sentinal$Related options: * None
https_ca_certificates_file = None (String) Path to the CA bundle file.$sentinal$This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server.$sentinal$Possible values: * A valid path to a CA file$sentinal$Related options: * https_insecure
https_insecure = True (Boolean) Set verification of the remote server certificate.$sentinal$This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification.$sentinal$This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option.$sentinal$Possible values: * True * False$sentinal$Related options: * https_ca_certificates_file
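Using the formats described above, an http store fragment with per-scheme proxies and a custom CA bundle (proxy addresses and the CA path are illustrative) might look like:

[glance_store]
# scheme:proxy pairs, comma separated.
http_proxy_information = http:10.0.0.1:3128, https:10.0.0.1:1080
# When set, this CA file is used for verification and https_insecure
# is ignored.
https_ca_certificates_file = /etc/ssl/certs/remote-ca.pem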
Description of RADOS Block Devices (RBD) configuration options
Configuration option = Default value Description
[glance_store]  
rados_connect_timeout = 0 (Integer) Timeout value for connecting to Ceph cluster.$sentinal$This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used.$sentinal$Possible Values: * Any integer value$sentinal$Related options: * None
rbd_store_ceph_conf = /etc/ceph/ceph.conf (String) Ceph configuration file path.$sentinal$This configuration option takes in the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to None, librados will locate the default configuration file which is located at /etc/ceph/ceph.conf. If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section.$sentinal$Possible Values: * A valid path to a configuration file$sentinal$Related options: * rbd_store_user
rbd_store_chunk_size = 8 (Integer) Size, in megabytes, to chunk RADOS images into.$sentinal$Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two.$sentinal$When Ceph’s RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store to use for Glance.$sentinal$Possible Values: * Any positive integer value$sentinal$Related options: * None
rbd_store_pool = images (String) RADOS pool in which images are stored.$sentinal$When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool. Each pool is defined with the number of placement groups it can contain. The default pool that is used is ‘images’.$sentinal$More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/$sentinal$Possible Values: * A valid pool name$sentinal$Related options: * None
rbd_store_user = None (String) RADOS user to authenticate as.$sentinal$This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client section in rbd_store_ceph_conf.$sentinal$Possible Values: * A valid RADOS user$sentinal$Related options: * rbd_store_ceph_conf
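Putting the RBD options together, a sketch of a Ceph-backed deployment with Cephx authentication (the user name is illustrative; the pool, chunk size, and ceph.conf path shown are the documented defaults) could be:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# Cephx user; its keyring must be referenced from the matching
# client section of the Ceph configuration file.
rbd_store_user = glance
rbd_store_pool = images
# 8 MB chunks; a power of two is recommended for performance.
rbd_store_chunk_size = 8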
Description of Sheepdog configuration options
Configuration option = Default value Description
[glance_store]  
sheepdog_store_address = 127.0.0.1 (String) Address to bind the Sheepdog daemon to.$sentinal$Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the ‘sheep’ is 127.0.0.1.$sentinal$The Sheepdog daemon, also called ‘sheep’, manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images.$sentinal$Possible values: * A valid IPv4 address * A valid IPv6 address * A valid hostname$sentinal$Related Options: * sheepdog_store_port
sheepdog_store_chunk_size = 64 (Integer) Chunk size for images to be stored in Sheepdog data store.$sentinal$Provide an integer value representing the size in mebibytes (1 mebibyte = 1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes.$sentinal$When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance.$sentinal$Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance.$sentinal$Possible values: * Positive integer value representing size in mebibytes.$sentinal$Related Options: * None
sheepdog_store_port = 7000 (Port number) Port number on which the sheep daemon will listen.$sentinal$Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen on. The default port is 7000.$sentinal$The Sheepdog daemon, also called ‘sheep’, manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images.$sentinal$Possible values: * A valid port number (0 to 65535)$sentinal$Related Options: * sheepdog_store_address
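As a sketch, the three Sheepdog options above could be combined in glance-api.conf as follows; the address and port shown are the documented defaults.

```ini
[glance_store]
stores = sheepdog
default_store = sheepdog
# Address and port the 'sheep' daemon listens on (defaults shown)
sheepdog_store_address = 127.0.0.1
sheepdog_store_port = 7000
# Chunk images into 64 MiB objects; powers of two help avoid fragmentation
sheepdog_store_chunk_size = 64
```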
Description of swift configuration options
Configuration option = Default value Description
[DEFAULT]  
default_swift_reference = ref1 (String) Reference to default Swift account/backing store parameters.$sentinal$Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ‘ref1’. This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added.$sentinal$Possible values: * A valid string value$sentinal$Related options: * None
swift_store_auth_address = None (String) The address where the Swift authentication service is listening.
swift_store_config_file = None (String) File containing the swift account(s) configurations.$sentinal$Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it helps avoid storage of credentials in the database.$sentinal$Possible values: * None * String value representing a valid configuration file path$sentinal$Related options: * None
swift_store_key = None (String) Auth key for the user authenticating against the Swift authentication service.
swift_store_user = None (String) The user to authenticate against the Swift authentication service.
[glance_store]  
default_swift_reference = ref1 (String) Reference to default Swift account/backing store parameters.$sentinal$Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ‘ref1’. This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added.$sentinal$Possible values: * A valid string value$sentinal$Related options: * None
swift_store_admin_tenants = (List) List of tenants that will be granted admin access.$sentinal$This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list.$sentinal$Possible values: * A comma separated list of strings representing UUIDs of Keystone projects/tenants$sentinal$Related options: * None
swift_store_auth_address = None (String) DEPRECATED: The address where the Swift authentication service is listening. The option ‘auth_address’ in the Swift back-end configuration file is used instead.
swift_store_auth_insecure = False (Boolean) Set verification of the server certificate.$sentinal$This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won’t check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification.$sentinal$Possible values: * True * False$sentinal$Related options: * swift_store_cacert
swift_store_auth_version = 2 (String) DEPRECATED: Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. The option ‘auth_version’ in the Swift back-end configuration file is used instead.
swift_store_cacert = /etc/ssl/certs/ca-certificates.crt (String) Path to the CA bundle file.$sentinal$This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift.$sentinal$Possible values: * A valid path to a CA file$sentinal$Related options: * swift_store_auth_insecure
swift_store_config_file = None (String) Absolute path to the file containing the swift account(s) configurations.$sentinal$Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database.$sentinal$Possible values: * String value representing an absolute path on the glance-api node$sentinal$Related options: * None
swift_store_container = glance (String) Name of single container to store images/name prefix for multiple containers$sentinal$When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed.$sentinal$When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed).$sentinal$Example: if the seed is set to 3 and swift_store_container = glance, then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda. All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba.$sentinal$Possible values: * If using single container, this configuration option can be any string that is a valid swift container name in Glance's Swift account * If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well.$sentinal$Related options: * swift_store_multiple_containers_seed * swift_store_multi_tenant * swift_store_create_container_on_put
swift_store_create_container_on_put = False (Boolean) Create container, if it doesn’t already exist, when uploading image.$sentinal$At the time of uploading an image, if the corresponding container doesn’t exist, it will be created provided this configuration option is set to True. By default, it won’t be created. This behavior is applicable for both single and multiple containers mode.$sentinal$Possible values: * True * False$sentinal$Related options: * None
swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name (String) The URL endpoint to use for Swift backend storage.$sentinal$Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage.$sentinal$NOTE: The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL.$sentinal$Possible values: * String value representing a valid URL path up to a Swift container$sentinal$Related Options: * None
swift_store_endpoint_type = publicURL (String) Endpoint Type of Swift service.$sentinal$This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1.$sentinal$Possible values: * publicURL * adminURL * internalURL$sentinal$Related options: * swift_store_endpoint
swift_store_expire_soon_interval = 60 (Integer) Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire.$sentinal$Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly.$sentinal$Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or come close to expiry while a transaction is attempted. By default, the Swift storage driver requests a new token 60 seconds or less before the current token expiration.$sentinal$Possible values: * Zero * Positive integer value$sentinal$Related Options: * None
swift_store_key = None (String) DEPRECATED: Auth key for the user authenticating against the Swift authentication service. The option ‘key’ in the Swift back-end configuration file is used to set the authentication key instead.
swift_store_large_object_chunk_size = 200 (Integer) The maximum size, in MB, of the segments when image data is segmented.$sentinal$When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail.$sentinal$For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB.$sentinal$Possible values: * A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration.$sentinal$Related options: * swift_store_large_object_size
swift_store_large_object_size = 5120 (Integer) The size threshold, in MB, after which Glance will start segmenting image data.$sentinal$Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to http://docs.openstack.org/developer/swift/overview_large_objects.html$sentinal$This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects.$sentinal$NOTE: This should be set by taking into account the large object limit enforced by the Swift cluster in consideration.$sentinal$Possible values: * A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration.$sentinal$Related options: * swift_store_large_object_chunk_size
swift_store_multi_tenant = False (Boolean) Store images in tenant’s Swift account.$sentinal$This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details on the multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage$sentinal$Possible values: * True * False$sentinal$Related options: * None
swift_store_multiple_containers_seed = 0 (Integer) Seed indicating the number of containers to use for storing images.$sentinal$When using a single-tenant store, images can be stored in one or more containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images.$sentinal$Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html$sentinal$NOTE: This is used only when swift_store_multi_tenant is disabled.$sentinal$Possible values: * A non-negative integer less than or equal to 32$sentinal$Related options: * swift_store_container * swift_store_multi_tenant * swift_store_create_container_on_put
swift_store_region = RegionTwo (String) The region of Swift endpoint to use by Glance.$sentinal$Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set.$sentinal$When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity.$sentinal$This option can be configured for both single-tenant and multi-tenant storage.$sentinal$NOTE: Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions.$sentinal$Possible values: * A string value representing a valid Swift region.$sentinal$Related Options: * None
swift_store_retry_get_count = 0 (Integer) The number of times a Swift download will be retried before the request fails.$sentinal$Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message.$sentinal$Possible values: * Zero * Positive integer value$sentinal$Related Options: * None
swift_store_service_type = object-store (String) Type of Swift service to use.$sentinal$Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store.$sentinal$NOTE: If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store. If using a higher version of Keystone or a different auth scheme, this option may be modified.$sentinal$Possible values: * A string representing a valid service type for Swift storage.$sentinal$Related Options: * None
swift_store_ssl_compression = True (Boolean) SSL layer compression for HTTPS Swift requests.$sentinal$Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled.$sentinal$When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2.$sentinal$Possible values: * True * False$sentinal$Related Options: * None
swift_store_use_trusts = True (Boolean) Use trusts for multi-tenant Swift store.$sentinal$This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data.$sentinal$By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False, a user token is used for the Swift connection instead, eliminating the overhead of trust creation.$sentinal$NOTE: This option is considered only when swift_store_multi_tenant is set to True.$sentinal$Possible values: * True * False$sentinal$Related options: * swift_store_multi_tenant
swift_store_user = None (String) DEPRECATED: The user to authenticate against the Swift authentication service. The option ‘user’ in the Swift back-end configuration file is set instead.
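As the deprecation notes above indicate, per-account credentials now live in a separate Swift back-end configuration file rather than in glance-api.conf. The following sketch pairs a glance-api.conf fragment with such a references file; the auth address, user, and key are hypothetical placeholders, not defaults.

```ini
# glance-api.conf (fragment)
[glance_store]
stores = swift
default_store = swift
default_swift_reference = ref1
# Keep credentials out of the database by referencing them from a file
swift_store_config_file = /etc/glance/glance-swift-store.conf
swift_store_create_container_on_put = True

# /etc/glance/glance-swift-store.conf (hypothetical credentials)
[ref1]
auth_version = 2
auth_address = http://controller:5000/v2.0
user = service:glance
key = GLANCE_SWIFT_PASS
```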
Configure vCenter data stores for the Image service back end

To use vCenter data stores for the Image service back end, you must update the glance-api.conf file, as follows:

  • Add data store parameters to the VMware Datastore Store Options section.

  • Specify vSphere as the back end.

    Note

    Any data store that you configure for the Image service must also be configured for the Compute service.

You can specify vCenter data stores directly by using the data store name or Storage Policy Based Management (SPBM), which requires vCenter Server 5.5 or later. For details, see Configure vCenter data stores for the back end.

Note

If you intend to use multiple data stores for the back end, use the SPBM feature.

In the glance_store section, set the stores and default_store options to vsphere, as shown in this code sample:

[glance_store]
# List of stores enabled. Valid stores are: cinder, file, http, rbd,
# sheepdog, swift, vsphere (list value)
stores = file,http,vsphere
# Which back end scheme should Glance use by default if one is not specified
# in a request to add a new image to Glance? Known schemes are determined
# by the stores option above.
# Default: 'file'
default_store = vsphere

The following table describes the parameters in the VMware Datastore Store Options section:

Description of VMware configuration options
Configuration option = Default value Description
[glance_store]  
vmware_api_retry_count = 10 (Integer) The number of VMware API retries.$sentinal$This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify ‘retry forever’.$sentinal$Possible Values: * Any positive integer value$sentinal$Related options: * None
vmware_ca_file = /etc/ssl/certs/ca-certificates.crt (String) Absolute path to the CA bundle file.$sentinal$This configuration option enables the operator to use a custom Certificate Authority file to verify the ESX/vCenter certificate.$sentinal$If this option is set, the “vmware_insecure” option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server.$sentinal$Possible Values: * Any string that is a valid absolute path to a CA file$sentinal$Related options: * vmware_insecure
vmware_datastores = None (Multi-valued) The datastores where the image can be stored.$sentinal$This configuration option specifies the datastores where the image can be stored in the VMware store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ':'. An optional weight may be given after the datastore name, separated again by ':' to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>.$sentinal$When adding an image, the datastore with the highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected.$sentinal$Possible Values: * Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight>$sentinal$Related options: * None
vmware_insecure = False (Boolean) Set verification of the ESX/vCenter server certificate.$sentinal$This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification.$sentinal$This option is ignored if the “vmware_ca_file” option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the “vmware_ca_file” option.$sentinal$Possible Values: * True * False$sentinal$Related options: * vmware_ca_file
vmware_server_host = 127.0.0.1 (String) Address of the ESX/ESXi or vCenter Server target system.$sentinal$This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com).$sentinal$Possible Values: * A valid IPv4 or IPv6 address * A valid DNS name$sentinal$Related options: * vmware_server_username * vmware_server_password
vmware_server_password = vmware (String) Server password.$sentinal$This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend.$sentinal$Possible Values: * Any string that is a password corresponding to the username specified using the “vmware_server_username” option$sentinal$Related options: * vmware_server_host * vmware_server_username
vmware_server_username = root (String) Server username.$sentinal$This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend.$sentinal$Possible Values: * Any string that is the username for a user with appropriate privileges$sentinal$Related options: * vmware_server_host * vmware_server_password
vmware_store_image_dir = /openstack_glance (String) The directory where the glance images will be stored in the datastore.$sentinal$This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance.$sentinal$Possible Values: * Any string that is a valid path to a directory$sentinal$Related options: * None
vmware_task_poll_interval = 5 (Integer) Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server.$sentinal$This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call.$sentinal$Possible Values: * Any positive integer value$sentinal$Related options: * None

The following block of text shows a sample configuration:

# ============ VMware Datastore Store Options =====================
# ESX/ESXi or vCenter Server target system.
# The server value can be an IP address or a DNS name
# e.g. 127.0.0.1, 127.0.0.1:443, www.vmware-infra.com
vmware_server_host = 192.168.0.10

# Server username (string value)
vmware_server_username = ADMINISTRATOR

# Server password (string value)
vmware_server_password = password

# Inventory path to a datacenter (string value)
# Value optional when vmware_server_host is an ESX/ESXi host: if specified
# it should be `ha-datacenter`.
vmware_datacenter_path = DATACENTER

# Datastore associated with the datacenter (string value)
vmware_datastore_name = datastore1

# PBM service WSDL file location URL. e.g.
# file:///opt/SDK/spbm/wsdl/pbmService.wsdl Not setting this
# will disable storage policy based placement of images.
# (string value)
#vmware_pbm_wsdl_location =

# The PBM policy. If `pbm_wsdl_location` is set, a PBM policy needs
# to be specified. This policy will be used to select the datastore
# in which the images will be stored.
#vmware_pbm_policy =

# The interval used for polling remote tasks
# invoked on VMware ESX/VC server in seconds (integer value)
vmware_task_poll_interval = 5

# Absolute path of the folder containing the images in the datastore
# (string value)
vmware_store_image_dir = /openstack_glance

# Allow to perform insecure SSL requests to the target system (boolean value)
vmware_api_insecure = False
Configure vCenter data stores for the back end

You can specify a vCenter data store for the back end by setting the vmware_datastore_name parameter value to the vCenter name of the data store. This configuration limits the back end to a single data store.

If present, comment or delete the vmware_pbm_wsdl_location and vmware_pbm_policy parameters.

Uncomment and define the vmware_datastore_name parameter with the name of the vCenter data store.

Complete the other vCenter configuration parameters as appropriate.
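Put together, the steps above amount to a glance_store fragment like the following, with the SPBM options left commented out and the data store named directly; the host, credentials, datacenter path, and data store name are the illustrative values from the sample above.

```ini
[glance_store]
stores = file,http,vsphere
default_store = vsphere
# vCenter connection details (illustrative values)
vmware_server_host = 192.168.0.10
vmware_server_username = ADMINISTRATOR
vmware_server_password = password
vmware_datacenter_path = DATACENTER
# Name the single vCenter data store to use for images
vmware_datastore_name = datastore1
# Leave the SPBM options unset to disable policy-based placement
#vmware_pbm_wsdl_location =
#vmware_pbm_policy =
```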

Additional configuration options for Image service

You can modify many options in the Image service. The following tables provide a comprehensive list.

Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
allow_additional_image_properties = True (Boolean) Allow users to add additional/custom properties to images.$sentinal$Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties. In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties.$sentinal$By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option.$sentinal$Possible values: * True * False$sentinal$Related options: * image_property_quota
api_limit_max = 1000 (Integer) Maximum number of results that could be returned by a request.$sentinal$As described in the help text of limit_param_default, some requests may return multiple results. The number of results to be returned is governed either by the limit parameter in the request or by the limit_param_default configuration option. The value, in either case, can’t be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here.$sentinal$NOTE: Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience.$sentinal$Possible values: * Any positive integer$sentinal$Related options: * limit_param_default
backlog = 4096 (Integer) Set the number of incoming connection requests.$sentinal$Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096.$sentinal$An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic.$sentinal$Possible values: * Positive integer$sentinal$Related options: * None
bind_host = 0.0.0.0 (String) IP address to bind the glance servers to.$sentinal$Provide an IP address to bind the glance server to. The default value is 0.0.0.0.$sentinal$Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server.$sentinal$Possible values: * A valid IPv4 address * A valid IPv6 address$sentinal$Related options: * None
bind_port = None (Port number) Port number on which the server will listen.$sentinal$Provide a valid port number to bind the server’s socket to. This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191.$sentinal$Possible values: * A valid port number (0 to 65535)$sentinal$Related options: * None
data_api = glance.db.sqlalchemy.api (String) Python module path of data access API.$sentinal$Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed.$sentinal$Possible values: * glance.db.sqlalchemy.api * glance.db.registry.api * glance.db.simple.api$sentinal$If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs.$sentinal$Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability.$sentinal$NOTE: In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True.$sentinal$Finally, when this configuration option is set to glance.db.simple.api, image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing.$sentinal$Related options: * enable_v2_api * enable_v2_registry
digest_algorithm = sha256 (String) Digest algorithm to use for digital signature.$sentinal$Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used.$sentinal$To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms. Examples are ‘sha1’, ‘sha256’, and ‘sha512’.$sentinal$NOTE: digest_algorithm is not related to Glance’s image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation.$sentinal$Possible values: * An OpenSSL message digest algorithm identifier$sentinal$Related options: * None
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
image_location_quota = 10 (Integer) Maximum number of locations allowed on an image. Any negative value is interpreted as unlimited. Related options: * None
image_member_quota = 128 (Integer) Maximum number of image members per image. This limits the maximum number of users an image can be shared with. Any negative value is interpreted as unlimited. Related options: * None
image_property_quota = 128 (Integer) Maximum number of properties allowed on an image. This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited. NOTE: This won’t have any impact if additional properties are disabled. Please refer to allow_additional_image_properties. Related options: * allow_additional_image_properties
image_tag_quota = 128 (Integer) Maximum number of tags allowed on an image. Any negative value is interpreted as unlimited. Related options: * None
limit_param_default = 25 (Integer) The default number of results to return for a request. Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request. NOTES: * The value of this configuration option may not be greater than the value specified by api_limit_max. * Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience. Possible values: * Any positive integer Related options: * api_limit_max
metadata_encryption_key = None (String) AES key for encrypting store location metadata. Provide a string value representing the AES cipher to use for encrypting Glance store metadata. NOTE: The AES key to use must be set to a random string of length 16, 24 or 32 bytes. Possible values: * String value representing a valid AES key Related options: * None
metadata_source_path = /etc/glance/metadefs/ (String) Absolute path to the directory where JSON metadefs files are stored. Glance Metadata Definitions (“metadefs”) are served from the database, but are stored in files in the JSON format. The files in this directory are used to initialize the metadefs in the database. Additionally, when metadefs are exported from the database, the files are written to this directory. NOTE: If you plan to export metadefs, make sure that this directory has write permissions set for the user being used to run the glance-api service. Possible values: * String value representing a valid absolute pathname Related options: * None
property_protection_file = None (String) The location of the property protection file. Provide a valid path to the property protection file which contains the rules for property protections and the roles/policies associated with them. A property protection file, when set, restricts the Glance image properties to be created, read, updated and/or deleted by a specific set of users that are identified by either roles or policies. If this configuration option is not set, by default, property protections won’t be enforced. If a value is specified and the file is not found, the glance-api service will fail to start. More information on property protections can be found at: http://docs.openstack.org/developer/glance/property-protections.html Possible values: * Empty string * Valid path to the property protection configuration file Related options: * property_protection_rule_format
property_protection_rule_format = roles (String) Rule format for property protection. Provide the desired way to set property protection on Glance image properties. The two permissible values are roles and policies. The default value is roles. If the value is roles, the property protection file must contain a comma separated list of user roles indicating permissions for each of the CRUD operations on each property being protected. If set to policies, a policy defined in policy.json is used to express property protections for each of the CRUD operations. Examples of how property protections are enforced based on roles or policies can be found at: http://docs.openstack.org/developer/glance/property-protections.html#examples Possible values: * roles * policies Related options: * property_protection_file
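In the roles format, each section of the property protection file names a property (or a regular expression matching properties) and lists the roles allowed to perform each CRUD operation. A minimal illustrative fragment (the property name and roles here are examples, not defaults) might look like:

```ini
# Protect any property starting with x_owner_; only admins may
# change or delete it, while members may create and read it.
[^x_owner_.*]
create = admin,member
read = admin,member
update = admin
delete = admin
```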
show_image_direct_url = False (Boolean) Show direct image location when returning an image. This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url. When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy. NOTES: * Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! * If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False. Possible values: * True * False Related options: * show_multiple_locations * location_strategy
user_storage_quota = 0 (String) Maximum amount of image storage per tenant. This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit. The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B, KB, MB, GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors. Possible values: * A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above. Related options: * None
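The quota string rules above (optional case-sensitive unit suffix, no space, 0 meaning unlimited) can be sketched as a small parser. This is an illustrative helper, not Glance's implementation, and it assumes binary (1024-based) multipliers for the unit literals:

```python
import re

# Assumed binary multipliers for the case-sensitive unit literals.
_UNITS = {'B': 1, 'KB': 1024, 'MB': 1024 ** 2,
          'GB': 1024 ** 3, 'TB': 1024 ** 4}

def parse_storage_quota(value):
    """Return the quota in bytes; 0 signifies no quota enforcement."""
    match = re.fullmatch(r'(\d+)(B|KB|MB|GB|TB)?', value)
    if match is None:
        raise ValueError('invalid quota string: %r' % value)
    number, unit = match.groups()
    return int(number) * _UNITS[unit or 'B']
```

For example, parse_storage_quota('10GB') yields 10 * 1024 ** 3 bytes, while a value with a space or a negative sign is rejected.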
workers = None (Integer) Number of Glance worker processes to start. Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for workers. Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000. NOTE: Setting the number of workers to zero triggers the creation of a single API process with a greenthread pool of size 1000. Possible values: * 0 * Positive integer value (typically equal to the number of CPUs) Related options: * None
[glance_store]  
rootwrap_config = /etc/glance/rootwrap.conf (String) Path to the rootwrap configuration file to use for running commands as root. The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library. Possible values: * Path to the rootwrap config file Related options: * None
[image_format]  
container_formats = ami, ari, aki, bare, ovf, ova, docker (List) Supported values for the ‘container_format’ image attribute
disk_formats = ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso (List) Supported values for the ‘disk_format’ image attribute
[task]  
task_executor = taskflow (String) Task executor to be used to run task scripts. Provide a string value representing the executor to use for task executions. By default, the TaskFlow executor is used. TaskFlow helps make task executions easy, consistent, scalable and reliable. It also enables creation of lightweight task objects and/or functions that are combined together into flows in a declarative manner. Possible values: * taskflow Related options: * None
task_time_to_live = 48 (Integer) Time in hours for which a task lives after either succeeding or failing.
work_dir = /work_dir (String) Absolute path to the work directory to use for asynchronous task operations. The directory set here will be used to operate over images - normally before they are imported in the destination store. NOTE: When providing a value for work_dir, please make sure that enough space is provided for concurrent tasks to run efficiently without running out of space. A rough estimation can be done by multiplying the number of max_workers with an average image size (e.g., 500 MB). The image size estimation should be done based on the average size in your deployment. Note that depending on the tasks running you may need to multiply this number by some factor depending on what the task does. For example, you may want to double the available size if image conversion is enabled. All this being said, remember these are just estimations; base them on the worst case scenario and be prepared to act in case they were wrong. Possible values: * String value representing the absolute path to the working directory Related options: * None
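The sizing estimate described above is simple arithmetic; the numbers in this sketch are illustrative assumptions for one deployment, not defaults:

```python
# Back-of-the-envelope sizing for work_dir.
max_workers = 10                    # [taskflow_executor] max_workers
avg_image_size = 500 * 1024 ** 2    # assumed average image size: 500 MB
conversion_factor = 2               # double the space if conversion is enabled

required_bytes = max_workers * avg_image_size * conversion_factor
print(required_bytes // 1024 ** 3, 'GiB')  # -> 9 GiB
```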
Description of flagmappings configuration options
Configuration option = Default value Description
[DEFAULT]  
delayed_delete = False (Boolean) Turn on/off delayed delete. Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data. NOTE: When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage. Possible values: * True * False Related options: * scrub_time * wakeup_time * scrub_pool_size
image_cache_dir = None (String) Base directory for image cache. This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete, invalid and queue. The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory. The queue subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in the queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it’ll be retried the next time cache-prefetcher runs. Possible values: * A valid path Related options: * image_cache_sqlite_db
image_cache_driver = sqlite (String) The driver to use for image cache management. This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc. The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver. All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr. These drivers primarily differ in the way they store the information about cached images: * The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. * The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed. Possible values: * sqlite * xattr Related options: * None
image_cache_max_size = 10737418240 (Integer) The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache. NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit. Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here. Possible values: * Any non-negative integer Related options: * None
image_cache_sqlite_db = cache.db (String) The relative path to sqlite file database that will be used for image cache management. This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir. This is a lightweight database with just one table. Possible values: * A valid relative path to sqlite file database Related options: * image_cache_dir
image_cache_stall_time = 86400 (Integer) The amount of time, in seconds, an incomplete image remains in the cache. Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it’ll be removed by cache-cleaner on its next run. It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space. Possible values: * Any non-negative integer Related options: * None
scrub_pool_size = 1 (Integer) The size of thread pool to be used for scrubbing images. When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing. Possible values: * Any non-zero positive integer Related options: * delayed_delete
scrub_time = 0 (Integer) The amount of time, in seconds, to delay image scrubbing. When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing. It is important to realize that this has storage implications. The larger the scrub_time, the longer the time to reclaim backend storage from deleted images. Possible values: * Any non-negative integer Related options: * delayed_delete
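Taken together, the deletion-related options above can be combined in glance-api.conf to enable delayed deletion with parallel scrubbing. The values below are illustrative, not defaults:

```ini
[DEFAULT]
# Keep deleted image data for one hour before it becomes scrubbable.
delayed_delete = True
scrub_time = 3600
# Scrub up to four images in parallel.
scrub_pool_size = 4
```

Remember that with delayed_delete enabled the scrubber must run periodically, or pending_delete data will accumulate in the backend.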
Description of profiler configuration options
Configuration option = Default value Description
[profiler]  
connection_string = messaging:// (String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging://: use oslo_messaging driver for sending notifications.
enabled = False (Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: * True: Enables the feature * False: Disables the feature. The profiling cannot be started via this project’s operations. If the profiling is triggered by another project, this project’s part will be empty.
hmac_keys = SECRET_KEY (String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both the “enabled” flag and the “hmac_keys” config option should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate the trace, containing information from all possible resources.
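A minimal [profiler] section enabling tracing, sketched with illustrative values (SECRET_KEY stands in for a real shared secret), might look like:

```ini
[profiler]
# Shared HMAC key; must match the key used by the other services
# you want to appear in the same trace.
enabled = True
hmac_keys = SECRET_KEY
trace_sqlalchemy = True
connection_string = messaging://
```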
trace_sqlalchemy = False (Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced). Possible values: * True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it. * False: Disables SQL requests profiling. The time spent is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations.
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of registry configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_password = None (String) DEPRECATED: The administrator’s password. If “use_user_token” is not in effect, then admin credentials can be specified. This option was considered harmful and has been deprecated in M release. It will be removed in O release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support.
admin_tenant_name = None (String) DEPRECATED: The tenant name of the administrative user. If “use_user_token” is not in effect, then admin tenant name can be specified. This option was considered harmful and has been deprecated in M release. It will be removed in O release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support.
admin_user = None (String) DEPRECATED: The administrator’s user name. If “use_user_token” is not in effect, then admin credentials can be specified. This option was considered harmful and has been deprecated in M release. It will be removed in O release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support.
auth_region = None (String) DEPRECATED: The region for the authentication service. If “use_user_token” is not in effect and using keystone auth, then region name can be specified. This option was considered harmful and has been deprecated in M release. It will be removed in O release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support.
auth_strategy = noauth (String) DEPRECATED: The strategy to use for authentication. If “use_user_token” is not in effect, then auth strategy can be specified. This option was considered harmful and has been deprecated in M release. It will be removed in O release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support.
auth_url = None (String) DEPRECATED: The URL to the keystone service. If “use_user_token” is not in effect and using keystone auth, then URL of keystone can be specified. This option was considered harmful and has been deprecated in M release. It will be removed in O release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support.
registry_client_ca_file = /etc/ssl/cafile/file.ca (String) Absolute path to the Certificate Authority file. Provide a string value representing a valid absolute path to the certificate authority file to use for establishing a secure connection to the registry server. NOTE: This option must be set if registry_client_protocol is set to https. Alternatively, the GLANCE_CLIENT_CA_FILE environment variable may be set to a filepath of the CA file. This option is ignored if the registry_client_insecure option is set to True. Possible values: * String value representing a valid absolute path to the CA file. Related options: * registry_client_protocol * registry_client_insecure
registry_client_cert_file = /etc/ssl/certs/file.crt (String) Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file to use for establishing a secure connection to the registry server. NOTE: This option must be set if registry_client_protocol is set to https. Alternatively, the GLANCE_CLIENT_CERT_FILE environment variable may be set to a filepath of the certificate file. Possible values: * String value representing a valid absolute path to the certificate file. Related options: * registry_client_protocol
registry_client_insecure = False (Boolean) Set verification of the registry server certificate. Provide a boolean value to determine whether or not to validate SSL connections to the registry server. By default, this option is set to False and the SSL connections are validated. If set to True, the connection to the registry server is not validated via a certifying authority and the registry_client_ca_file option is ignored. This is the registry’s equivalent of specifying --insecure on the command line using glanceclient for the API. Possible values: * True * False Related options: * registry_client_protocol * registry_client_ca_file
registry_client_key_file = /etc/ssl/key/key-file.pem (String) Absolute path to the private key file. Provide a string value representing a valid absolute path to the private key file to use for establishing a secure connection to the registry server. NOTE: This option must be set if registry_client_protocol is set to https. Alternatively, the GLANCE_CLIENT_KEY_FILE environment variable may be set to a filepath of the key file. Possible values: * String value representing a valid absolute path to the key file. Related options: * registry_client_protocol
registry_client_protocol = http (String) Protocol to use for communication with the registry server. Provide a string value representing the protocol to use for communication with the registry server. By default, this option is set to http and the connection is not secure. This option can be set to https to establish a secure connection to the registry server. In this case, provide a key to use for the SSL connection using the registry_client_key_file option. Also include the CA file and cert file using the options registry_client_ca_file and registry_client_cert_file respectively. Possible values: * http * https Related options: * registry_client_key_file * registry_client_cert_file * registry_client_ca_file
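Enabling a secure registry connection thus involves setting the protocol together with the key, certificate, and CA options described above. The paths below are illustrative:

```ini
[DEFAULT]
# Secure the glance-api -> registry connection over TLS.
registry_client_protocol = https
registry_client_key_file = /etc/ssl/key/key-file.pem
registry_client_cert_file = /etc/ssl/certs/file.crt
registry_client_ca_file = /etc/ssl/cafile/file.ca
```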
registry_client_timeout = 600 (Integer) Timeout value for registry requests. Provide an integer value representing the period of time in seconds that the API server will wait for a registry request to complete. The default value is 600 seconds. A value of 0 implies that a request will never timeout. Possible values: * Zero * Positive integer Related options: * None
registry_host = 0.0.0.0 (String) Address the registry server is hosted on. Possible values: * A valid IP or hostname Related options: * None
registry_port = 9191 (Port number) Port the registry server is listening on. Possible values: * A valid port number Related options: * None
Description of replicator configuration options
Configuration option = Default value Description
[DEFAULT]  
args = None (Multi-valued) Arguments for the command
chunksize = 65536 (Integer) Amount of data to transfer per HTTP write.
command = None (String) Command to be given to replicator
dontreplicate = created_at date deleted_at location updated_at (String) List of fields to not replicate.
mastertoken = (String) Pass in your authentication token if you have one. This is the token used for the master.
metaonly = False (Boolean) Only replicate metadata, not images.
slavetoken = (String) Pass in your authentication token if you have one. This is the token used for the slave.
token = (String) Pass in your authentication token if you have one. If you use this option the same token is used for both the master and the slave.
Description of scrubber configuration options
Configuration option = Default value Description
[DEFAULT]  
wakeup_time = 300 (Integer) Time interval, in seconds, between scrubber runs in daemon mode. Scrubber can be run either as a cron job or daemon. When run as a daemon, this configuration time specifies the time period between two runs. When the scrubber wakes up, it fetches and scrubs all pending_delete images that are available for scrubbing after taking scrub_time into consideration. If the wakeup time is set to a large number, there may be a large number of images to be scrubbed for each run. Also, this impacts how quickly the backend storage is reclaimed. Possible values: * Any non-negative integer Related options: * daemon * delayed_delete
Description of TaskFlow configuration options
Configuration option = Default value Description
[taskflow_executor]  
conversion_format = raw (String) Set the desired image conversion format. Provide a valid image format to which you want images to be converted before they are stored for consumption by Glance. Appropriate image format conversions are desirable for specific storage backends in order to facilitate efficient handling of bandwidth and usage of the storage infrastructure. By default, conversion_format is not set and must be set explicitly in the configuration file. The allowed values for this option are raw, qcow2 and vmdk. The raw format is the unstructured disk format and should be chosen when RBD or Ceph storage backends are used for image storage. qcow2 is supported by the QEMU emulator that expands dynamically and supports Copy on Write. The vmdk is another common disk format supported by many common virtual machine monitors like VMWare Workstation. Possible values: * qcow2 * raw * vmdk Related options: * disk_formats
engine_mode = parallel (String) Set the taskflow engine mode. Provide a string type value to set the mode in which the taskflow engine would schedule tasks to the workers on the hosts. Based on this mode, the engine executes tasks either in single or multiple threads. The possible values for this configuration option are: serial and parallel. When set to serial, the engine runs all the tasks in a single thread which results in serial execution of tasks. Setting this to parallel makes the engine run tasks in multiple threads. This results in parallel execution of tasks. Possible values: * serial * parallel Related options: * max_workers
max_workers = 10 (Integer) Set the number of engine executable tasks. Provide an integer value to limit the number of workers that can be instantiated on the hosts. In other words, this number defines the number of parallel tasks that can be executed at the same time by the taskflow engine. This value can be greater than one when the engine mode is set to parallel. Possible values: * Integer value greater than or equal to 1 Related options: * engine_mode
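A [taskflow_executor] section combining these options for parallel task execution with image conversion could look like the following; the values are illustrative:

```ini
[taskflow_executor]
# Run task flows in multiple threads, up to ten at a time,
# converting imported images to raw (e.g. for an RBD backend).
engine_mode = parallel
max_workers = 10
conversion_format = raw
```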
Description of testing configuration options
Configuration option = Default value Description
[DEFAULT]  
pydev_worker_debug_host = localhost (String) Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: * Valid hostname * Valid IP address Related options: * None
pydev_worker_debug_port = 5678 (Port number) Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: * A valid port number Related options: * None

Image log files

The corresponding log file of each Image service is stored in the /var/log/glance/ directory of the host on which each service runs.

Log files used by Image services
Log filename Service that logs to the file
api.log Image service API server
registry.log Image service Registry server

Image service sample configuration files

You can find the files that are described in this section in the /etc/glance/ directory.

glance-api.conf

The configuration file for the Image service API is found in the glance-api.conf file.

This file must be modified after installation.

[DEFAULT]

#
# From glance.api
#

#
# Set the image owner to tenant or the authenticated user.
#
# Assign a boolean value to determine the owner of an image. When set to
# True, the owner of the image is the tenant. When set to False, the
# owner of the image will be the authenticated user issuing the request.
# Setting it to False makes the image private to the associated user;
# sharing with other users within the same tenant (or "project")
# requires explicit image sharing via image membership.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#owner_is_tenant = true

#
# Role used to identify an authenticated user as administrator.
#
# Provide a string value representing a Keystone role to identify an
# administrative user. Users with this role will be granted
# administrative privileges. The default value for this option is
# 'admin'.
#
# Possible values:
#     * A string value which is a valid Keystone role
#
# Related options:
#     * None
#
#  (string value)
#admin_role = admin

#
# Allow limited access to unauthenticated users.
#
# Assign a boolean to determine API access for unauthenticated
# users. When set to False, the API cannot be accessed by
# unauthenticated users. When set to True, unauthenticated users can
# access the API with read-only privileges. This however only applies
# when using ContextMiddleware.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#allow_anonymous_access = false

#
# Limit the request ID length.
#
# Provide an integer value to limit the length of the request ID to
# the specified length. The default value is 64. Users can change this
# to any integer value between 0 and 16384, keeping in mind that
# a larger value may flood the logs.
#
# Possible values:
#     * Integer value between 0 and 16384
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#max_request_id_length = 64

#
# Public URL endpoint to use for Glance/Glare versions response.
#
# This is the public URL endpoint that will appear in the Glance/Glare
# "versions" response. If no value is specified, the endpoint that is
# displayed in the version's response is that of the host running the
# API service. Change the endpoint to represent the proxy URL if the
# API service is running behind a proxy. If the service is running
# behind a load balancer, add the load balancer's URL for this value.
#
# Possible values:
#     * None
#     * Proxy URL
#     * Load balancer URL
#
# Related options:
#     * None
#
#  (string value)
#public_endpoint = <None>

#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * image_property_quota
#
#  (boolean value)
#allow_additional_image_properties = true

#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with.
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_member_quota = 128

#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
#     * ``allow_additional_image_properties``
#
#  (integer value)
#image_property_quota = 128

#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_tag_quota = 128

#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_location_quota = 10

#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
#     * glance.db.sqlalchemy.api
#     * glance.db.registry.api
#     * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
#     * enable_v2_api
#     * enable_v2_registry
#
#  (string value)
#data_api = glance.db.sqlalchemy.api

#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
#     * The value of this configuration option may not be greater than
#       the value specified by ``api_limit_max``.
#     * Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * api_limit_max
#
#  (integer value)
# Minimum value: 1
#limit_param_default = 25

#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned are governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value in either case, can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * limit_param_default
#
#  (integer value)
# Minimum value: 1
#api_limit_max = 1000

#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_multiple_locations
#     * location_strategy
#
#  (boolean value)
#show_image_direct_url = false

# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration opt
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_image_direct_url
#     * location_strategy
#
#  (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false

#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
#     * This value should only be increased after careful
#       consideration and must be set less than or equal to
#       8 EiB (9223372036854775808).
#     * This value must be set with careful consideration of the
#       backend storage capacity. Setting this to a very low value
#       may result in a large number of image failures, while setting
#       this to a very large value may result in faster consumption
#       of storage. Hence, this must be set according to the nature of
#       images created and storage capacity available.
#
# Possible values:
#     * Any positive number less than or equal to 9223372036854775808
#
#  (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776

#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
#     * A string that is a valid concatenation of a non-negative integer
#       representing the storage value and an optional string literal
#       representing storage units as mentioned above.
#
# Related options:
#     * None
#
#  (string value)
#user_storage_quota = 0
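#
# For example (hypothetical value), to limit each tenant to 500 GB of
# cumulative image storage across all stores, using the unit literals
# described above:
#
#     user_storage_quota = 500GB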

#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
#     * If this option is enabled, then ``enable_v1_registry`` must
#       also be set to ``True`` to enable mandatory usage of Registry
#       service with v1 API.
#
#     * If this option is disabled, then the ``enable_v1_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v2_api``, both v1 and v2
#       OpenStack Images API can be deployed independent of each
#       other.
#
#     * If deploying only the v2 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_registry
#     * enable_v2_api
#
#  (boolean value)
#enable_v1_api = true

#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
#     * If this option is disabled, then the ``enable_v2_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v1_api``, both v1 and v2
#       OpenStack Images API can be deployed independent of each
#       other.
#
#     * If deploying only the v1 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_registry
#     * enable_v1_api
#
#  (boolean value)
#enable_v2_api = true

#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
#     * Use of Registry is mandatory in v1 API, so this option must
#       be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
#     * If deploying only the v2 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_api
#
#  (boolean value)
#enable_v1_registry = true

#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
#     * Use of Registry is optional in v2 API, so this option
#       must only be enabled if both ``enable_v2_api`` is set to
#       ``True`` and the ``data_api`` option is set to
#       ``glance.db.registry.api``.
#
#     * If deploying only the v1 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_api
#     * data_api
#
#  (boolean value)
#enable_v2_registry = true

#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
#     * Valid hostname
#     * Valid IP address
#
# Related options:
#     * None
#
#  (string value)
#pydev_worker_debug_host = localhost

#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
#     * A valid port number
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678

#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES key to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
#     * String value representing a valid AES key
#
# Related options:
#     * None
#
#  (string value)
#metadata_encryption_key = <None>
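#
# For example, a random 32-character key can be generated with a
# command such as ``openssl rand -hex 16`` and then set here
# (hypothetical value):
#
#     metadata_encryption_key = 9a3c1e4f7b2d80561c0e3f5a7d9b1e2f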

#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
#     * An OpenSSL message digest algorithm identifier
#
# Related options:
#     * None
#
#  (string value)
#digest_algorithm = sha256

#
# Strategy to determine the preference order of image locations.
#
# This configuration option indicates the strategy to determine
# the order in which an image's locations must be accessed to
# serve the image's data. Glance then retrieves the image data
# from the first responsive active location it finds in this list.
#
# This option takes one of two possible values ``location_order``
# and ``store_type``. The default value is ``location_order``,
# which suggests that image data be served by using locations in
# the order they are stored in Glance. The ``store_type`` value
# sets the image location preference based on the order in which
# the storage backends are listed as a comma separated list for
# the configuration option ``store_type_preference``.
#
# Possible values:
#     * location_order
#     * store_type
#
# Related options:
#     * store_type_preference
#
#  (string value)
# Allowed values: location_order, store_type
#location_strategy = location_order
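#
# For example (hypothetical store names), to prefer RBD-backed image
# locations over file-backed ones, this option can be combined with
# ``store_type_preference``:
#
#     location_strategy = store_type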

#
# The location of the property protection file.
#
# Provide a valid path to the property protection file which contains
# the rules for property protections and the roles/policies associated
# with them.
#
# A property protection file, when set, restricts the Glance image
# properties to be created, read, updated and/or deleted by a specific
# set of users that are identified by either roles or policies.
# If this configuration option is not set, by default, property
# protections won't be enforced. If a value is specified and the file
# is not found, the glance-api service will fail to start.
# More information on property protections can be found at:
# http://docs.openstack.org/developer/glance/property-protections.html
#
# Possible values:
#     * Empty string
#     * Valid path to the property protection configuration file
#
# Related options:
#     * property_protection_rule_format
#
#  (string value)
#property_protection_file = <None>

#
# Rule format for property protection.
#
# Provide the desired way to set property protection on Glance
# image properties. The two permissible values are ``roles``
# and ``policies``. The default value is ``roles``.
#
# If the value is ``roles``, the property protection file must
# contain a comma separated list of user roles indicating
# permissions for each of the CRUD operations on each property
# being protected. If set to ``policies``, a policy defined in
# policy.json is used to express property protections for each
# of the CRUD operations. Examples of how property protections
# are enforced based on ``roles`` or ``policies`` can be found at:
# http://docs.openstack.org/developer/glance/property-protections.html#examples
#
# Possible values:
#     * roles
#     * policies
#
# Related options:
#     * property_protection_file
#
#  (string value)
# Allowed values: roles, policies
#property_protection_rule_format = roles
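#
# For example, with ``roles``, a property protection file may contain
# entries like the following (hypothetical property and roles):
#
#     [x_billing_code]
#     create = admin
#     read = admin,billing
#     update = admin
#     delete = admin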

#
# List of allowed exception modules to handle RPC exceptions.
#
# Provide a comma separated list of modules whose exceptions are
# permitted to be recreated upon receiving exception data via an RPC
# call made to Glance. The default list includes
# ``glance.common.exception``, ``builtins``, and ``exceptions``.
#
# The RPC protocol permits interaction with Glance via calls across a
# network or within the same system. Including a list of exception
# namespaces with this option enables RPC to propagate the exceptions
# back to the users.
#
# Possible values:
#     * A comma separated list of valid exception modules
#
# Related options:
#     * None
#  (list value)
#allowed_rpc_exception_modules = glance.common.exception,builtins,exceptions

#
# IP address to bind the glance servers to.
#
# Provide an IP address to bind the glance server to. The default
# value is ``0.0.0.0``.
#
# Edit this option to enable the server to listen on one particular
# IP address on the network card. This facilitates selection of a
# particular network interface for the server.
#
# Possible values:
#     * A valid IPv4 address
#     * A valid IPv6 address
#
# Related options:
#     * None
#
#  (string value)
#bind_host = 0.0.0.0

#
# Port number on which the server will listen.
#
# Provide a valid port number to bind the server's socket to. This
# port is then used to identify the server process and to route
# network messages that arrive at it. The default bind_port value
# for the API server is 9292 and for the registry server is 9191.
#
# Possible values:
#     * A valid port number (0 to 65535)
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#bind_port = <None>

#
# Number of Glance worker processes to start.
#
# Provide a non-negative integer value to set the number of child
# process workers to service requests. By default, the number of CPUs
# available is set as the value for ``workers``.
#
# Each worker process is made to listen on the port set in the
# configuration file and contains a greenthread pool of size 1000.
#
# NOTE: Setting the number of workers to zero, triggers the creation
# of a single API process with a greenthread pool of size 1000.
#
# Possible values:
#     * 0
#     * Positive integer value (typically equal to the number of CPUs)
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#workers = <None>

#
# Maximum line size of message headers.
#
# Provide an integer value representing a length to limit the size of
# message headers. The default value is 16384.
#
# NOTE: ``max_header_line`` may need to be increased when using large
# tokens (typically those generated by the Keystone v3 API with big
# service catalogs). However, keep in mind that larger values for
# ``max_header_line`` may flood the logs.
#
# Setting ``max_header_line`` to 0 sets no limit for the line size of
# message headers.
#
# Possible values:
#     * 0
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#max_header_line = 16384

#
# Set keep alive option for HTTP over TCP.
#
# Provide a boolean value to determine sending of keep alive packets.
# If set to ``False``, the server returns the header
# "Connection: close". If set to ``True``, the server returns a
# "Connection: Keep-Alive" in its responses. This enables retention of
# the same TCP connection for HTTP conversations instead of opening a
# new one with each new request.
#
# This option must be set to ``False`` if the client socket connection
# needs to be closed explicitly after the response is received and
# read successfully by the client.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#http_keepalive = true

#
# Timeout for client connections' socket operations.
#
# Provide a valid integer value representing time in seconds to set
# the period to wait before an idle client connection is closed. The
# default value is 900 seconds.
#
# The value zero implies wait forever.
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#client_socket_timeout = 900

#
# Set the number of incoming connection requests.
#
# Provide a positive integer value to limit the number of requests in
# the backlog queue. The default queue size is 4096.
#
# An incoming connection to a TCP listener socket is queued before a
# connection can be established with the server. Setting the backlog
# for a TCP socket ensures a limited queue size for incoming traffic.
#
# Possible values:
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#backlog = 4096

#
# Set the wait time before a connection recheck.
#
# Provide a positive integer value representing time in seconds which
# is set as the idle wait time before a TCP keep alive packet can be
# sent to the host. The default value is 600 seconds.
#
# Setting ``tcp_keepidle`` helps verify at regular intervals that a
# connection is intact and prevents frequent TCP connection
# reestablishment.
#
# Possible values:
#     * Positive integer value representing time in seconds
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#tcp_keepidle = 600

#
# Absolute path to the CA file.
#
# Provide a string value representing a valid absolute path to
# the Certificate Authority file to use for client authentication.
#
# A CA file typically contains necessary trusted certificates to
# use for the client authentication. This is essential to ensure
# that a secure connection is established to the server via the
# internet.
#
# Possible values:
#     * Valid absolute path to the CA file
#
# Related options:
#     * None
#
#  (string value)
#ca_file = /etc/ssl/cafile

#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file which is required to start the API service
# securely.
#
# A certificate file typically is a public key container and includes
# the server's public key, server name, server information and the
# signature which was a result of the verification process using the
# CA certificate. This is required for a secure connection
# establishment.
#
# Possible values:
#     * Valid absolute path to the certificate file
#
# Related options:
#     * None
#
#  (string value)
#cert_file = /etc/ssl/certs

#
# Absolute path to a private key file.
#
# Provide a string value representing a valid absolute path to a
# private key file which is required to establish the client-server
# connection.
#
# Possible values:
#     * Absolute path to the private key file
#
# Related options:
#     * None
#
#  (string value)
#key_file = /etc/ssl/key/key-file.pem
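#
# For example, a self-signed certificate and key for testing can be
# generated with a command such as (hypothetical paths):
#
#     openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
#         -keyout /etc/ssl/key/key-file.pem -out /etc/ssl/certs/cert.pem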

# DEPRECATED: The HTTP header used to determine the scheme for the original
# request, even if it was removed by an SSL terminating proxy. Typical value is
# "HTTP_X_FORWARDED_PROTO". (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use the http_proxy_to_wsgi middleware instead.
#secure_proxy_ssl_header = <None>

#
# The relative path to sqlite file database that will be used for image cache
# management.
#
# This is a relative path to the sqlite file database that tracks the age and
# usage statistics of image cache. The path is relative to image cache base
# directory, specified by the configuration option ``image_cache_dir``.
#
# This is a lightweight database with just one table.
#
# Possible values:
#     * A valid relative path to sqlite file database
#
# Related options:
#     * ``image_cache_dir``
#
#  (string value)
#image_cache_sqlite_db = cache.db

#
# The driver to use for image cache management.
#
# This configuration option provides the flexibility to choose between the
# different image-cache drivers available. An image-cache driver is responsible
# for providing the essential functions of image-cache like write images to/read
# images from cache, track age and usage of cached images, provide a list of
# cached images, fetch size of the cache, queue images for caching and clean up
# the cache, etc.
#
# The essential functions of a driver are defined in the base class
# ``glance.image_cache.drivers.base.Driver``. All image-cache drivers (existing
# and prospective) must implement this interface. Currently available drivers
# are ``sqlite`` and ``xattr``. These drivers primarily differ in the way they
# store the information about cached images:
#     * The ``sqlite`` driver uses a sqlite database (which sits on every
#       glance node locally) to track the usage of cached images.
#     * The ``xattr`` driver uses the extended attributes of files to store
#       this information. It also requires a filesystem that sets ``atime``
#       on the files when accessed.
#
# Possible values:
#     * sqlite
#     * xattr
#
# Related options:
#     * None
#
#  (string value)
# Allowed values: sqlite, xattr
#image_cache_driver = sqlite

#
# The upper limit on cache size, in bytes, after which the cache-pruner cleans
# up the image cache.
#
# NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a
# hard limit beyond which the image cache would never grow. In fact, depending
# on how often the cache-pruner runs and how quickly the cache fills, the image
# cache can far exceed the size specified here very easily. Hence, care must be
# taken to appropriately schedule the cache-pruner and in setting this limit.
#
# Glance caches an image when it is downloaded. Consequently, the size of the
# image cache grows over time as the number of downloads increases. To keep the
# cache size from becoming unmanageable, it is recommended to run the
# cache-pruner as a periodic task. When the cache pruner is kicked off, it
# compares the current size of image cache and triggers a cleanup if the image
# cache grew beyond the size specified here. After the cleanup, the size of
# cache is less than or equal to size specified here.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#image_cache_max_size = 10737418240

#
# The amount of time, in seconds, an incomplete image remains in the cache.
#
# Incomplete images are images for which download is in progress. Please see the
# description of configuration option ``image_cache_dir`` for more detail.
# Sometimes, due to various reasons, it is possible the download may hang and
# the incompletely downloaded image remains in the ``incomplete`` directory.
# This configuration option sets a time limit on how long the incomplete images
# should remain in the ``incomplete`` directory before they are cleaned up.
# Once an incomplete image spends more time than is specified here, it'll be
# removed by cache-cleaner on its next run.
#
# It is recommended to run cache-cleaner as a periodic task on the Glance API
# nodes to keep the incomplete images from occupying disk space.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#image_cache_stall_time = 86400

#
# Base directory for image cache.
#
# This is the location where image data is cached and served out of. All cached
# images are stored directly under this directory. This directory also contains
# three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``.
#
# The ``incomplete`` subdirectory is the staging area for downloading images. An
# image is first downloaded to this directory. When the image download is
# successful it is moved to the base directory. However, if the download fails,
# the partially downloaded image file is moved to the ``invalid`` subdirectory.
#
# The ``queue`` subdirectory is used for queuing images for download. This is
# used primarily by the cache-prefetcher, which can be scheduled as a periodic
# task like cache-pruner and cache-cleaner, to cache images ahead of their
# usage.
# Upon receiving the request to cache an image, Glance touches a file in the
# ``queue`` directory with the image id as the file name. The cache-prefetcher,
# when running, polls for the files in ``queue`` directory and starts
# downloading them in the order they were created. When the download is
# successful, the zero-sized file is deleted from the ``queue`` directory.
# If the download fails, the zero-sized file remains and it'll be retried the
# next time cache-prefetcher runs.
#
# Possible values:
#     * A valid path
#
# Related options:
#     * ``image_cache_sqlite_db``
#
#  (string value)
#image_cache_dir = <None>

#
# Default publisher_id for outgoing Glance notifications.
#
# This is the value that the notification driver will use to identify
# messages for events originating from the Glance service. Typically,
# this is the hostname of the instance that generated the message.
#
# Possible values:
#     * Any reasonable instance identifier, for example: image.host1
#
# Related options:
#     * None
#
#  (string value)
#default_publisher_id = image.localhost

#
# List of notifications to be disabled.
#
# Specify a list of notifications that should not be emitted.
# A notification can be given either as a notification type to
# disable a single event notification, or as a notification group
# prefix to disable all event notifications within a group.
#
# Possible values:
#     A comma-separated list of individual notification types or
#     notification groups to be disabled. Currently supported groups:
#         * image
#         * image.member
#         * task
#         * metadef_namespace
#         * metadef_object
#         * metadef_property
#         * metadef_resource_type
#         * metadef_tag
#     For a complete listing and description of each event refer to:
#     http://docs.openstack.org/developer/glance/notifications.html
#
#     The values must be specified as: <group_name>.<event_name>
#     For example: image.create,task.success,metadef_tag
#
# Related options:
#     * None
#
#  (list value)
#disabled_notifications =
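#
# As an illustration (the value below is hypothetical, not a default), a
# deployment that wants to suppress all task notifications and the single
# image.create event could set:
#
#disabled_notifications = task,image.create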

#
# Address the registry server is hosted on.
#
# Possible values:
#     * A valid IP or hostname
#
# Related options:
#     * None
#
#  (string value)
#registry_host = 0.0.0.0

#
# Port the registry server is listening on.
#
# Possible values:
#     * A valid port number
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#registry_port = 9191

# DEPRECATED: Whether to pass through the user token when making requests to the
# registry. To prevent failures with token expiration during large file uploads,
# it is recommended to set this parameter to False. If "use_user_token" is not
# in effect, then admin credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#use_user_token = true

# DEPRECATED: The administrator's user name. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_user = <None>

# DEPRECATED: The administrator's password. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_password = <None>

# DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
# not in effect, then admin tenant name can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_tenant_name = <None>

# DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
# effect and using keystone auth, then URL of keystone can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_url = <None>

# DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
# in effect, then auth strategy can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_strategy = noauth

# DEPRECATED: The region for the authentication service. If "use_user_token" is
# not in effect and using keystone auth, then region name can be specified.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_region = <None>

#
# Protocol to use for communication with the registry server.
#
# Provide a string value representing the protocol to use for
# communication with the registry server. By default, this option is
# set to ``http`` and the connection is not secure.
#
# This option can be set to ``https`` to establish a secure connection
# to the registry server. In this case, provide a key to use for the
# SSL connection using the ``registry_client_key_file`` option. Also
# include the CA file and cert file using the options
# ``registry_client_ca_file`` and ``registry_client_cert_file``
# respectively.
#
# Possible values:
#     * http
#     * https
#
# Related options:
#     * registry_client_key_file
#     * registry_client_cert_file
#     * registry_client_ca_file
#
#  (string value)
# Allowed values: http, https
#registry_client_protocol = http
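#
# For example, a deployment securing registry traffic with TLS might combine
# this option with the client SSL options below (the paths are illustrative,
# not defaults):
#
#registry_client_protocol = https
#registry_client_key_file = /etc/glance/ssl/registry-key.pem
#registry_client_cert_file = /etc/glance/ssl/registry-cert.pem
#registry_client_ca_file = /etc/glance/ssl/ca.pem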

#
# Absolute path to the private key file.
#
# Provide a string value representing a valid absolute path to the
# private key file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
# environment variable may be set to a filepath of the key file.
#
# Possible values:
#     * String value representing a valid absolute path to the key
#       file.
#
# Related options:
#     * registry_client_protocol
#
#  (string value)
#registry_client_key_file = /etc/ssl/key/key-file.pem

#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
# environment variable may be set to a filepath of the certificate
# file.
#
# Possible values:
#     * String value representing a valid absolute path to the
#       certificate file.
#
# Related options:
#     * registry_client_protocol
#
#  (string value)
#registry_client_cert_file = /etc/ssl/certs/file.crt

#
# Absolute path to the Certificate Authority file.
#
# Provide a string value representing a valid absolute path to the
# certificate authority file to use for establishing a secure
# connection to the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
# environment variable may be set to a filepath of the CA file.
# This option is ignored if the ``registry_client_insecure`` option
# is set to ``True``.
#
# Possible values:
#     * String value representing a valid absolute path to the CA
#       file.
#
# Related options:
#     * registry_client_protocol
#     * registry_client_insecure
#
#  (string value)
#registry_client_ca_file = /etc/ssl/cafile/file.ca

#
# Set verification of the registry server certificate.
#
# Provide a boolean value to determine whether or not to validate
# SSL connections to the registry server. By default, this option
# is set to ``False`` and the SSL connections are validated.
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying --insecure on the command line
# using glanceclient for the API.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * registry_client_protocol
#     * registry_client_ca_file
#
#  (boolean value)
#registry_client_insecure = false

#
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never time out.
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#registry_client_timeout = 600

#
# Send headers received from identity when making requests to
# registry.
#
# Typically, the Glance registry can be deployed in multiple flavors,
# which may or may not include authentication. For example,
# ``trusted-auth`` is a flavor that does not require the registry
# service to authenticate the requests it receives. However, the
# registry service may still need a user context to be populated to
# serve the requests. This can be achieved by the caller
# (the Glance API usually) passing through the headers it received
# from authenticating with identity for the same request. The typical
# headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,
# ``X-Identity-Status`` and ``X-Service-Catalog``.
#
# Provide a boolean value to determine whether to send the identity
# headers to provide tenant and user information along with the
# requests to registry service. By default, this option is set to
# ``False``, which means that user and tenant information is not
# readily available. It must be obtained by authenticating. Hence, if
# this is set to ``False``, ``flavor`` must be set to a value that
# either includes authentication or an authenticated user context.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * flavor
#
#  (boolean value)
#send_identity_headers = false

#
# The amount of time, in seconds, to delay image scrubbing.
#
# When delayed delete is turned on, an image is put into ``pending_delete``
# state upon deletion until the scrubber deletes its image data. Typically, soon
# after the image is put into ``pending_delete`` state, it is available for
# scrubbing. However, scrubbing can be delayed until a later point using this
# configuration option. This option denotes the time period an image spends in
# ``pending_delete`` state before it is available for scrubbing.
#
# It is important to realize that this has storage implications. The larger the
# ``scrub_time``, the longer the time to reclaim backend storage from deleted
# images.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * ``delayed_delete``
#
#  (integer value)
# Minimum value: 0
#scrub_time = 0

#
# The size of thread pool to be used for scrubbing images.
#
# When there are a large number of images to scrub, it is beneficial to scrub
# images in parallel so that the scrub queue stays under control and the backend
# storage is reclaimed in a timely fashion. This configuration option denotes
# the maximum number of images to be scrubbed in parallel. The default value is
# one, which signifies serial scrubbing. Any value above one indicates parallel
# scrubbing.
#
# Possible values:
#     * Any non-zero positive integer
#
# Related options:
#     * ``delayed_delete``
#
#  (integer value)
# Minimum value: 1
#scrub_pool_size = 1

#
# Turn on/off delayed delete.
#
# Typically when an image is deleted, the ``glance-api`` service puts the image
# into ``deleted`` state and deletes its data at the same time. Delayed delete
# is a feature in Glance that delays the actual deletion of image data until a
# later point in time (as determined by the configuration option
# ``scrub_time``).
# When delayed delete is turned on, the ``glance-api`` service puts the image
# into ``pending_delete`` state upon deletion and leaves the image data in the
# storage backend for the image scrubber to delete at a later time. The image
# scrubber will move the image into ``deleted`` state upon successful deletion
# of image data.
#
# NOTE: When delayed delete is turned on, image scrubber MUST be running as a
# periodic task to prevent the backend storage from filling up with undesired
# usage.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * ``scrub_time``
#     * ``wakeup_time``
#     * ``scrub_pool_size``
#
#  (boolean value)
#delayed_delete = false
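#
# For example, to delay the reclamation of deleted image data by one day and
# scrub up to four images in parallel (illustrative values; the image scrubber
# must be running as a periodic task for this to work):
#
#delayed_delete = true
#scrub_time = 86400
#scrub_pool_size = 4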

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
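#
# For example, to write logs to /var/log/glance/api.log instead of stderr
# (the paths are illustrative):
#
#log_dir = /var/log/glance
#log_file = api.log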

# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler will immediately open a new log file with
# the specified path. This is useful only when the log_file option is specified
# and the platform is Linux. This option is ignored if log_config_append is
# set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# ZeroMQ bind address. Should be a wildcard (*), an Ethernet interface, or an
# IP address. The "host" option should point or resolve to this address.
# (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about an existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means a queue
# is not kept when the server side disconnects. False means the queue and
# messages are kept even while the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false

# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
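#
# For example, to use a RabbitMQ backend via the transport URL (the host and
# credentials are illustrative):
#
#transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/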

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack


[cors]

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Image-Meta-Checksum,X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request. (list
# value)
#allow_headers = Content-MD5,X-Image-Meta-Checksum,X-Storage-Token,Accept-Encoding,X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
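#
# For example, to allow a single dashboard origin to make cross-origin
# requests with credentials (the origin is illustrative):
#
#allowed_origin = https://horizon.example.com
#allow_credentials = true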


[cors.subdomain]

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Image-Meta-Checksum,X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request. (list
# value)
#allow_headers = Content-MD5,X-Image-Meta-Checksum,X-Storage-Token,Accept-Encoding,X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID


[database]

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
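#
# For example, to connect to a MySQL database named glance (the host, user,
# and password are illustrative):
#
#connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance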

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20

#
# From oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false


[glance_store]

#
# From glance.store
#

#
# List of enabled Glance stores.
#
# Register the storage backends to use for storing disk images
# as a comma separated list. The default stores enabled for
# storing disk images with Glance are ``file`` and ``http``.
#
# Possible values:
#     * A comma separated list that could include:
#         * file
#         * http
#         * swift
#         * rbd
#         * sheepdog
#         * cinder
#         * vmware
#
# Related Options:
#     * default_store
#
#  (list value)
#stores = file,http

#
# The default scheme to use for storing images.
#
# Provide a string value representing the default scheme to use for
# storing images. If not set, Glance uses ``file`` as the default
# scheme to store images with the ``file`` store.
#
# NOTE: The value given for this configuration option must be a valid
# scheme for a store registered with the ``stores`` configuration
# option.
#
# Possible values:
#     * file
#     * filesystem
#     * http
#     * https
#     * swift
#     * swift+http
#     * swift+https
#     * swift+config
#     * rbd
#     * sheepdog
#     * cinder
#     * vsphere
#
# Related Options:
#     * stores
#
#  (string value)
# Allowed values: file, filesystem, http, https, swift, swift+http, swift+https, swift+config, rbd, sheepdog, cinder, vsphere
#default_store = file
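#
# For example, to enable the Ceph RBD store alongside the defaults and make
# it the default scheme (illustrative; requires an operational RBD backend):
#
#stores = file,http,rbd
#default_store = rbd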

#
# Minimum interval in seconds to execute updating dynamic storage
# capabilities based on current backend status.
#
# Provide an integer value representing time in seconds to set the
# minimum interval before an update of dynamic storage capabilities
# for a storage backend can be attempted. Setting
# ``store_capabilities_update_min_interval`` does not mean updates
# occur periodically based on the set interval. Rather, the update
# is performed only after this interval has elapsed, if an operation
# on the store is triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo
# /store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0

#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to lookup cinder endpoint from the service
# catalog in the current context. ``cinder_os_region_name``, if set, is taken
# into consideration to fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
#     * A string of the following form:
#       ``<service_type>:<service_name>:<endpoint_type>``
#       At least ``service_type`` and ``endpoint_type`` should be specified.
#       ``service_name`` can be omitted.
#
# Related options:
#     * cinder_os_region_name
#     * cinder_endpoint_template
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#
#  (string value)
#cinder_catalog_info = volumev2::publicURL

#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate cinder endpoint,
# instead of looking up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will be ignored.
#
# Possible values:
#     * URL template string for cinder endpoint, where ``%%(tenant)s`` is
#       replaced with the current tenant (project) name.
#       For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#     * cinder_catalog_info
#
#  (string value)
#cinder_endpoint_template = <None>

#
# Region name to lookup cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for determining the
# endpoint. If set, the lookup for the cinder endpoint by this node is
# filtered to the specified region. This is useful when multiple regions are
# listed in the catalog. If this is not set, the endpoint is looked up from
# every region.
#
# Possible values:
#     * A string that is a valid region name.
#
# Related options:
#     * cinder_catalog_info
#
#  (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name = <None>

#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via an HTTPS endpoint. If the endpoint is HTTP, this value is
# ignored. ``cinder_api_insecure`` must be set to ``True`` to enable the
# verification.
#
# Possible values:
#     * Path to a ca certificates file
#
# Related options:
#     * cinder_api_insecure
#
#  (string value)
#cinder_ca_certificates_file = <None>

#
# Number of cinderclient retries on failed http calls.
#
# When a call fails with any error, cinderclient retries the call up to the
# specified number of times after sleeping for a few seconds.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#cinder_http_retries = 3

#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When a cinder volume is created, deleted, or attached to the glance node to
# read/write the volume data, the volume's state changes. For example, a
# newly created volume's status changes from ``creating`` to ``available``
# after the creation process is completed. This option specifies the maximum
# time to wait for the status change. If a timeout occurs while waiting, or
# the status changes to an unexpected value (e.g. ``error``), the image
# creation fails.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300

#
# Allow insecure SSL requests to cinder.
#
# If this option is set to True, HTTPS endpoint connection is verified using the
# CA certificates file specified by ``cinder_ca_certificates_file`` option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * cinder_ca_certificates_file
#
#  (boolean value)
#cinder_api_insecure = false

#
# The address where the cinder authentication service is listening.
#
# When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If any of these options is not set, the cinder endpoint is looked up
# from the service catalog, and the current context's user and project are
# used.
#
# Possible values:
#     * A valid authentication service address, for example:
#       ``http://openstack.example.org/identity/v2.0``
#
# Related options:
#     * cinder_store_user_name
#     * cinder_store_password
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_auth_address = <None>

#
# User name to authenticate against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
#     * A valid user name
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_password
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_user_name = <None>

#
# Password for the user authenticating against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
#     * A valid password for the user specified by ``cinder_store_user_name``
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_password = <None>

#
# Project name where the image volume is stored in cinder.
#
# This must be used with all the following related options. If any of these
# are not specified, the project of the current context is used.
#
# Possible values:
#     * A valid project name
#
# Related options:
#     * ``cinder_store_auth_address``
#     * ``cinder_store_user_name``
#     * ``cinder_store_password``
#
#  (string value)
#cinder_store_project_name = <None>
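#
# For example, a hypothetical deployment that stores image volumes in a
# dedicated ``service`` project might set all four options together:
#
#     cinder_store_auth_address = http://controller/identity/v3
#     cinder_store_user_name = glance
#     cinder_store_password = GLANCE_PASS
#     cinder_store_project_name = service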

#
# Path to the rootwrap configuration file to use for running commands as root.
#
# The cinder store requires root privileges to operate the image volumes (for
# connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
# The configuration file should allow the required commands by cinder store and
# os-brick library.
#
# Possible values:
#     * Path to the rootwrap config file
#
# Related options:
#     * None
#
#  (string value)
#rootwrap_config = /etc/glance/rootwrap.conf

#
# Directory to which the filesystem backend store writes images.
#
# Upon startup, Glance creates the directory if it doesn't already
# exist and verifies write access for the user under which
# ``glance-api`` runs. If write access isn't available, a
# ``BadStoreConfiguration`` exception is raised and the filesystem
# store may not be available for adding new images.
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
#     * A valid path to a directory
#
# Related options:
#     * ``filesystem_store_datadirs``
#     * ``filesystem_store_file_perm``
#
#  (string value)
#filesystem_store_datadir = /var/lib/glance/images

#
# List of directories and their priorities to which the filesystem
# backend store writes images.
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. Priority is an integer that is concatenated to the
# directory path with a colon where a higher value indicates higher
# priority. When two directories have the same priority, the directory
# with most free space is used. When no priority is specified, it
# defaults to zero.
#
# More information on configuring filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
#     * List of strings of the following form:
#         * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
#     * ``filesystem_store_datadir``
#     * ``filesystem_store_file_perm``
#
#  (multi valued)
#filesystem_store_datadirs =
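#
# For example (hypothetical paths), to prefer ``/mnt/ssd/images`` over
# ``/mnt/hdd/images``, repeat the option with a higher priority for the
# preferred directory:
#
#     filesystem_store_datadirs = /mnt/ssd/images:200
#     filesystem_store_datadirs = /mnt/hdd/images:100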

#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
#     * A valid path to the store metadata file
#
# Related options:
#     * None
#
#  (string value)
#filesystem_store_metadata_file = <None>
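#
# For example, the metadata file might contain (hypothetical values):
#
#     {"id": "fs_store_1", "mountpoint": "/var/lib/glance/images"}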

#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are
# intended to be given access can be made members of the group that
# owns the files created. Assigning a value less than or equal to
# zero for this configuration option signifies that no changes be made
# to the default permissions. This value will be decoded as an octal
# digit.
#
# For more information, please refer to the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
#     * A valid file access permission
#     * Zero
#     * Any negative integer
#
# Related options:
#     * None
#
#  (integer value)
#filesystem_store_file_perm = 0
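#
# For example, setting the option to 640 (octal) gives the owner
# read/write access and the owning group read access, so a service whose
# user is a member of that group can read the image files:
#
#     filesystem_store_file_perm = 640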

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
#     * A valid path to a CA file
#
# Related options:
#     * https_insecure
#
#  (string value)
#https_ca_certificates_file = <None>

#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set.
# The remote server certificate will then be verified using the file
# specified using the ``https_ca_certificates_file`` option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * https_ca_certificates_file
#
#  (boolean value)
#https_insecure = true

#
# The http/https proxy information to be used to connect to the remote
# server.
#
# This configuration option specifies the http/https proxy information
# that should be used to connect to the remote server. The proxy
# information should be a key value pair of the scheme and proxy, for
# example, http:10.0.0.1:3128. You can also specify proxies for multiple
# schemes by separating the key value pairs with a comma, for example,
# http:10.0.0.1:3128, https:10.0.0.1:1080.
#
# Possible values:
#     * A comma separated list of scheme:proxy pairs as described above
#
# Related options:
#     * None
#
#  (dict value)
#http_proxy_information =

#
# Size, in megabytes, to chunk RADOS images into.
#
# Provide an integer value representing the size in megabytes to chunk
# Glance images into. The default chunk size is 8 megabytes. For optimal
# performance, the value should be a power of two.
#
# When Ceph's RBD object storage system is used as the storage backend
# for storing Glance images, the images are chunked into objects of the
# size set using this option. These chunked objects are then stored
# across the distributed block data store to use for Glance.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#rbd_store_chunk_size = 8

#
# RADOS pool in which images are stored.
#
# When RBD is used as the storage backend for storing Glance images, the
# images are stored by means of logical grouping of the objects (chunks
# of images) into a ``pool``. Each pool is defined with the number of
# placement groups it can contain. The default pool that is used is
# 'images'.
#
# More information on the RBD storage backend can be found here:
# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
#
# Possible Values:
#     * A valid pool name
#
# Related options:
#     * None
#
#  (string value)
#rbd_store_pool = images

#
# RADOS user to authenticate as.
#
# This configuration option takes in the RADOS user to authenticate as.
# This is only needed when RADOS authentication is enabled and is
# applicable only if the user is using Cephx authentication. If the
# value for this option is not set or is set to None, a default value
# is chosen based on the client section in ``rbd_store_ceph_conf``.
#
# Possible Values:
#     * A valid RADOS user
#
# Related options:
#     * rbd_store_ceph_conf
#
#  (string value)
#rbd_store_user = <None>

#
# Ceph configuration file path.
#
# This configuration option takes in the path to the Ceph configuration
# file to be used. If the value for this option is not set by the user
# or is set to None, librados will locate the default configuration file
# which is located at /etc/ceph/ceph.conf. If using Cephx
# authentication, this file should include a reference to the right
# keyring in a ``client.<USER>`` section.
#
# Possible Values:
#     * A valid path to a configuration file
#
# Related options:
#     * rbd_store_user
#
#  (string value)
#rbd_store_ceph_conf = /etc/ceph/ceph.conf
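#
# For example, a hypothetical Ceph-backed deployment might combine the
# RBD options as follows:
#
#     rbd_store_user = glance
#     rbd_store_pool = images
#     rbd_store_ceph_conf = /etc/ceph/ceph.conf
#     rbd_store_chunk_size = 8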

#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option takes in the timeout value in seconds used
# when connecting to the Ceph cluster, i.e. it sets the time glance-api
# waits before closing the connection. This prevents glance-api from
# hanging while connecting to RBD. If the value for this option is set
# to less than or equal to 0, no timeout is set and the default librados
# value is used.
#
# Possible Values:
#     * Any integer value
#
# Related options:
#     * None
#
#  (integer value)
#rados_connect_timeout = 0

#
# Chunk size for images to be stored in Sheepdog data store.
#
# Provide an integer value representing the size in mebibytes
# (1 MiB = 1048576 bytes) to chunk Glance images into. The default
# chunk size is 64 mebibytes.
#
# When using Sheepdog distributed storage system, the images are
# chunked into objects of this size and then stored across the
# distributed data store to use for Glance.
#
# Chunk sizes, if a power of two, help avoid fragmentation and
# enable improved performance.
#
# Possible values:
#     * Positive integer value representing size in mebibytes.
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 1
#sheepdog_store_chunk_size = 64

#
# Port number on which the sheep daemon will listen.
#
# Provide an integer value representing a valid port number on
# which you want the Sheepdog daemon to listen. The default
# port is 7000.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages it receives on
# the port number set using ``sheepdog_store_port`` option to store
# chunks of Glance images.
#
# Possible values:
#     * A valid port number (0 to 65535)
#
# Related Options:
#     * sheepdog_store_address
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000

#
# Address to bind the Sheepdog daemon to.
#
# Provide a string value representing the address to bind the
# Sheepdog daemon to. The default address set for the 'sheep'
# is 127.0.0.1.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages directed to the
# address set using ``sheepdog_store_address`` option to store
# chunks of Glance images.
#
# Possible values:
#     * A valid IPv4 address
#     * A valid IPv6 address
#     * A valid hostname
#
# Related Options:
#     * sheepdog_store_port
#
#  (string value)
#sheepdog_store_address = 127.0.0.1

#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * swift_store_cacert
#
#  (boolean value)
#swift_store_auth_insecure = false

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
#     * A valid path to a CA file
#
# Related options:
#     * swift_store_auth_insecure
#
#  (string value)
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt

#
# The region of Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
#
# When Glance uses Swift as the storage backend to store images
# for a specific tenant that has multiple endpoints, setting a
# Swift region with ``swift_store_region`` allows Glance to connect
# to Swift in the specified region, rather than relying on
# single-region connectivity.
#
# This option can be configured for both single-tenant and
# multi-tenant storage.
#
# NOTE: Setting the region with ``swift_store_region`` is
# tenant-specific and is necessary only if the tenant has
# multiple endpoints across different regions.
#
# Possible values:
#     * A string value representing a valid Swift region.
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_region = RegionTwo

#
# The URL endpoint to use for Swift backend storage.
#
# Provide a string value representing the URL endpoint to use for
# storing Glance images in Swift store. By default, an endpoint
# is not set and the storage URL returned by ``auth`` is used.
# Setting an endpoint with ``swift_store_endpoint`` overrides the
# storage URL and is used for Glance image storage.
#
# NOTE: The URL should include the path up to, but excluding the
# container. The location of an object is obtained by appending
# the container and object to the configured URL.
#
# Possible values:
#     * String value representing a valid URL path up to a Swift container
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name

#
# Endpoint Type of Swift service.
#
# This string value indicates the endpoint type to use to fetch the
# Swift endpoint. The endpoint type determines the actions the user will
# be allowed to perform, for instance, reading and writing to the Store.
# This setting is only used if swift_store_auth_version is greater than
# 1.
#
# Possible values:
#     * publicURL
#     * adminURL
#     * internalURL
#
# Related options:
#     * swift_store_endpoint
#
#  (string value)
# Allowed values: publicURL, adminURL, internalURL
#swift_store_endpoint_type = publicURL

#
# Type of Swift service to use.
#
# Provide a string value representing the service type to use for
# storing images while using Swift backend storage. The default
# service type is set to ``object-store``.
#
# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
# this configuration option needs to be ``object-store``. If using
# a higher version of Keystone or a different auth scheme, this
# option may be modified.
#
# Possible values:
#     * A string representing a valid service type for Swift storage.
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_service_type = object-store

#
# Name of single container to store images/name prefix for multiple containers
#
# When a single container is being used to store images, this configuration
# option indicates the container within the Glance account to be used for
# storing all images. When multiple containers are used to store images, this
# will be the name prefix for all containers. Usage of single/multiple
# containers can be controlled using the configuration option
# ``swift_store_multiple_containers_seed``.
#
# When using multiple containers, the containers will be named after the value
# set for this configuration option with the first N chars of the image UUID
# as the suffix delimited by an underscore (where N is specified by
# ``swift_store_multiple_containers_seed``).
#
# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
# the container ``glance_fda``. All dashes in the UUID are included when
# creating the container name but do not count toward the character limit, so
# when N=10 the container name would be ``glance_fdae39a1-ba``.
#
# Possible values:
#     * If using single container, this configuration option can be any string
#       that is a valid swift container name in Glance's Swift account
#     * If using multiple containers, this configuration option can be any
#       string as long as it satisfies the container naming rules enforced by
#       Swift. The value of ``swift_store_multiple_containers_seed`` should be
#       taken into account as well.
#
# Related options:
#     * ``swift_store_multiple_containers_seed``
#     * ``swift_store_multi_tenant``
#     * ``swift_store_create_container_on_put``
#
#  (string value)
#swift_store_container = glance

#
# The size threshold, in MB, after which Glance will start segmenting image
# data.
#
# Swift has an upper limit on the size of a single uploaded object. By default,
# this is 5GB. To upload objects bigger than this limit, objects are segmented
# into multiple smaller objects that are tied together with a manifest file.
# For more detail, refer to
# http://docs.openstack.org/developer/swift/overview_large_objects.html
#
# This configuration option specifies the size threshold over which the Swift
# driver will start segmenting image data into multiple smaller files.
# Currently, the Swift driver only supports creating Dynamic Large Objects.
#
# NOTE: This should be set by taking into account the large object limit
# enforced by the Swift cluster in use.
#
# Possible values:
#     * A positive integer that is less than or equal to the large object
#       limit enforced by the Swift cluster in use.
#
# Related options:
#     * ``swift_store_large_object_chunk_size``
#
#  (integer value)
# Minimum value: 1
#swift_store_large_object_size = 5120

#
# The maximum size, in MB, of the segments when image data is segmented.
#
# When image data is segmented to upload images that are larger than the limit
# enforced by the Swift cluster, image data is broken into segments that are no
# bigger than the size specified by this configuration option.
# Refer to ``swift_store_large_object_size`` for more detail.
#
# For example: if ``swift_store_large_object_size`` is 5GB and
# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
# segmented into 7 segments where the first six segments will be 1GB in size and
# the seventh segment will be 0.2GB.
#
# Possible values:
#     * A positive integer that is less than or equal to the large object
#       limit enforced by the Swift cluster in use.
#
# Related options:
#     * ``swift_store_large_object_size``
#
#  (integer value)
# Minimum value: 1
#swift_store_large_object_chunk_size = 200

#
# Create container, if it doesn't already exist, when uploading image.
#
# At the time of uploading an image, if the corresponding container doesn't
# exist, it will be created provided this configuration option is set to True.
# By default, it won't be created. This behavior applies to both single
# and multiple container modes.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#swift_store_create_container_on_put = false

#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode, which causes Glance images to be
# stored in tenant-specific Swift accounts. If this is disabled, Glance
# stores all images in its own account. More details about the multi-tenant
# store can be found at
# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#swift_store_multi_tenant = false

#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more
# containers. When set to 0, all images will be stored in one single
# container. When set to an integer value between 1 and 32, multiple
# containers will be used to store images. This configuration option
# determines how many containers are created. The total number of
# containers used is equal to 16^N, where N is the value of this option,
# so if it is set to 2, then 16^2 = 256 containers will be used to store
# images.
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
# multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
#     * A non-negative integer less than or equal to 32
#
# Related options:
#     * ``swift_store_container``
#     * ``swift_store_multi_tenant``
#     * ``swift_store_create_container_on_put``
#
#  (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0

#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
#     * A comma separated list of strings representing UUIDs of Keystone
#       projects/tenants
#
# Related options:
#     * None
#
#  (list value)
#swift_store_admin_tenants =

#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
#     * True
#     * False
#
# Related Options:
#     * None
#
#  (boolean value)
#swift_store_ssl_compression = true

#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download). When set to a positive
# integer value, ``swift_store_retry_get_count`` ensures that the
# download is attempted this many more times upon a download failure
# before sending an error message.
#
# Possible values:
#     * Zero
#     * Positive integer value
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#swift_store_retry_get_count = 0

#
# Time in seconds defining the size of the window in which a new
# token may be requested before the current token is due to expire.
#
# Typically, the Swift storage driver fetches a new token upon the
# expiration of the current token to ensure continued access to
# Swift. However, some Swift transactions (like uploading image
# segments) may not recover well if the token expires on the fly.
#
# Hence, by fetching a new token before the current token expires,
# we make sure that the token is not expired, or close to expiry,
# when a transaction is attempted. By default, the Swift storage
# driver requests a new token 60 seconds or less before the
# current token's expiration.
#
# Possible values:
#     * Zero
#     * Positive integer value
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#swift_store_expire_soon_interval = 60

#
# Use trusts for multi-tenant Swift store.
#
# This option instructs the Swift store to create a trust for each
# add/get request when the multi-tenant store is in use. Using trusts
# allows the Swift store to avoid problems that can be caused by an
# authentication token expiring during the upload or download of data.
#
# By default, ``swift_store_use_trusts`` is set to ``True`` (use of
# trusts is enabled). If set to ``False``, a user token is used for
# the Swift connection instead, eliminating the overhead of trust
# creation.
#
# NOTE: This option is considered only when
# ``swift_store_multi_tenant`` is set to ``True``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * swift_store_multi_tenant
#
#  (boolean value)
#swift_store_use_trusts = true

#
# Reference to default Swift account/backing store parameters.
#
# Provide a string value representing a reference to the default set
# of parameters required for using a Swift account/backing store for
# image storage. The default reference value for this configuration
# option is 'ref1'. This configuration option dereferences the
# parameters and facilitates image storage in the Swift storage backend
# every time a new image is added.
#
# Possible values:
#     * A valid string value
#
# Related options:
#     * None
#
#  (string value)
#default_swift_reference = ref1

# DEPRECATED: Version of the authentication service to use. Valid versions are 2
# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_version' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_version = 2

# DEPRECATED: The address where the Swift authentication service is listening.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_address' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_address = <None>

# DEPRECATED: The user to authenticate against the Swift authentication service.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'user' in the Swift back-end configuration file is set instead.
#swift_store_user = <None>

# DEPRECATED: Auth key for the user authenticating against the Swift
# authentication service. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'key' in the Swift back-end configuration file is used
# to set the authentication key instead.
#swift_store_key = <None>

#
# Absolute path to the file containing the swift account(s)
# configurations.
#
# Include a string value representing the path to a configuration
# file that has references for each of the configured Swift
# account(s)/backing stores. By default, no file path is specified
# and customized Swift referencing is disabled. Configuring this
# option is highly recommended when using the Swift storage backend
# for image storage, as it avoids storing credentials in the database.
#
# Possible values:
#     * String value representing an absolute path on the glance-api
#       node
#
# Related options:
#     * None
#
#  (string value)
#swift_store_config_file = <None>
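#
# For example (hypothetical values), with ``default_swift_reference``
# set to ``ref1``, the referenced file might contain:
#
#     [ref1]
#     auth_version = 3
#     auth_address = http://controller/identity/v3
#     user = service:glance
#     key = GLANCE_PASS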

#
# Address of the ESX/ESXi or vCenter Server target system.
#
# This configuration option sets the address of the ESX/ESXi or vCenter
# Server target system. This option is required when using the VMware
# storage backend. The address can contain an IP address (127.0.0.1) or
# a DNS name (www.my-domain.com).
#
# Possible Values:
#     * A valid IPv4 or IPv6 address
#     * A valid DNS name
#
# Related options:
#     * vmware_server_username
#     * vmware_server_password
#
#  (string value)
#vmware_server_host = 127.0.0.1

#
# Server username.
#
# This configuration option takes the username for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
#     * Any string that is the username for a user with appropriate
#       privileges
#
# Related options:
#     * vmware_server_host
#     * vmware_server_password
#
#  (string value)
#vmware_server_username = root

#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
#     * Any string that is a password corresponding to the username
#       specified using the "vmware_server_username" option
#
# Related options:
#     * vmware_server_host
#     * vmware_server_username
#
#  (string value)
#vmware_server_password = vmware
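#
# Example (illustrative only): taken together, the three options above
# might be set as follows. The host name and credentials are
# placeholders for your environment:
#
#     vmware_server_host = vcenter.example.com
#     vmware_server_username = administrator@vsphere.local
#     vmware_server_password = VMWARE_PASSWORD
#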

#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10

#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option takes in the sleep time in seconds for polling an
# on-going async task as part of the VMWare ESX/VC server API call.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5

#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
#     * Any string that is a valid path to a directory
#
# Related options:
#     * None
#
#  (string value)
#vmware_store_image_dir = /openstack_glance

#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will then be verified using
# the file specified using the "vmware_ca_file" option.
#
# Possible Values:
#     * True
#     * False
#
# Related options:
#     * vmware_ca_file
#
#  (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false

#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
#
# Possible Values:
#     * Any string that is a valid absolute path to a CA file
#
# Related options:
#     * vmware_insecure
#
#  (string value)
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt

#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMWare store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the directory will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
#     * Any string of the format:
#       <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
#    * None
#
#  (multi valued)
#vmware_datastores =
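#
# Example (illustrative only): two datastores in a datacenter named
# "dc1". "datastore1" carries weight 100 and is preferred; "datastore2"
# has no weight, so it is treated as zero and considered last:
#
#     vmware_datastores = dc1:datastore1:100
#     vmware_datastores = dc1:datastore2
#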


[image_format]

#
# From glance.api
#

# Supported values for the 'container_format' image attribute (list value)
# Deprecated group/name - [DEFAULT]/container_formats
#container_formats = ami,ari,aki,bare,ovf,ova,docker

# Supported values for the 'disk_format' image attribute (list value)
# Deprecated group/name - [DEFAULT]/disk_formats
#disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
# clients are redirected to this endpoint to authenticate. Although this
# endpoint should ideally be unversioned, client support in the wild varies.
# If you're using a versioned v2 endpoint here, then this should *not* be the
# same endpoint the service user utilizes for validating tokens, because normal
# end users may not be able to reach that endpoint. (string value)
#auth_uri = <None>

# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>

# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>

# How many times to retry reconnecting when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3

# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>

# Required if identity server requires client certificate (string value)
#certfile = <None>

# Required if identity server requires client certificate (string value)
#keyfile = <None>

# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name = <None>

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>

# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>

# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set to
# -1 to disable caching completely. (integer value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is retrieved from
# the Identity service (in seconds). A high number of revocation events combined
# with a low cache duration may significantly reduce performance. Only valid for
# PKI tokens. (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None

# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>

# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every memcached server.
# (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding; "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it
# if not; "strict", like "permissive" but the token is rejected if the bind
# type is unknown; "required", meaning any form of token binding is needed to
# be allowed; or the name of a binding method that must be present in tokens.
# (string value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
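#
# Example (illustrative only): a typical password-plugin configuration.
# The options after "auth_type" (auth_url, username, and so on) are
# loaded by the "password" auth plugin and are not part of the option
# list above; all values shown are placeholders:
#
#     [keystone_authtoken]
#     auth_uri = http://controller:5000
#     auth_type = password
#     auth_url = http://controller:35357
#     project_domain_name = Default
#     user_domain_name = Default
#     project_name = service
#     username = glance
#     password = GLANCE_PASS
#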


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000


[oslo_concurrency]

#
# From oslo.concurrency
#

# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false

# Directory to use for lock files. For security, the specified directory should
# only be writable by the user running the processes that need locking. Defaults
# to environment variable OSLO_LOCK_PATH. If external locks are used, a lock
# path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false

# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =

# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>

# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =

# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =

# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The deadline for an rpc reply message delivery. Only used when caller does not
# provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic

# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-robin
# fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are messaging,
# messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
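#
# Example (illustrative only): emit notifications through the
# messagingv2 driver on the default topic, falling back to the RPC
# transport because "transport_url" is left unset:
#
#     driver = messagingv2
#     topics = notifications
#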


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
# be used. This option may not be available in future versions. (string value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than one
# RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false

# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue. If
# you just want to make sure that all queues (except those with auto-generated
# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically deleted.
# The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows unlimited
# messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold the heartbeat is
# checked. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max = <None>

# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>

# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3

# Enable SSL (boolean value)
#ssl = <None>

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>

# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value)
#tcp_user_timeout = 0.25

# Set delay for reconnection to some host which has a connection error
# (floating point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single

# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no recycling.
# Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Maximum number of unacknowledged messages which RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Maximum number of unacknowledged messages which RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100

# Maximum number of unacknowledged messages which RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending reply.
# -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending reply.
# (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual number of retry attempts is
# not 0, the RPC request could be processed more than one time (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an Ethernet interface, or an
# IP address. The "host" option should point or resolve to this address.
# (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about existing target
# (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimum port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximum port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means
# no queue is kept when the server side disconnects. False means the
# queue and messages are kept even if the server is disconnected; when
# the server reappears, all accumulated messages are sent to it. (boolean
# value)
#zmq_immediate = false
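Taken together, a handful of the ZeroMQ options above can be set in one place. A minimal illustrative fragment follows (assuming the ``[oslo_messaging_zmq]`` section used by oslo.messaging; the hostname and values are examples, not recommendations):

```ini
[oslo_messaging_zmq]
# Example only: name this node explicitly and widen the random port
# range; the remaining options keep their defaults described above.
rpc_zmq_host = controller
rpc_zmq_min_port = 49153
rpc_zmq_max_port = 65535
rpc_zmq_serialization = json
```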


[oslo_middleware]

#
# From oslo.middleware.http_proxy_to_wsgi
#

# Whether the application is behind a proxy. This determines whether the
# middleware should parse the proxy headers. (boolean value)
#enable_proxy_headers_parsing = false


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched.  Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
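For context, the file named by ``policy_file`` maps rule names to policy checks. A minimal illustrative ``policy.json`` is shown below (the rule names vary by service and are placeholders here):

```json
{
    "default": "role:admin",
    "context_is_admin": "role:admin"
}
```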


[paste_deploy]

#
# From glance.api
#

#
# Deployment flavor to use in the server application pipeline.
#
# Provide a string value representing the appropriate deployment
# flavor used in the server application pipeline. This is typically
# the partial name of a pipeline in the paste configuration file with
# the service name removed.
#
# For example, if your paste section name in the paste configuration
# file is [pipeline:glance-api-keystone], set ``flavor`` to
# ``keystone``.
#
# Possible values:
#     * String value representing a partial pipeline name.
#
# Related Options:
#     * config_file
#
#  (string value)
#flavor = keystone

#
# Name of the paste configuration file.
#
# Provide a string value representing the name of the paste
# configuration file to use for configuring pipelines for
# server application deployments.
#
# NOTES:
#     * Provide the name or the path relative to the glance directory
#       for the paste configuration file and not the absolute path.
#     * The sample paste configuration file shipped with Glance need
#       not be edited in most cases as it comes with ready-made
#       pipelines for all common deployment flavors.
#
# If no value is specified for this option, the ``paste.ini`` file
# with the prefix of the corresponding Glance service's configuration
# file name will be searched for in the known configuration
# directories. (For example, if this option is missing from or has no
# value set in ``glance-api.conf``, the service will look for a file
# named ``glance-api-paste.ini``.) If the paste configuration file is
# not found, the service will not start.
#
# Possible values:
#     * A string value representing the name of the paste configuration
#       file.
#
# Related Options:
#     * flavor
#
#  (string value)
#config_file = glance-api-paste.ini
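Putting the two options together, a ``glance-api.conf`` that selects the keystone pipeline from an explicitly named paste file might contain:

```ini
[paste_deploy]
# Selects the [pipeline:glance-api-keystone] section of the paste file.
flavor = keystone
# Name of the paste file, relative to the glance configuration directory.
config_file = glance-api-paste.ini
```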


[profiler]

#
# From glance.api
#

#
# Enables profiling for all services on this node. Default value is False
# (fully disables the profiling feature).
#
# Possible values:
#
# * True: Enables the feature.
# * False: Disables the feature. Profiling cannot be started via this
#   project's operations. If profiling is triggered by another project,
#   this project's part of the trace will be empty.
#  (boolean value)
# Deprecated group/name - [profiler]/profiler_enabled
#enabled = false

#
# Enables SQL requests profiling in services. Default value is False (SQL
# requests won't be traced).
#
# Possible values:
#
# * True: Enables SQL requests profiling. Each SQL query will be part of
#   the trace and can then be analyzed for how much time was spent on it.
# * False: Disables SQL requests profiling. The time spent is only shown
#   at a higher level of operations. Single SQL queries cannot be
#   analyzed this way.
#  (boolean value)
#trace_sqlalchemy = false

#
# Secret key(s) to use for encrypting context data for performance profiling.
# This string value should have the following format: <key1>[,<key2>,...<keyn>],
# where each key is some random string. A user who triggers the profiling via
# the REST API has to set one of these keys in the headers of the REST API call
# to include profiling results of this node for this particular project.
#
# Both the "enabled" flag and the "hmac_keys" config option should be set
# to enable profiling. Also, to generate correct profiling information
# across all services, at least one key needs to be consistent between
# OpenStack projects. This ensures it can be used from the client side to
# generate a trace containing information from all possible resources.
# (string value)
#hmac_keys = SECRET_KEY
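As the help text notes, profiling only works when both options are set. A minimal sketch (``SECRET_KEY`` is a placeholder; use the same random key across services so traces can be correlated):

```ini
[profiler]
enabled = true
hmac_keys = SECRET_KEY
```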

#
# Connection string for a notifier backend. Default value is messaging:// which
# sets the notifier to oslo_messaging.
#
# Examples of possible values:
#
# * messaging://: use oslo_messaging driver for sending notifications.
#  (string value)
#connection_string = messaging://


[store_type_location_strategy]

#
# From glance.api
#

#
# Preference order of storage backends.
#
# Provide a comma separated list of store names in the order in
# which images should be retrieved from storage backends.
# These store names must be registered with the ``stores``
# configuration option.
#
# NOTE: The ``store_type_preference`` configuration option is applied
# only if ``store_type`` is chosen as a value for the
# ``location_strategy`` configuration option. An empty list will not
# change the location order.
#
# Possible values:
#     * Empty list
#     * Comma separated list of registered store names. Legal values are:
#         * file
#         * http
#         * rbd
#         * swift
#         * sheepdog
#         * cinder
#         * vmware
#
# Related options:
#     * location_strategy
#     * stores
#
#  (list value)
#store_type_preference =


[task]

#
# From glance.api
#

# Time in hours for which a task lives after either succeeding or failing
# (integer value)
# Deprecated group/name - [DEFAULT]/task_time_to_live
#task_time_to_live = 48

#
# Task executor to be used to run task scripts.
#
# Provide a string value representing the executor to use for task
# executions. By default, ``TaskFlow`` executor is used.
#
# ``TaskFlow`` helps make task executions easy, consistent, scalable
# and reliable. It also enables creation of lightweight task objects
# and/or functions that are combined together into flows in a
# declarative manner.
#
# Possible values:
#     * taskflow
#
# Related Options:
#     * None
#
#  (string value)
#task_executor = taskflow

#
# Absolute path to the work directory to use for asynchronous
# task operations.
#
# The directory set here will be used to operate over images -
# normally before they are imported in the destination store.
#
# NOTE: When providing a value for ``work_dir``, please make sure
# that enough space is provided for concurrent tasks to run
# efficiently without running out of space.
#
# A rough estimation can be done by multiplying the number of
# ``max_workers`` with an average image size (e.g. 500 MB). The image
# size estimation should be done based on the average size in your
# deployment. Note that depending on the tasks running you may need
# to multiply this number by some factor depending on what the task
# does. For example, you may want to double the available size if
# image conversion is enabled. All this being said, remember these
# are just estimations and you should do them based on the worst
# case scenario and be prepared to act in case they were wrong.
#
# Possible values:
#     * String value representing the absolute path to the working
#       directory
#
# Related Options:
#     * None
#
#  (string value)
#work_dir = /work_dir
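A worked example of the estimation above, with purely illustrative numbers: 10 workers x 500 MB average image x 2 (image conversion enabled) is roughly 10 GB, so the filesystem holding ``work_dir`` should have at least that much free:

```ini
[task]
# Hypothetical path; sized for ~10 GB of concurrent task scratch space.
work_dir = /var/lib/glance/work_dir
```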


[taskflow_executor]

#
# From glance.api
#

#
# Set the taskflow engine mode.
#
# Provide a string type value to set the mode in which the taskflow
# engine would schedule tasks to the workers on the hosts. Based on
# this mode, the engine executes tasks either in single or multiple
# threads. The possible values for this configuration option are:
# ``serial`` and ``parallel``. When set to ``serial``, the engine runs
# all the tasks in a single thread which results in serial execution
# of tasks. Setting this to ``parallel`` makes the engine run tasks in
# multiple threads. This results in parallel execution of tasks.
#
# Possible values:
#     * serial
#     * parallel
#
# Related options:
#     * max_workers
#
#  (string value)
# Allowed values: serial, parallel
#engine_mode = parallel

#
# Set the number of engine executable tasks.
#
# Provide an integer value to limit the number of workers that can be
# instantiated on the hosts. In other words, this number defines the
# number of parallel tasks that can be executed at the same time by
# the taskflow engine. This value can be greater than one when the
# engine mode is set to parallel.
#
# Possible values:
#     * Integer value greater than or equal to 1
#
# Related options:
#     * engine_mode
#
#  (integer value)
# Minimum value: 1
# Deprecated group/name - [task]/eventlet_executor_pool_size
#max_workers = 10

#
# Set the desired image conversion format.
#
# Provide a valid image format to which you want images to be
# converted before they are stored for consumption by Glance.
# Appropriate image format conversions are desirable for specific
# storage backends in order to facilitate efficient handling of
# bandwidth and usage of the storage infrastructure.
#
# By default, ``conversion_format`` is not set and must be set
# explicitly in the configuration file.
#
# The allowed values for this option are ``raw``, ``qcow2`` and
# ``vmdk``. The ``raw`` format is the unstructured disk format and
# should be chosen when RBD or Ceph storage backends are used for
# image storage. ``qcow2`` is a format supported by the QEMU emulator
# that expands dynamically and supports Copy on Write. ``vmdk`` is
# another common disk format, supported by many common virtual machine
# monitors like VMware Workstation.
#
# Possible values:
#     * qcow2
#     * raw
#     * vmdk
#
# Related options:
#     * disk_formats
#
#  (string value)
# Allowed values: qcow2, raw, vmdk
#conversion_format = raw
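Combining the ``[taskflow_executor]`` options above, an illustrative configuration for a Ceph-backed deployment might be:

```ini
[taskflow_executor]
# Run up to 10 tasks concurrently and convert uploaded images to raw,
# a common choice for RBD/Ceph backends per the note above.
engine_mode = parallel
max_workers = 10
conversion_format = raw
```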
glance-api-paste.ini

Configuration for the Image service’s API middleware pipeline is found in the glance-api-paste.ini file.

You should not need to modify this file.

# Use this pipeline for no auth or image caching - DEFAULT
[pipeline:glance-api]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context rootapp

# Use this pipeline for image caching and no auth
[pipeline:glance-api-caching]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context cache rootapp

# Use this pipeline for caching w/ management interface but no auth
[pipeline:glance-api-cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context cache cachemanage rootapp

# Use this pipeline for keystone auth
[pipeline:glance-api-keystone]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context rootapp

# Use this pipeline for keystone auth with image caching
[pipeline:glance-api-keystone+caching]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context cache rootapp

# Use this pipeline for keystone auth with caching and cache management
[pipeline:glance-api-keystone+cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context cache cachemanage rootapp

# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-api-trusted-auth]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler context rootapp

# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user and uses cache management
[pipeline:glance-api-trusted-auth+cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler context cache cachemanage rootapp

[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v1: apiv1app
/v2: apiv2app

[app:apiversions]
paste.app_factory = glance.api.versions:create_resource

[app:apiv1app]
paste.app_factory = glance.api.v1.router:API.factory

[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory

[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = /etc/glance/healthcheck_disable

[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory

[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory

[filter:cachemanage]
paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
delay_auth_decision = true

[filter:gzip]
paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory

[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
hmac_keys = SECRET_KEY  #DEPRECATED
enabled = yes  #DEPRECATED

[filter:cors]
paste.filter_factory =  oslo_middleware.cors:filter_factory
oslo_config_project = glance
oslo_config_program = glance-api

[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware:HTTPProxyToWSGI.factory
glance-cache.conf

The configuration options for an optional local image cache are found in the glance-cache.conf file.

[DEFAULT]

#
# From glance.cache
#

#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * image_property_quota
#
#  (boolean value)
#allow_additional_image_properties = true

#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any
# negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_member_quota = 128

#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
#     * ``allow_additional_image_properties``
#
#  (integer value)
#image_property_quota = 128

#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_tag_quota = 128

#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_location_quota = 10

#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
#     * glance.db.sqlalchemy.api
#     * glance.db.registry.api
#     * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
#     * enable_v2_api
#     * enable_v2_registry
#
#  (string value)
#data_api = glance.db.sqlalchemy.api

#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
#     * The value of this configuration option may not be greater than
#       the value specified by ``api_limit_max``.
#     * Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * api_limit_max
#
#  (integer value)
# Minimum value: 1
#limit_param_default = 25

#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned are governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value in either case can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * limit_param_default
#
#  (integer value)
# Minimum value: 1
#api_limit_max = 1000

#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_multiple_locations
#     * location_strategy
#
#  (boolean value)
#show_image_direct_url = false

# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration option
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_image_direct_url
#     * location_strategy
#
#  (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false

#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
#     * This value should only be increased after careful
#       consideration and must be set less than or equal to
#       8 EiB (9223372036854775808).
#     * This value must be set with careful consideration of the
#       backend storage capacity. Setting this to a very low value
#       may result in a large number of image failures. And, setting
#       this to a very large value may result in faster consumption
#       of storage. Hence, this must be set according to the nature of
#       images created and storage capacity available.
#
# Possible values:
#     * Any positive number less than or equal to 9223372036854775808
#
#  (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776
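To illustrate the default: 1099511627776 bytes is exactly 1024^4, i.e. 1 TiB. An operator capping uploads at 200 GiB, for example, would set:

```ini
[DEFAULT]
# 200 GiB = 200 * 1024^3 = 214748364800 bytes (example value)
image_size_cap = 214748364800
```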

#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
#     * A string that is a valid concatenation of a non-negative integer
#       representing the storage value and an optional string literal
#       representing storage units as mentioned above.
#
# Related options:
#     * None
#
#  (string value)
#user_storage_quota = 0
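For example, to limit each tenant to 50 gigabytes of image storage (note that there is no space between the value and the unit literal):

```ini
[DEFAULT]
user_storage_quota = 50GB
```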

#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
#     * If this option is enabled, then ``enable_v1_registry`` must
#       also be set to ``True`` to enable mandatory usage of Registry
#       service with v1 API.
#
#     * If this option is disabled, then the ``enable_v1_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v2_api``; both the v1 and
#       v2 OpenStack Images APIs can be deployed independently of each
#       other.
#
#     * If deploying only the v2 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_registry
#     * enable_v2_api
#
#  (boolean value)
#enable_v1_api = true

#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
#     * If this option is disabled, then the ``enable_v2_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v1_api``; both the v1 and
#       v2 OpenStack Images APIs can be deployed independently of each
#       other.
#
#     * If deploying only the v1 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_registry
#     * enable_v1_api
#
#  (boolean value)
#enable_v2_api = true

#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
#     * Use of Registry is mandatory in v1 API, so this option must
#       be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
#     * If deploying only the v2 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_api
#
#  (boolean value)
#enable_v1_registry = true

#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
#     * Use of Registry is optional in v2 API, so this option
#       must only be enabled if both ``enable_v2_api`` is set to
#       ``True`` and the ``data_api`` option is set to
#       ``glance.db.registry.api``.
#
#     * If deploying only the v1 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_api
#     * data_api
#
#  (boolean value)
#enable_v2_registry = true

#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
#     * Valid hostname
#     * Valid IP address
#
# Related options:
#     * None
#
#  (string value)
#pydev_worker_debug_host = localhost

#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
#     * A valid port number
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678

#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES key to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
#     * String value representing a valid AES key
#
# Related options:
#     * None
#
#  (string value)
#metadata_encryption_key = <None>

#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
#     * An OpenSSL message digest algorithm identifier
#
# Related options:
#     * None
#
#  (string value)
#digest_algorithm = sha256

#
# The relative path to the sqlite database file used for image cache
# management.
#
# This is a relative path to the sqlite database file that tracks the age
# and usage statistics of the image cache. The path is relative to the
# image cache base directory, specified by the configuration option
# ``image_cache_dir``.
#
# This is a lightweight database with just one table.
#
# Possible values:
#     * A valid relative path to sqlite file database
#
# Related options:
#     * ``image_cache_dir``
#
#  (string value)
#image_cache_sqlite_db = cache.db

#
# The driver to use for image cache management.
#
# This configuration option provides the flexibility to choose between the
# different image-cache drivers available. An image-cache driver is responsible
# for providing the essential functions of image-cache like write images to/read
# images from cache, track age and usage of cached images, provide a list of
# cached images, fetch size of the cache, queue images for caching and clean up
# the cache, etc.
#
# The essential functions of a driver are defined in the base class
# ``glance.image_cache.drivers.base.Driver``. All image-cache drivers (existing
# and prospective) must implement this interface. Currently available drivers
# are ``sqlite`` and ``xattr``. These drivers primarily differ in the way they
# store the information about cached images:
#     * The ``sqlite`` driver uses a sqlite database (which sits on every
#       glance node locally) to track the usage of cached images.
#     * The ``xattr`` driver uses the extended attributes of files to
#       store this information. It also requires a filesystem that sets
#       ``atime`` on the files when accessed.
#
# Possible values:
#     * sqlite
#     * xattr
#
# Related options:
#     * None
#
#  (string value)
# Allowed values: sqlite, xattr
#image_cache_driver = sqlite

#
# The upper limit on cache size, in bytes, after which the cache-pruner cleans
# up the image cache.
#
# NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a
# hard limit beyond which the image cache would never grow. In fact, depending
# on how often the cache-pruner runs and how quickly the cache fills, the image
# cache can far exceed the size specified here very easily. Hence, care must be
# taken to appropriately schedule the cache-pruner and in setting this limit.
#
# Glance caches an image when it is downloaded. Consequently, the size of the
# image cache grows over time as the number of downloads increases. To keep the
# cache size from becoming unmanageable, it is recommended to run the
# cache-pruner as a periodic task. When the cache pruner is kicked off, it
# compares the current size of image cache and triggers a cleanup if the image
# cache grew beyond the size specified here. After the cleanup, the size of
# cache is less than or equal to size specified here.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#image_cache_max_size = 10737418240
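The default of 10737418240 bytes is 10 GiB (10 * 1024^3). To let the cache grow to roughly 50 GiB before the cache-pruner cleans it up, for example:

```ini
[DEFAULT]
# 50 GiB = 50 * 1024^3 = 53687091200 bytes
image_cache_max_size = 53687091200
```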

#
# The amount of time, in seconds, an incomplete image remains in the cache.
#
# Incomplete images are images for which download is in progress. Please see the
# description of configuration option ``image_cache_dir`` for more detail.
# Sometimes, due to various reasons, it is possible the download may hang and
# the incompletely downloaded image remains in the ``incomplete`` directory.
# This configuration option sets a time limit on how long the incomplete images
# should remain in the ``incomplete`` directory before they are cleaned up.
# Once an incomplete image spends more time than is specified here, it'll be
# removed by cache-cleaner on its next run.
#
# It is recommended to run cache-cleaner as a periodic task on the Glance API
# nodes to keep the incomplete images from occupying disk space.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#image_cache_stall_time = 86400

#
# Base directory for image cache.
#
# This is the location where image data is cached and served out of. All cached
# images are stored directly under this directory. This directory also contains
# three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``.
#
# The ``incomplete`` subdirectory is the staging area for downloading images. An
# image is first downloaded to this directory. When the image download is
# successful it is moved to the base directory. However, if the download fails,
# the partially downloaded image file is moved to the ``invalid`` subdirectory.
#
# The ``queue`` subdirectory is used for queuing images for download. This is
# used primarily by the cache-prefetcher, which can be scheduled as a periodic
# task like cache-pruner and cache-cleaner, to cache images ahead of their
# usage.
# Upon receiving the request to cache an image, Glance touches a file in the
# ``queue`` directory with the image id as the file name. The cache-prefetcher,
# when running, polls for the files in ``queue`` directory and starts
# downloading them in the order they were created. When the download is
# successful, the zero-sized file is deleted from the ``queue`` directory.
# If the download fails, the zero-sized file remains and it'll be retried the
# next time cache-prefetcher runs.
#
# Possible values:
#     * A valid path
#
# Related options:
#     * ``image_cache_sqlite_db``
#
#  (string value)
#image_cache_dir = <None>
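#
# NOTE: The following is an illustrative example only; the directory path is a
# placeholder, not a recommended value. A deployment that caches images under
# a dedicated directory with the default sqlite driver, a 10 GiB cache limit,
# and a 48-hour stall time might set:
#
#     image_cache_dir = /var/lib/glance/image-cache
#     image_cache_driver = sqlite
#     image_cache_max_size = 10737418240
#     image_cache_stall_time = 172800
#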

#
# Address the registry server is hosted on.
#
# Possible values:
#     * A valid IP or hostname
#
# Related options:
#     * None
#
#  (string value)
#registry_host = 0.0.0.0

#
# Port the registry server is listening on.
#
# Possible values:
#     * A valid port number
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#registry_port = 9191

#
# Protocol to use for communication with the registry server.
#
# Provide a string value representing the protocol to use for
# communication with the registry server. By default, this option is
# set to ``http`` and the connection is not secure.
#
# This option can be set to ``https`` to establish a secure connection
# to the registry server. In this case, provide a key to use for the
# SSL connection using the ``registry_client_key_file`` option. Also
# include the CA file and cert file using the options
# ``registry_client_ca_file`` and ``registry_client_cert_file``
# respectively.
#
# Possible values:
#     * http
#     * https
#
# Related options:
#     * registry_client_key_file
#     * registry_client_cert_file
#     * registry_client_ca_file
#
#  (string value)
# Allowed values: http, https
#registry_client_protocol = http
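#
# NOTE: The following is an illustrative example only; the file paths are
# placeholders. To establish a secure connection to the registry server as
# described above, a deployment might set:
#
#     registry_client_protocol = https
#     registry_client_key_file = /etc/glance/ssl/registry-key.pem
#     registry_client_cert_file = /etc/glance/ssl/registry-cert.pem
#     registry_client_ca_file = /etc/glance/ssl/ca.pem
#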

#
# Absolute path to the private key file.
#
# Provide a string value representing a valid absolute path to the
# private key file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
# environment variable may be set to a filepath of the key file.
#
# Possible values:
#     * String value representing a valid absolute path to the key
#       file.
#
# Related options:
#     * registry_client_protocol
#
#  (string value)
#registry_client_key_file = /etc/ssl/key/key-file.pem

#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
# environment variable may be set to a filepath of the certificate
# file.
#
# Possible values:
#     * String value representing a valid absolute path to the
#       certificate file.
#
# Related options:
#     * registry_client_protocol
#
#  (string value)
#registry_client_cert_file = /etc/ssl/certs/file.crt

#
# Absolute path to the Certificate Authority file.
#
# Provide a string value representing a valid absolute path to the
# certificate authority file to use for establishing a secure
# connection to the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
# environment variable may be set to a filepath of the CA file.
# This option is ignored if the ``registry_client_insecure`` option
# is set to ``True``.
#
# Possible values:
#     * String value representing a valid absolute path to the CA
#       file.
#
# Related options:
#     * registry_client_protocol
#     * registry_client_insecure
#
#  (string value)
#registry_client_ca_file = /etc/ssl/cafile/file.ca

#
# Set verification of the registry server certificate.
#
# Provide a boolean value to determine whether or not to validate
# SSL connections to the registry server. By default, this option
# is set to ``False`` and the SSL connections are validated.
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying --insecure on the command line
# using glanceclient for the API.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * registry_client_protocol
#     * registry_client_ca_file
#
#  (boolean value)
#registry_client_insecure = false

#
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never timeout.
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#registry_client_timeout = 600

# DEPRECATED: Whether to pass through the user token when making requests to the
# registry. To prevent failures with token expiration during big files upload,
# it is recommended to set this parameter to False. If "use_user_token" is not
# in
# effect, then admin credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#use_user_token = true

# DEPRECATED: The administrator's user name. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_user = <None>

# DEPRECATED: The administrator's password. If "use_user_token" is not in effect,
# then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_password = <None>

# DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
# not in effect, then admin tenant name can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_tenant_name = <None>

# DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
# effect and using keystone auth, then URL of keystone can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_url = <None>

# DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
# in effect, then auth strategy can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_strategy = noauth

# DEPRECATED: The region for the authentication service. If "use_user_token" is
# not in effect and using keystone auth, then region name can be specified.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_region = <None>

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
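#
# NOTE: The following is an illustrative example only; the paths are
# placeholders. To write logs to a dedicated file with debug output enabled, a
# deployment might set:
#
#     debug = true
#     log_dir = /var/log/glance
#     log_file = api.log
#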

# Use a logging handler designed to watch the file system. When the log file is
# moved or removed, this handler opens a new log file at the specified path
# instantaneously. This only makes sense if the log_file option is specified
# and the platform is Linux. This option is ignored if log_config_append is
# set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[glance_store]

#
# From glance.store
#

#
# List of enabled Glance stores.
#
# Register the storage backends to use for storing disk images
# as a comma separated list. The default stores enabled for
# storing disk images with Glance are ``file`` and ``http``.
#
# Possible values:
#     * A comma separated list that could include:
#         * file
#         * http
#         * swift
#         * rbd
#         * sheepdog
#         * cinder
#         * vmware
#
# Related Options:
#     * default_store
#
#  (list value)
#stores = file,http

#
# The default scheme to use for storing images.
#
# Provide a string value representing the default scheme to use for
# storing images. If not set, Glance uses ``file`` as the default
# scheme to store images with the ``file`` store.
#
# NOTE: The value given for this configuration option must be a valid
# scheme for a store registered with the ``stores`` configuration
# option.
#
# Possible values:
#     * file
#     * filesystem
#     * http
#     * https
#     * swift
#     * swift+http
#     * swift+https
#     * swift+config
#     * rbd
#     * sheepdog
#     * cinder
#     * vsphere
#
# Related Options:
#     * stores
#
#  (string value)
# Allowed values: file, filesystem, http, https, swift, swift+http, swift+https, swift+config, rbd, sheepdog, cinder, vsphere
#default_store = file
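#
# NOTE: The following is an illustrative example only. A deployment that also
# registers the RBD backend and stores new images in it by default might set:
#
#     stores = file,http,rbd
#     default_store = rbd
#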

#
# Minimum interval in seconds to execute updating dynamic storage
# capabilities based on current backend status.
#
# Provide an integer value representing time in seconds to set the
# minimum interval before an update of dynamic storage capabilities
# for a storage backend can be attempted. Setting
# ``store_capabilities_update_min_interval`` does not mean updates
# occur periodically based on the set interval. Rather, the update
# is performed only after this interval has elapsed, and only when an
# operation of the store is triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo
# /store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0

#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to lookup cinder endpoint from the service
# catalog in the current context. ``cinder_os_region_name``, if set, is taken
# into consideration to fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
#     * A string of the following form:
#       ``<service_type>:<service_name>:<endpoint_type>``
#       At least ``service_type`` and ``endpoint_type`` should be specified.
#       ``service_name`` can be omitted.
#
# Related options:
#     * cinder_os_region_name
#     * cinder_endpoint_template
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#
#  (string value)
#cinder_catalog_info = volumev2::publicURL

#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate cinder endpoint,
# instead of looking up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will be ignored.
#
# Possible values:
#     * URL template string for cinder endpoint, where ``%%(tenant)s`` is
#       replaced with the current tenant (project) name.
#       For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#     * cinder_catalog_info
#
#  (string value)
#cinder_endpoint_template = <None>

#
# Region name to lookup cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for determining the
# endpoint. If set, the lookup for cinder endpoint by this node is filtered to
# the specified region. It is useful when multiple regions are listed in the
# catalog. If this is not set, the endpoint is looked up from every region.
#
# Possible values:
#     * A string that is a valid region name.
#
# Related options:
#     * cinder_catalog_info
#
#  (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name = <None>

#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via HTTPS endpoint. If the endpoint is HTTP, this value is
# ignored.
# This option is ignored if the ``cinder_api_insecure`` option is set to
# ``True``.
#
# Possible values:
#     * Path to a ca certificates file
#
# Related options:
#     * cinder_api_insecure
#
#  (string value)
#cinder_ca_certificates_file = <None>

#
# Number of cinderclient retries on failed http calls.
#
# When a call failed by any errors, cinderclient will retry the call up to the
# specified times after sleeping a few seconds.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#cinder_http_retries = 3

#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the glance node to
# read/write the volume data, the volume's state is changed. For example, the
# newly created volume status changes from ``creating`` to ``available`` after
# the creation process is completed. This specifies the maximum time to wait for
# the status change. If a timeout occurs while waiting, or the status is changed
# to an unexpected value (e.g. ``error``), the image creation fails.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300

#
# Allow insecure SSL requests to cinder.
#
# If this option is set to True, the HTTPS connection to the cinder endpoint is
# not verified. If set to False, the connection is verified using the CA
# certificates file specified by the ``cinder_ca_certificates_file`` option, or
# the default CA truststore if that option is not set.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * cinder_ca_certificates_file
#
#  (boolean value)
#cinder_api_insecure = false

#
# The address where the cinder authentication service is listening.
#
# When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If either of these options are not set, the cinder endpoint is looked up
# from the service catalog, and current context's user and project are used.
#
# Possible values:
#     * A valid authentication service address, for example:
#       ``http://openstack.example.org/identity/v2.0``
#
# Related options:
#     * cinder_store_user_name
#     * cinder_store_password
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_auth_address = <None>

#
# User name to authenticate against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
#     * A valid user name
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_password
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_user_name = <None>

#
# Password for the user authenticating against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
#     * A valid password for the user specified by ``cinder_store_user_name``
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_password = <None>

#
# Project name where the image volume is stored in cinder.
#
# If this configuration option is not set, the project in current context is
# used.
#
# This must be used with all the following related options. If any of these are
# not specified, the project of the current context is used.
#
# Possible values:
#     * A valid project name
#
# Related options:
#     * ``cinder_store_auth_address``
#     * ``cinder_store_user_name``
#     * ``cinder_store_password``
#
#  (string value)
#cinder_store_project_name = <None>
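#
# NOTE: The following is an illustrative example only; the address,
# credentials, and project name are placeholders. To hide image volumes from
# users by storing them under a dedicated service project, a deployment might
# set:
#
#     cinder_store_auth_address = http://openstack.example.org/identity/v2.0
#     cinder_store_user_name = glance
#     cinder_store_password = GLANCE_PASS
#     cinder_store_project_name = service
#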

#
# Path to the rootwrap configuration file to use for running commands as root.
#
# The cinder store requires root privileges to operate the image volumes (for
# connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
# The configuration file should allow the required commands by cinder store and
# os-brick library.
#
# Possible values:
#     * Path to the rootwrap config file
#
# Related options:
#     * None
#
#  (string value)
#rootwrap_config = /etc/glance/rootwrap.conf

#
# Directory to which the filesystem backend store writes images.
#
# Upon start up, Glance creates the directory if it doesn't already
# exist and verifies write access for the user under which
# ``glance-api`` runs. If write access isn't available, a
# ``BadStoreConfiguration`` exception is raised and the filesystem
# store may not be available for adding new images.
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
#     * A valid path to a directory
#
# Related options:
#     * ``filesystem_store_datadirs``
#     * ``filesystem_store_file_perm``
#
#  (string value)
#filesystem_store_datadir = /var/lib/glance/images

#
# List of directories and their priorities to which the filesystem
# backend store writes images.
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. Priority is an integer that is concatenated to the
# directory path with a colon where a higher value indicates higher
# priority. When two directories have the same priority, the directory
# with most free space is used. When no priority is specified, it
# defaults to zero.
#
# More information on configuring filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
#     * List of strings of the following form:
#         * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
#     * ``filesystem_store_datadir``
#     * ``filesystem_store_file_perm``
#
#  (multi valued)
#filesystem_store_datadirs =
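#
# NOTE: The following is an illustrative example only; the paths are
# placeholders. Since this is a multi valued option, the key is repeated once
# per directory. To spread images across two directories, preferring the
# second one (priority 200 over 100), a deployment might set:
#
#     filesystem_store_datadirs = /var/glance/store:100
#     filesystem_store_datadirs = /var/glance/store2:200
#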

#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
#     * A valid path to the store metadata file
#
# Related options:
#     * None
#
#  (string value)
#filesystem_store_metadata_file = <None>

#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are to be
# given access can be made members of the group that owns the created files.
# Assigning a value less than or equal to zero for this configuration option
# signifies that no changes are made to the default permissions. This value
# will be decoded as an octal digit.
#
# For more information, please refer the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
#     * A valid file access permission
#     * Zero
#     * Any negative integer
#
# Related options:
#     * None
#
#  (integer value)
#filesystem_store_file_perm = 0
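#
# NOTE: The following is an illustrative example only. Because the value is
# decoded as octal, 640 yields rw-r----- permissions, making image files
# readable by members of the owning group (e.g. a group that includes the
# nova service user):
#
#     filesystem_store_file_perm = 640
#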

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
#     * A valid path to a CA file
#
# Related options:
#     * https_insecure
#
#  (string value)
#https_ca_certificates_file = <None>

#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set.
# The remote server certificate will then be verified using the file
# specified by the ``https_ca_certificates_file`` option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * https_ca_certificates_file
#
#  (boolean value)
#https_insecure = true

#
# The http/https proxy information to be used to connect to the remote
# server.
#
# This configuration option specifies the http/https proxy information
# that should be used to connect to the remote server. The proxy
# information should be a key value pair of the scheme and proxy, for
# example, http:10.0.0.1:3128. You can also specify proxies for multiple
# schemes by separating the key value pairs with a comma, for example,
# http:10.0.0.1:3128, https:10.0.0.1:1080.
#
# Possible values:
#     * A comma separated list of scheme:proxy pairs as described above
#
# Related options:
#     * None
#
#  (dict value)
#http_proxy_information =
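#
# NOTE: The following is an illustrative example only; the proxy addresses are
# placeholders taken from the description above. To reach remote image servers
# through per-scheme proxies, a deployment might set:
#
#     http_proxy_information = http:10.0.0.1:3128,https:10.0.0.1:1080
#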

#
# Size, in megabytes, to chunk RADOS images into.
#
# Provide an integer value representing the size in megabytes to chunk
# Glance images into. The default chunk size is 8 megabytes. For optimal
# performance, the value should be a power of two.
#
# When Ceph's RBD object storage system is used as the storage backend
# for storing Glance images, the images are chunked into objects of the
# size set using this option. These chunked objects are then stored
# across the distributed block data store to use for Glance.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#rbd_store_chunk_size = 8

#
# RADOS pool in which images are stored.
#
# When RBD is used as the storage backend for storing Glance images, the
# images are stored by means of logical grouping of the objects (chunks
# of images) into a ``pool``. Each pool is defined with the number of
# placement groups it can contain. The default pool that is used is
# 'images'.
#
# More information on the RBD storage backend can be found here:
# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
#
# Possible Values:
#     * A valid pool name
#
# Related options:
#     * None
#
#  (string value)
#rbd_store_pool = images

#
# RADOS user to authenticate as.
#
# This configuration option takes in the RADOS user to authenticate as.
# This is only needed when RADOS authentication is enabled and is
# applicable only if the user is using Cephx authentication. If the
# value for this option is not set by the user or is set to None, a
# default value will be chosen based on the client section in the
# configuration file referenced by ``rbd_store_ceph_conf``.
#
# Possible Values:
#     * A valid RADOS user
#
# Related options:
#     * rbd_store_ceph_conf
#
#  (string value)
#rbd_store_user = <None>

#
# Ceph configuration file path.
#
# This configuration option takes in the path to the Ceph configuration
# file to be used. If the value for this option is not set by the user
# or is set to None, librados will locate the default configuration file
# which is located at /etc/ceph/ceph.conf. If using Cephx
# authentication, this file should include a reference to the right
# keyring in a client.<USER> section.
#
# Possible Values:
#     * A valid path to a configuration file
#
# Related options:
#     * rbd_store_user
#
#  (string value)
#rbd_store_ceph_conf = /etc/ceph/ceph.conf

#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option sets the timeout value, in seconds, used
# when connecting to the Ceph cluster, that is, the time to wait for
# glance-api before closing the connection. This prevents glance-api
# hangups during the connection to RBD. If the value for this option
# is set to less than or equal to 0, no timeout is set and the default
# librados value is used.
#
# Possible Values:
#     * Any integer value
#
# Related options:
#     * None
#
#  (integer value)
#rados_connect_timeout = 0
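
# Illustrative example only: a minimal RBD backend setup might combine the
# options above as follows. The pool name 'images' and the Cephx user
# 'glance' are assumptions; substitute the values defined in your own Ceph
# deployment.
#
#     stores = rbd
#     default_store = rbd
#     rbd_store_pool = images
#     rbd_store_user = glance
#     rbd_store_ceph_conf = /etc/ceph/ceph.conf
#     rbd_store_chunk_size = 8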

#
# Chunk size for images to be stored in Sheepdog data store.
#
# Provide an integer value representing the size in mebibytes
# (1 MiB = 1048576 bytes) to chunk Glance images into. The default
# chunk size is 64 mebibytes.
#
# When using Sheepdog distributed storage system, the images are
# chunked into objects of this size and then stored across the
# distributed data store for use by Glance.
#
# Chunk sizes, if a power of two, help avoid fragmentation and
# enable improved performance.
#
# Possible values:
#     * Positive integer value representing size in mebibytes.
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 1
#sheepdog_store_chunk_size = 64

#
# Port number on which the sheep daemon will listen.
#
# Provide an integer value representing a valid port number on
# which you want the Sheepdog daemon to listen. The default
# port is 7000.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages it receives on
# the port number set using ``sheepdog_store_port`` option to store
# chunks of Glance images.
#
# Possible values:
#     * A valid port number (0 to 65535)
#
# Related Options:
#     * sheepdog_store_address
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000

#
# Address to bind the Sheepdog daemon to.
#
# Provide a string value representing the address to bind the
# Sheepdog daemon to. The default address set for the 'sheep'
# is 127.0.0.1.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages directed to the
# address set using ``sheepdog_store_address`` option to store
# chunks of Glance images.
#
# Possible values:
#     * A valid IPv4 address
#     * A valid IPv6 address
#     * A valid hostname
#
# Related Options:
#     * sheepdog_store_port
#
#  (string value)
#sheepdog_store_address = 127.0.0.1
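
# Illustrative example only: the Sheepdog options above, set explicitly to
# their defaults. Adjust the address and port to match where your 'sheep'
# daemon actually listens.
#
#     sheepdog_store_address = 127.0.0.1
#     sheepdog_store_port = 7000
#     sheepdog_store_chunk_size = 64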

#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * swift_store_cacert
#
#  (boolean value)
#swift_store_auth_insecure = false

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
#     * A valid path to a CA file
#
# Related options:
#     * swift_store_auth_insecure
#
#  (string value)
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt

#
# The region of Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
#
# When Glance uses Swift as the storage backend to store images
# for a specific tenant that has multiple endpoints, setting a
# Swift region with ``swift_store_region`` allows Glance to connect
# to Swift in the specified region rather than relying on a single
# default region.
#
# This option can be configured for both single-tenant and
# multi-tenant storage.
#
# NOTE: Setting the region with ``swift_store_region`` is
# tenant-specific and is necessary ``only if`` the tenant has
# multiple endpoints across different regions.
#
# Possible values:
#     * A string value representing a valid Swift region.
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_region = RegionTwo

#
# The URL endpoint to use for Swift backend storage.
#
# Provide a string value representing the URL endpoint to use for
# storing Glance images in Swift store. By default, an endpoint
# is not set and the storage URL returned by ``auth`` is used.
# Setting an endpoint with ``swift_store_endpoint`` overrides the
# storage URL and is used for Glance image storage.
#
# NOTE: The URL should include the path up to, but excluding the
# container. The location of an object is obtained by appending
# the container and object to the configured URL.
#
# Possible values:
#     * String value representing a valid URL path up to a Swift container
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name

#
# Endpoint Type of Swift service.
#
# This string value indicates the endpoint type to use to fetch the
# Swift endpoint. The endpoint type determines the actions the user will
# be allowed to perform, for instance, reading and writing to the Store.
# This setting is only used if swift_store_auth_version is greater than
# 1.
#
# Possible values:
#     * publicURL
#     * adminURL
#     * internalURL
#
# Related options:
#     * swift_store_endpoint
#
#  (string value)
# Allowed values: publicURL, adminURL, internalURL
#swift_store_endpoint_type = publicURL

#
# Type of Swift service to use.
#
# Provide a string value representing the service type to use for
# storing images while using Swift backend storage. The default
# service type is set to ``object-store``.
#
# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
# this configuration option needs to be ``object-store``. If using
# a higher version of Keystone or a different auth scheme, this
# option may be modified.
#
# Possible values:
#     * A string representing a valid service type for Swift storage.
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_service_type = object-store

#
# Name of single container to store images/name prefix for multiple containers
#
# When a single container is being used to store images, this configuration
# option indicates the container within the Glance account to be used for
# storing all images. When multiple containers are used to store images, this
# will be the name prefix for all containers. Usage of single/multiple
# containers can be controlled using the configuration option
# ``swift_store_multiple_containers_seed``.
#
# When using multiple containers, the containers will be named after the value
# set for this configuration option with the first N chars of the image UUID
# as the suffix delimited by an underscore (where N is specified by
# ``swift_store_multiple_containers_seed``).
#
# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
# the container ``glance_fda``. All dashes in the UUID are included when
# creating the container name but do not count toward the character limit, so
# when N=10 the container name would be ``glance_fdae39a1-ba``.
#
# Possible values:
#     * If using single container, this configuration option can be any string
#       that is a valid swift container name in Glance's Swift account
#     * If using multiple containers, this configuration option can be any
#       string as long as it satisfies the container naming rules enforced by
#       Swift. The value of ``swift_store_multiple_containers_seed`` should be
#       taken into account as well.
#
# Related options:
#     * ``swift_store_multiple_containers_seed``
#     * ``swift_store_multi_tenant``
#     * ``swift_store_create_container_on_put``
#
#  (string value)
#swift_store_container = glance

#
# The size threshold, in MB, after which Glance will start segmenting image
# data.
#
# Swift has an upper limit on the size of a single uploaded object. By default,
# this is 5GB. To upload objects bigger than this limit, objects are segmented
# into multiple smaller objects that are tied together with a manifest file.
# For more detail, refer to
# http://docs.openstack.org/developer/swift/overview_large_objects.html
#
# This configuration option specifies the size threshold over which the Swift
# driver will start segmenting image data into multiple smaller files.
# Currently, the Swift driver only supports creating Dynamic Large Objects.
#
# NOTE: This should be set taking into account the large object limit
# enforced by the Swift cluster in use.
#
# Possible values:
#     * A positive integer that is less than or equal to the large object
#       limit enforced by the Swift cluster in use.
#
# Related options:
#     * ``swift_store_large_object_chunk_size``
#
#  (integer value)
# Minimum value: 1
#swift_store_large_object_size = 5120

#
# The maximum size, in MB, of the segments when image data is segmented.
#
# When image data is segmented to upload images that are larger than the limit
# enforced by the Swift cluster, image data is broken into segments that are no
# bigger than the size specified by this configuration option.
# Refer to ``swift_store_large_object_size`` for more detail.
#
# For example: if ``swift_store_large_object_size`` is 5GB and
# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
# segmented into 7 segments where the first six segments will be 1GB in size and
# the seventh segment will be 0.2GB.
#
# Possible values:
#     * A positive integer that is less than or equal to the large object
#       limit enforced by the Swift cluster in use.
#
# Related options:
#     * ``swift_store_large_object_size``
#
#  (integer value)
# Minimum value: 1
#swift_store_large_object_chunk_size = 200
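
# Illustrative example only: the worked example above (a 6.2GB image split
# into six 1GB segments plus one 0.2GB segment) corresponds to settings
# like the following, with sizes expressed in MB:
#
#     swift_store_large_object_size = 5120
#     swift_store_large_object_chunk_size = 1024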

#
# Create container, if it doesn't already exist, when uploading image.
#
# At the time of uploading an image, if the corresponding container doesn't
# exist, it will be created provided this configuration option is set to True.
# By default, it won't be created. This behavior is applicable for both single
# and multiple containers mode.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#swift_store_create_container_on_put = false

#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode which causes Glance images to be stored
# in tenant specific Swift accounts. If this is disabled, Glance stores all
# images in its own account. More details about the multi-tenant store can
# be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#swift_store_multi_tenant = false

#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more
# containers. When set to 0, all images will be stored in a single container.
# When set to an integer value between 1 and 32, multiple containers will be
# used to store images. This configuration option will determine how many
# containers are created. The total number of containers that will be used is
# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
# will be used to store images.
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
# multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
#     * A non-negative integer less than or equal to 32
#
# Related options:
#     * ``swift_store_container``
#     * ``swift_store_multi_tenant``
#     * ``swift_store_create_container_on_put``
#
#  (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0
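
# Illustrative example only: with the settings below, 16^2 = 256 containers
# are used, and an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848
# would be placed in the container ``glance_fd``:
#
#     swift_store_container = glance
#     swift_store_multiple_containers_seed = 2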

#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
#     * A comma separated list of strings representing UUIDs of Keystone
#       projects/tenants
#
# Related options:
#     * None
#
#  (list value)
#swift_store_admin_tenants =

#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
#     * True
#     * False
#
# Related Options:
#     * None
#
#  (boolean value)
#swift_store_ssl_compression = true

#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download). When set to a positive
# integer value, ``swift_store_retry_get_count`` ensures that the
# download is attempted this many more times upon a download failure
# before sending an error message.
#
# Possible values:
#     * Zero
#     * Positive integer value
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#swift_store_retry_get_count = 0

#
# Time in seconds defining the size of the window in which a new
# token may be requested before the current token is due to expire.
#
# Typically, the Swift storage driver fetches a new token upon the
# expiration of the current token to ensure continued access to
# Swift. However, some Swift transactions (like uploading image
# segments) may not recover well if the token expires on the fly.
#
# Hence, by fetching a new token before the current token expires,
# we make sure that the token is neither expired nor close to expiry
# before a transaction is attempted. By default, the Swift storage
# driver requests for a new token 60 seconds or less before the
# current token expiration.
#
# Possible values:
#     * Zero
#     * Positive integer value
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#swift_store_expire_soon_interval = 60

#
# Use trusts for multi-tenant Swift store.
#
# This option instructs the Swift store to create a trust for each
# add/get request when the multi-tenant store is in use. Using trusts
# allows the Swift store to avoid problems that can be caused by an
# authentication token expiring during the upload or download of data.
#
# By default, ``swift_store_use_trusts`` is set to ``True`` (use of
# trusts is enabled). If set to ``False``, a user token is used for
# the Swift connection instead, eliminating the overhead of trust
# creation.
#
# NOTE: This option is considered only when
# ``swift_store_multi_tenant`` is set to ``True``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * swift_store_multi_tenant
#
#  (boolean value)
#swift_store_use_trusts = true

#
# Reference to default Swift account/backing store parameters.
#
# Provide a string value representing a reference to the default set
# of parameters required for using swift account/backing store for
# image storage. The default reference value for this configuration
# option is 'ref1'. This configuration option dereferences the
# parameters and facilitates image storage in the Swift storage
# backend every time a new image is added.
#
# Possible values:
#     * A valid string value
#
# Related options:
#     * None
#
#  (string value)
#default_swift_reference = ref1

# DEPRECATED: Version of the authentication service to use. Valid versions are 2
# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_version' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_version = 2

# DEPRECATED: The address where the Swift authentication service is listening.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_address' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_address = <None>

# DEPRECATED: The user to authenticate against the Swift authentication service.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'user' in the Swift back-end configuration file is set instead.
#swift_store_user = <None>

# DEPRECATED: Auth key for the user authenticating against the Swift
# authentication service. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'key' in the Swift back-end configuration file is used
# to set the authentication key instead.
#swift_store_key = <None>

#
# Absolute path to the file containing the swift account(s)
# configurations.
#
# Include a string value representing the path to a configuration
# file that has references for each of the configured Swift
# account(s)/backing stores. By default, no file path is specified
# and customized Swift referencing is disabled. Configuring this
# option is highly recommended when using the Swift storage backend for
# image storage as it avoids storage of credentials in the database.
#
# Possible values:
#     * String value representing an absolute path on the glance-api
#       node
#
# Related options:
#     * None
#
#  (string value)
#swift_store_config_file = <None>
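
# Illustrative example only: the referenced file contains one section per
# configured Swift account reference (compare ``default_swift_reference``).
# The section name, credentials, and endpoint below are assumptions; use
# the values for your own deployment:
#
#     [ref1]
#     auth_version = 2
#     auth_address = http://controller:5000/v2.0
#     user = service:glance
#     key = GLANCE_SWIFT_PASS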

#
# Address of the ESX/ESXi or vCenter Server target system.
#
# This configuration option sets the address of the ESX/ESXi or vCenter
# Server target system. This option is required when using the VMware
# storage backend. The address can be an IP address (127.0.0.1) or
# a DNS name (www.my-domain.com).
#
# Possible Values:
#     * A valid IPv4 or IPv6 address
#     * A valid DNS name
#
# Related options:
#     * vmware_server_username
#     * vmware_server_password
#
#  (string value)
#vmware_server_host = 127.0.0.1

#
# Server username.
#
# This configuration option takes the username for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
#     * Any string that is the username for a user with appropriate
#       privileges
#
# Related options:
#     * vmware_server_host
#     * vmware_server_password
#
#  (string value)
#vmware_server_username = root

#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
#     * Any string that is a password corresponding to the username
#       specified using the "vmware_server_username" option
#
# Related options:
#     * vmware_server_host
#     * vmware_server_username
#
#  (string value)
#vmware_server_password = vmware

#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10

#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option takes in the sleep time in seconds for polling an
# ongoing async task as part of the VMware ESX/VC server API call.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5

#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
#     * Any string that is a valid path to a directory
#
# Related options:
#     * None
#
#  (string value)
#vmware_store_image_dir = /openstack_glance

#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will then be verified using
# the file specified using the "vmware_ca_file" option.
#
# Possible Values:
#     * True
#     * False
#
# Related options:
#     * vmware_ca_file
#
#  (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false

#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
#
# Possible Values:
#     * Any string that is a valid absolute path to a CA file
#
# Related options:
#     * vmware_insecure
#
#  (string value)
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt

#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMware store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the datastore will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
#     * Any string of the format:
#       <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
#    * None
#
#  (multi valued)
#vmware_datastores =
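
# Illustrative example only: a multi valued option is set by repeating the
# key. The datacenter and datastore names below are assumptions; with these
# settings, 'datastore1' (weight 100) is preferred over 'datastore2'
# (no weight, treated as zero):
#
#     vmware_datastores = dc1:datastore1:100
#     vmware_datastores = dc1:datastore2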


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched.  Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d

glance-manage.conf

Configuration options for the glance database management tool are found in the glance-manage.conf file.

Note

Options set in glance-manage.conf will override options of the same section and name set in glance-registry.conf and glance-api.conf. Similarly, options in glance-api.conf will override options set in glance-registry.conf.

[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When a log file
# is moved or removed, this handler will open a new log file with the
# specified path instantaneously. It makes sense only if the log_file option
# is specified and the Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[database]

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20

#
# From oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false
glance-registry.conf

Configuration for the Image service’s registry, which stores metadata about images, is found in the glance-registry.conf file.

This file must be modified after installation.
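As a quick sanity check after editing, an INI-style configuration file such as this one can be parsed with Python's standard configparser module (a minimal sketch; the file content and path shown are illustrative, not defaults):

```python
import configparser

# Parse an INI-style OpenStack configuration file and read back one option.
# Interpolation is disabled because option values may contain literal '%'
# characters (for example, logging format strings).
parser = configparser.ConfigParser(interpolation=None)

# Illustrative content; in practice you would instead call:
# parser.read('/etc/glance/glance-registry.conf')
sample = """
[DEFAULT]
bind_port = 9191
workers = 4
"""
parser.read_string(sample)

# Read an option back as an integer to confirm the edit took effect.
port = parser.getint('DEFAULT', 'bind_port')
print(port)  # 9191
```

Note that configparser only checks INI syntax; it does not validate option names or value types the way the service itself does at startup.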

[DEFAULT]

#
# From glance.registry
#

#
# Set the image owner to tenant or the authenticated user.
#
# Assign a boolean value to determine the owner of an image. When set to
# True, the owner of the image is the tenant. When set to False, the
# owner of the image will be the authenticated user issuing the request.
# Setting it to False makes the image private to the associated user;
# sharing with other users within the same tenant (or "project") then
# requires explicit image sharing via image membership.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#owner_is_tenant = true

#
# Role used to identify an authenticated user as administrator.
#
# Provide a string value representing a Keystone role to identify an
# administrative user. Users with this role will be granted
# administrative privileges. The default value for this option is
# 'admin'.
#
# Possible values:
#     * A string value which is a valid Keystone role
#
# Related options:
#     * None
#
#  (string value)
#admin_role = admin

#
# Allow limited access to unauthenticated users.
#
# Assign a boolean to determine API access for unauthenticated
# users. When set to False, the API cannot be accessed by
# unauthenticated users. When set to True, unauthenticated users can
# access the API with read-only privileges. This however only applies
# when using ContextMiddleware.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#allow_anonymous_access = false

#
# Limit the request ID length.
#
# Provide an integer value to limit the length of the request ID. The
# default value is 64. Users can change this to any integer value between
# 0 and 16384, keeping in mind that a larger value may flood the logs.
#
# Possible values:
#     * Integer value between 0 and 16384
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#max_request_id_length = 64

#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * image_property_quota
#
#  (boolean value)
#allow_additional_image_properties = true

#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any negative
# value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_member_quota = 128

#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
#     * ``allow_additional_image_properties``
#
#  (integer value)
#image_property_quota = 128

#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_tag_quota = 128

#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_location_quota = 10

#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
#     * glance.db.sqlalchemy.api
#     * glance.db.registry.api
#     * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
#     * enable_v2_api
#     * enable_v2_registry
#
#  (string value)
#data_api = glance.db.sqlalchemy.api

#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
#     * The value of this configuration option may not be greater than
#       the value specified by ``api_limit_max``.
#     * Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * api_limit_max
#
#  (integer value)
# Minimum value: 1
#limit_param_default = 25

#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned are governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value in either case, can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * limit_param_default
#
#  (integer value)
# Minimum value: 1
#api_limit_max = 1000
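# For example, with ``limit_param_default = 25`` and ``api_limit_max = 1000``
# (the defaults), a list request without a ``limit`` parameter returns 25
# results, and a request with ``limit=5000`` is trimmed down to 1000.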

#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_multiple_locations
#     * location_strategy
#
#  (boolean value)
#show_image_direct_url = false

# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration opt
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_image_direct_url
#     * location_strategy
#
#  (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false

#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
#     * This value should only be increased after careful
#       consideration and must be set less than or equal to
#       8 EiB (9223372036854775808).
#     * This value must be set with careful consideration of the
#       backend storage capacity. Setting this to a very low value
#       may result in a large number of image failures, while setting
#       it to a very large value may result in faster consumption of
#       storage. Hence, this must be set according to the nature of
#       the images created and the storage capacity available.
#
# Possible values:
#     * Any positive number less than or equal to 9223372036854775808
#
#  (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776
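# For example, to cap uploads at 10 GiB (an illustrative value, not a
# recommendation):
#image_size_cap = 10737418240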

#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
#     * A string that is a valid concatenation of a non-negative integer
#       representing the storage value and an optional string literal
#       representing storage units as mentioned above.
#
# Related options:
#     * None
#
#  (string value)
#user_storage_quota = 0
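# For example, to limit each tenant to 500 GigaBytes of image storage (an
# illustrative value; note there is no space between the value and the unit):
#user_storage_quota = 500GB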

#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, the Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
#     * If this option is enabled, then ``enable_v1_registry`` must
#       also be set to ``True`` to enable mandatory usage of Registry
#       service with v1 API.
#
#     * If this option is disabled, then the ``enable_v1_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v2_api``, both v1 and v2
#       OpenStack Images API can be deployed independent of each
#       other.
#
#     * If deploying only the v2 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_registry
#     * enable_v2_api
#
#  (boolean value)
#enable_v1_api = true

#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, the Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
#     * If this option is disabled, then the ``enable_v2_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v1_api``, both v1 and v2
#       OpenStack Images API can be deployed independent of each
#       other.
#
#     * If deploying only the v1 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_registry
#     * enable_v1_api
#
#  (boolean value)
#enable_v2_api = true

#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
#     * Use of Registry is mandatory in v1 API, so this option must
#       be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
#     * If deploying only the v2 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_api
#
#  (boolean value)
#enable_v1_registry = true

#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
#     * Use of Registry is optional in v2 API, so this option
#       must only be enabled if both ``enable_v2_api`` is set to
#       ``True`` and the ``data_api`` option is set to
#       ``glance.db.registry.api``.
#
#     * If deploying only the v1 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_api
#     * data_api
#
#  (boolean value)
#enable_v2_registry = true

#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
#     * Valid hostname
#     * Valid IP address
#
# Related options:
#     * None
#
#  (string value)
#pydev_worker_debug_host = localhost

#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
#     * A valid port number
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678

#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES key to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
#     * String value representing a valid AES key
#
# Related options:
#     * None
#
#  (string value)
#metadata_encryption_key = <None>
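# A suitable random 32-character key can be generated with, for example
# (command shown for illustration; it prints 32 hexadecimal characters):
#   openssl rand -hex 16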

#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
#     * An OpenSSL message digest algorithm identifier
#
# Related options:
#     * None
#
#  (string value)
#digest_algorithm = sha256

#
# IP address to bind the glance server to.
#
# Provide an IP address to bind the glance server to. The default
# value is ``0.0.0.0``.
#
# Edit this option to enable the server to listen on one particular
# IP address on the network card. This facilitates selection of a
# particular network interface for the server.
#
# Possible values:
#     * A valid IPv4 address
#     * A valid IPv6 address
#
# Related options:
#     * None
#
#  (string value)
#bind_host = 0.0.0.0

#
# Port number on which the server will listen.
#
# Provide a valid port number to bind the server's socket to. The
# server then listens on this port for incoming requests. The default
# bind_port value for the API server is 9292 and for the registry
# server is 9191.
#
# Possible values:
#     * A valid port number (0 to 65535)
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#bind_port = <None>
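# For example, to bind the registry server explicitly to its conventional
# port (mentioned above):
#bind_port = 9191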

#
# Set the number of incoming connection requests.
#
# Provide a positive integer value to limit the number of requests in
# the backlog queue. The default queue size is 4096.
#
# An incoming connection to a TCP listener socket is queued before a
# connection can be established with the server. Setting the backlog
# for a TCP socket ensures a limited queue size for incoming traffic.
#
# Possible values:
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#backlog = 4096

#
# Set the wait time before a connection recheck.
#
# Provide a positive integer value representing time in seconds which
# is set as the idle wait time before a TCP keep alive packet can be
# sent to the host. The default value is 600 seconds.
#
# Setting ``tcp_keepidle`` helps verify at regular intervals that a
# connection is intact and prevents frequent TCP connection
# reestablishment.
#
# Possible values:
#     * Positive integer value representing time in seconds
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#tcp_keepidle = 600

#
# Absolute path to the CA file.
#
# Provide a string value representing a valid absolute path to
# the Certificate Authority file to use for client authentication.
#
# A CA file typically contains necessary trusted certificates to
# use for the client authentication. This is essential to ensure
# that a secure connection is established to the server via the
# internet.
#
# Possible values:
#     * Valid absolute path to the CA file
#
# Related options:
#     * None
#
#  (string value)
#ca_file = /etc/ssl/cafile

#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file which is required to start the API service
# securely.
#
# A certificate file typically is a public key container and includes
# the server's public key, server name, server information, and the
# CA's signature over these contents. This is required to establish a
# secure connection.
#
# Possible values:
#     * Valid absolute path to the certificate file
#
# Related options:
#     * None
#
#  (string value)
#cert_file = /etc/ssl/certs

#
# Absolute path to a private key file.
#
# Provide a string value representing a valid absolute path to a
# private key file which is required to establish the client-server
# connection.
#
# Possible values:
#     * Absolute path to the private key file
#
# Related options:
#     * None
#
#  (string value)
#key_file = /etc/ssl/key/key-file.pem
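# For testing, a self-signed certificate and key pair can be generated with,
# for example (command and file names shown for illustration only):
#   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
#       -keyout key-file.pem -out cert-file.pem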

# DEPRECATED: The HTTP header used to determine the scheme for the original
# request, even if it was removed by an SSL terminating proxy. Typical value is
# "HTTP_X_FORWARDED_PROTO". (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use the http_proxy_to_wsgi middleware instead.
#secure_proxy_ssl_header = <None>

#
# Number of Glance worker processes to start.
#
# Provide a non-negative integer value to set the number of child
# process workers to service requests. By default, the number of CPUs
# available is set as the value for ``workers``.
#
# Each worker process is made to listen on the port set in the
# configuration file and contains a greenthread pool of size 1000.
#
# NOTE: Setting the number of workers to zero triggers the creation
# of a single API process with a greenthread pool of size 1000.
#
# Possible values:
#     * 0
#     * Positive integer value (typically equal to the number of CPUs)
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#workers = <None>

#
# Maximum line size of message headers.
#
# Provide an integer value representing a length to limit the size of
# message headers. The default value is 16384.
#
# NOTE: ``max_header_line`` may need to be increased when using large
# tokens (typically those generated by the Keystone v3 API with big
# service catalogs). However, keep in mind that larger values for
# ``max_header_line`` may flood the logs.
#
# Setting ``max_header_line`` to 0 sets no limit for the line size of
# message headers.
#
# Possible values:
#     * 0
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#max_header_line = 16384

#
# Set keep alive option for HTTP over TCP.
#
# Provide a boolean value to determine sending of keep alive packets.
# If set to ``False``, the server returns the header
# "Connection: close". If set to ``True``, the server returns a
# "Connection: Keep-Alive" in its responses. This enables retention of
# the same TCP connection for HTTP conversations instead of opening a
# new one with each new request.
#
# This option must be set to ``False`` if the client socket connection
# needs to be closed explicitly after the response is received and
# read successfully by the client.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#http_keepalive = true

#
# Timeout for client connections' socket operations.
#
# Provide a valid integer value representing time in seconds to set
# the period of wait before an incoming connection can be closed. The
# default value is 900 seconds.
#
# The value zero implies wait forever.
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#client_socket_timeout = 900

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler immediately opens a new log file at the
# specified path. It makes sense only if the log_file option is specified and
# the Linux platform is used. This option is ignored if log_config_append is
# set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# ZeroMQ bind address. Should be a wildcard (*), an Ethernet interface, or an
# IP address. The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. A timeout exception is
# raised when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue and
# messages are kept even if the server disconnects; when the server reappears,
# all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false

# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
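# For example, a RabbitMQ transport could be expressed as a single URL instead
# of individual host/port/credential options (the hostname and credentials
# below are illustrative placeholders only):
#transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/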

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack


[database]

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
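# For example, to connect to a MySQL database through the PyMySQL driver (the
# credentials and hostname below are illustrative placeholders only):
#connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone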

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20

#
# From oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
# clients are redirected to this endpoint to authenticate. Although this
# endpoint should  ideally be unversioned, client support in the wild varies.
# If you're using a versioned v2 endpoint here, then this  should *not* be the
# same endpoint the service user utilizes  for validating tokens, because normal
# end users may not be  able to reach that endpoint. (string value)
#auth_uri = <None>

# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>

# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>

# How many times to retry when communicating with the Identity API server.
# (integer value)
#http_request_max_retries = 3

# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>

# Required if identity server requires client certificate (string value)
#certfile = <None>

# Required if identity server requires client certificate (string value)
#keyfile = <None>

# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name = <None>

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>

# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
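# For example, a single memcached instance on its default port (the hostname
# below is an illustrative placeholder):
#memcached_servers = controller:11211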

# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set to
# -1 to disable caching completely. (integer value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is retrieved from
# the Identity service (in seconds). A high number of revocation events combined
# with a low cache duration may significantly reduce performance. Only valid for
# PKI tokens. (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None

# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>

# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every memcached server.
# (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it if
# not. "strict" like "permissive" but if the bind type is unknown the token will
# be rejected. "required" any form of token binding is needed to be allowed.
# Finally the name of a binding method that must be present in tokens. (string
# value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
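# For example, while migrating from md5 to sha256 tokens, both algorithms can
# be listed with the preferred one first (illustrative only):
#hash_algorithms = sha256,md5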

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
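# As a sketch, a service typically authenticates with the password plugin; the
# values below are illustrative placeholders, and the exact option set depends
# on the keystoneauth plugin selected by auth_type:
#auth_type = password
#auth_url = http://controller:35357
#username = service_user
#password = SERVICE_PASS
#project_name = service
#user_domain_name = Default
#project_domain_name = Default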


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a generated
# UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false

# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =

# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>

# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =

# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =

# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The deadline for an rpc reply message delivery. Only used when caller does not
# provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic

# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-robin
# fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are messaging,
# messagingv2, routing, log, test, noop. (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
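# For example, to emit notifications through the messaging driver using the
# 2.0 message format:
#driver = messagingv2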


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
# be used. This option may not be available in future versions. (string value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than one
# RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port
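# Since the rabbit_* connection options above are deprecated, an equivalent HA
# cluster can instead be expressed through [DEFAULT]/transport_url (the hosts
# and credentials below are illustrative placeholders only):
#transport_url = rabbit://openstack:RABBIT_PASS@node1:5672,openstack:RABBIT_PASS@node2:5672/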

# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false

# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue. If
# you just want to make sure that all queues (except those with auto-generated
# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically deleted.
# The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows unlimited
# messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if the
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold the heartbeat is
# checked. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max = <None>

# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>

# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3

# Enable SSL (boolean value)
#ssl = <None>

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>

# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value)
#tcp_user_timeout = 0.25

# Set delay for reconnection to some host which has connection error (floating
# point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single

# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no recycling.
# Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Maximum number of unacknowledged messages which RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Maximum number of unacknowledged messages which RabbitMQ can send to the rpc
# listener. (integer value)
#rpc_listener_prefetch_count = 100

# Maximum number of unacknowledged messages which RabbitMQ can send to the rpc
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending reply.
# -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending reply.
# (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problem during sending an
# RPC message, -1 means infinite retry. If the actual number of retry attempts
# is not 0, the rpc request could be processed more than once. (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means no
# queue is kept when the server side disconnects. False means the queue and
# messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched.  Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d


[paste_deploy]

#
# From glance.registry
#

#
# Deployment flavor to use in the server application pipeline.
#
# Provide a string value representing the appropriate deployment
# flavor used in the server application pipeline. This is typically
# the partial name of a pipeline in the paste configuration file with
# the service name removed.
#
# For example, if your paste section name in the paste configuration
# file is [pipeline:glance-api-keystone], set ``flavor`` to
# ``keystone``.
#
# Possible values:
#     * String value representing a partial pipeline name.
#
# Related Options:
#     * config_file
#
#  (string value)
#flavor = keystone

#
# Name of the paste configuration file.
#
# Provide a string value representing the name of the paste
# configuration file to use for configuring pipelines for
# server application deployments.
#
# NOTES:
#     * Provide the name or the path relative to the glance directory
#       for the paste configuration file and not the absolute path.
#     * The sample paste configuration file shipped with Glance need
#       not be edited in most cases as it comes with ready-made
#       pipelines for all common deployment flavors.
#
# If no value is specified for this option, the ``paste.ini`` file
# with the prefix of the corresponding Glance service's configuration
# file name will be searched for in the known configuration
# directories. (For example, if this option is missing from or has no
# value set in ``glance-api.conf``, the service will look for a file
# named ``glance-api-paste.ini``.) If the paste configuration file is
# not found, the service will not start.
#
# Possible values:
#     * A string value representing the name of the paste configuration
#       file.
#
# Related Options:
#     * flavor
#
#  (string value)
#config_file = glance-api-paste.ini
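As a hedged illustration of how ``flavor`` and ``config_file`` fit together: with the settings below, the service looks for the paste section whose name is the service name plus the flavor. The pipeline contents shown are illustrative only, not a complete or authoritative glance-api pipeline.

```ini
# glance-api.conf -- illustrative values only
[paste_deploy]
flavor = keystone
config_file = glance-api-paste.ini

# glance-api-paste.ini -- selected because service name + flavor
# yields "glance-api-keystone"; filter names here are illustrative
[pipeline:glance-api-keystone]
pipeline = healthcheck authtoken context rootapp
```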


[profiler]

#
# From glance.registry
#

#
# Enables the profiling for all services on this node. Default value is False
# (fully disable the profiling feature).
#
# Possible values:
#
# * True: Enables the feature
# * False: Disables the feature. Profiling cannot be started via this
# project's operations. If profiling is triggered by another project, this
# project's part of the trace will be empty.
#  (boolean value)
# Deprecated group/name - [profiler]/profiler_enabled
#enabled = false

#
# Enables SQL requests profiling in services. Default value is False (SQL
# requests won't be traced).
#
# Possible values:
#
# * True: Enables SQL requests profiling. Each SQL query will be part of the
# trace and can then be analyzed by how much time was spent on it.
# * False: Disables SQL requests profiling. Time spent is only shown at a
# higher level of operations. Single SQL queries cannot be analyzed this
# way.
#  (boolean value)
#trace_sqlalchemy = false

#
# Secret key(s) to use for encrypting context data for performance profiling.
# This string value should have the following format: <key1>[,<key2>,...<keyn>],
# where each key is some random string. A user who triggers the profiling via
# the REST API has to set one of these keys in the headers of the REST API call
# to include profiling results of this node for this particular project.
#
# Both "enabled" flag and "hmac_keys" config options should be set to enable
# profiling. Also, to generate correct profiling information across all services
# at least one key needs to be consistent between OpenStack projects. This
# ensures it can be used from the client side to generate the trace, containing
# information from all possible resources. (string value)
#hmac_keys = SECRET_KEY
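Conceptually, each service verifies an HMAC digest of the serialized trace context against the configured keys. The sketch below is illustrative only: the helper names are hypothetical and do not reflect the actual osprofiler API, which handles serialization and key selection itself.

```python
import hashlib
import hmac


def sign_trace(payload: bytes, key: str) -> str:
    """Return a hex HMAC-SHA1 digest of a serialized trace context."""
    return hmac.new(key.encode("utf-8"), payload, hashlib.sha1).hexdigest()


def verify_trace(payload: bytes, digest: str, hmac_keys: list) -> bool:
    """Accept the trace if any configured key reproduces the digest.

    Accepting any key allows key rotation, as long as at least one key
    stays consistent between the participating services.
    """
    return any(hmac.compare_digest(sign_trace(payload, k), digest)
               for k in hmac_keys)
```

This also shows why at least one key must be shared across projects: a trace signed by one service is only accepted by another if both have that key configured.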

#
# Connection string for a notifier backend. Default value is messaging:// which
# sets the notifier to oslo_messaging.
#
# Examples of possible values:
#
# * messaging://: use oslo_messaging driver for sending notifications.
#  (string value)
#connection_string = messaging://
glance-registry-paste.ini

The Image service’s middleware pipeline for its registry is found in the glance-registry-paste.ini file.

# Use this pipeline for no auth - DEFAULT
[pipeline:glance-registry]
pipeline = healthcheck osprofiler unauthenticated-context registryapp

# Use this pipeline for keystone auth
[pipeline:glance-registry-keystone]
pipeline = healthcheck osprofiler authtoken context registryapp

# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-registry-trusted-auth]
pipeline = healthcheck osprofiler context registryapp

[app:registryapp]
paste.app_factory = glance.registry.api:API.factory

[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = /etc/glance/healthcheck_disable

[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory

[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
hmac_keys = SECRET_KEY  #DEPRECATED
enabled = yes  #DEPRECATED
glance-scrubber.conf

glance-scrubber is a utility for the Image service that cleans up images that have been deleted. Its configuration is stored in the glance-scrubber.conf file.

[DEFAULT]

#
# From glance.scrubber
#

#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * image_property_quota
#
#  (boolean value)
#allow_additional_image_properties = true

#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any
# negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_member_quota = 128

#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
#     * ``allow_additional_image_properties``
#
#  (integer value)
#image_property_quota = 128

#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_tag_quota = 128

#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
#     * None
#
#  (integer value)
#image_location_quota = 10

#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
#     * glance.db.sqlalchemy.api
#     * glance.db.registry.api
#     * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
#     * enable_v2_api
#     * enable_v2_registry
#
#  (string value)
#data_api = glance.db.sqlalchemy.api

#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
#     * The value of this configuration option may not be greater than
#       the value specified by ``api_limit_max``.
#     * Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * api_limit_max
#
#  (integer value)
# Minimum value: 1
#limit_param_default = 25

#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned is governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value, in either case, can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
#       queries and increase response times. Setting this to a
#       very low value may result in poor user experience.
#
# Possible values:
#     * Any positive integer
#
# Related options:
#     * limit_param_default
#
#  (integer value)
# Minimum value: 1
#api_limit_max = 1000
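The interaction between ``limit_param_default`` and ``api_limit_max`` described above amounts to a simple clamp. A minimal sketch in Python, using the documented default values; this is illustrative and not Glance's actual code:

```python
LIMIT_PARAM_DEFAULT = 25   # used when no ``limit`` parameter is given
API_LIMIT_MAX = 1000       # absolute maximum; larger requests are trimmed


def effective_limit(requested=None):
    """Fall back to the default, then trim to the absolute maximum."""
    limit = LIMIT_PARAM_DEFAULT if requested is None else requested
    return min(limit, API_LIMIT_MAX)
```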

#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_multiple_locations
#     * location_strategy
#
#  (boolean value)
#show_image_direct_url = false

# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration option
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
#     * Revealing image locations can present a GRAVE SECURITY RISK as
#       image locations can sometimes include credentials. Hence, this
#       is set to ``False`` by default. Set this to ``True`` with
#       EXTREME CAUTION and ONLY IF you know what you are doing!
#     * If an operator wishes to avoid showing any image location(s)
#       to the user, then both this option and
#       ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * show_image_direct_url
#     * location_strategy
#
#  (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false

#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
#     * This value should only be increased after careful
#       consideration and must be set less than or equal to
#       8 EiB (9223372036854775808).
#     * This value must be set with careful consideration of the
#       backend storage capacity. Setting this to a very low value
#       may result in a large number of image failures. And, setting
#       this to a very large value may result in faster consumption
#       of storage. Hence, this must be set according to the nature of
#       images created and storage capacity available.
#
# Possible values:
#     * Any positive number less than or equal to 9223372036854775808
#
#  (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776

#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
#     * A string that is a valid concatenation of a non-negative integer
#       representing the storage value and an optional string literal
#       representing storage units as mentioned above.
#
# Related options:
#     * None
#
#  (string value)
#user_storage_quota = 0
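The accepted quota string format can be illustrated with a small parser. This is a sketch only, not Glance's implementation, and it assumes binary multiples (``KB`` = 1024 bytes):

```python
import re

_UNITS = {"B": 1, "KB": 1024, "MB": 1024 ** 2,
          "GB": 1024 ** 3, "TB": 1024 ** 4}


def parse_storage_quota(value: str) -> int:
    """Parse a quota string such as '0', '512MB' or '10GB' into bytes.

    Units are case-sensitive and must follow the number with no space,
    as described above; a bare number is taken as bytes. Negative values
    do not match and are rejected.
    """
    match = re.fullmatch(r"(\d+)(B|KB|MB|GB|TB)?", value)
    if match is None:
        raise ValueError("invalid quota value: %r" % value)
    number, unit = match.groups()
    return int(number) * _UNITS[unit or "B"]
```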

#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
#     * If this option is enabled, then ``enable_v1_registry`` must
#       also be set to ``True`` to enable mandatory usage of Registry
#       service with v1 API.
#
#     * If this option is disabled, then the ``enable_v1_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v2_api``; both the v1 and v2
#       OpenStack Images APIs can be deployed independently of each
#       other.
#
#     * If deploying only the v2 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_registry
#     * enable_v2_api
#
#  (boolean value)
#enable_v1_api = true

#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
#     * If this option is disabled, then the ``enable_v2_registry``
#       option, which is enabled by default, is also recommended
#       to be disabled.
#
#     * This option is separate from ``enable_v1_api``; both the v1 and v2
#       OpenStack Images APIs can be deployed independently of each
#       other.
#
#     * If deploying only the v1 Images API, this option, which is
#       enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_registry
#     * enable_v1_api
#
#  (boolean value)
#enable_v2_api = true

#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
#     * Use of Registry is mandatory in v1 API, so this option must
#       be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
#     * If deploying only the v2 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v1_api
#
#  (boolean value)
#enable_v1_registry = true

#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
#     * Use of Registry is optional in v2 API, so this option
#       must only be enabled if both ``enable_v2_api`` is set to
#       ``True`` and the ``data_api`` option is set to
#       ``glance.db.registry.api``.
#
#     * If deploying only the v1 OpenStack Images API, this option,
#       which is enabled by default, should be disabled.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * enable_v2_api
#     * data_api
#
#  (boolean value)
#enable_v2_registry = true
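Taken together, the NOTES on the four options above suggest settings like the following for a v2-only deployment that accesses the database directly. This is a sketch combining the documented recommendations; verify against your release:

```ini
[DEFAULT]
# v2-only deployment with direct database access (no Registry service)
enable_v1_api = false
enable_v1_registry = false
enable_v2_api = true
enable_v2_registry = false
data_api = glance.db.sqlalchemy.api
```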

#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
#     * Valid hostname
#     * Valid IP address
#
# Related options:
#     * None
#
#  (string value)
#pydev_worker_debug_host = localhost

#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
#     * A valid port number
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678

#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES cipher to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
#     * String value representing a valid AES key
#
# Related options:
#     * None
#
#  (string value)
#metadata_encryption_key = <None>

#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
#     * An OpenSSL message digest algorithm identifier
#
# Related options:
#     * None
#
#  (string value)
#digest_algorithm = sha256

#
# The amount of time, in seconds, to delay image scrubbing.
#
# When delayed delete is turned on, an image is put into ``pending_delete``
# state upon deletion until the scrubber deletes its image data. Typically, soon
# after the image is put into ``pending_delete`` state, it is available for
# scrubbing. However, scrubbing can be delayed until a later point using this
# configuration option. This option denotes the time period an image spends in
# ``pending_delete`` state before it is available for scrubbing.
#
# It is important to realize that this has storage implications. The larger the
# ``scrub_time``, the longer the time to reclaim backend storage from deleted
# images.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * ``delayed_delete``
#
#  (integer value)
# Minimum value: 0
#scrub_time = 0

#
# The size of thread pool to be used for scrubbing images.
#
# When there are a large number of images to scrub, it is beneficial to scrub
# images in parallel so that the scrub queue stays in control and the backend
# storage is reclaimed in a timely fashion. This configuration option denotes
# the maximum number of images to be scrubbed in parallel. The default value is
# one, which signifies serial scrubbing. Any value above one indicates parallel
# scrubbing.
#
# Possible values:
#     * Any non-zero positive integer
#
# Related options:
#     * ``delayed_delete``
#
#  (integer value)
# Minimum value: 1
#scrub_pool_size = 1

#
# Turn on/off delayed delete.
#
# Typically when an image is deleted, the ``glance-api`` service puts the image
# into ``deleted`` state and deletes its data at the same time. Delayed delete
# is a feature in Glance that delays the actual deletion of image data until a
# later point in time (as determined by the configuration option
# ``scrub_time``).
# When delayed delete is turned on, the ``glance-api`` service puts the image
# into ``pending_delete`` state upon deletion and leaves the image data in the
# storage backend for the image scrubber to delete at a later time. The image
# scrubber will move the image into ``deleted`` state upon successful deletion
# of image data.
#
# NOTE: When delayed delete is turned on, image scrubber MUST be running as a
# periodic task to prevent the backend storage from filling up with undesired
# usage.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * ``scrub_time``
#     * ``wakeup_time``
#     * ``scrub_pool_size``
#
#  (boolean value)
#delayed_delete = false

#
# Role used to identify an authenticated user as administrator.
#
# Provide a string value representing a Keystone role to identify an
# administrative user. Users with this role will be granted
# administrative privileges. The default value for this option is
# 'admin'.
#
# Possible values:
#     * A string value which is a valid Keystone role
#
# Related options:
#     * None
#
#  (string value)
#admin_role = admin

#
# Send headers received from identity when making requests to
# registry.
#
# Typically, Glance registry can be deployed in multiple flavors,
# which may or may not include authentication. For example,
# ``trusted-auth`` is a flavor that does not require the registry
# service to authenticate the requests it receives. However, the
# registry service may still need a user context to be populated to
# serve the requests. This can be achieved by the caller
# (the Glance API usually) passing through the headers it received
# from authenticating with identity for the same request. The typical
# headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,
# ``X-Identity-Status`` and ``X-Service-Catalog``.
#
# Provide a boolean value to determine whether to send the identity
# headers to provide tenant and user information along with the
# requests to registry service. By default, this option is set to
# ``False``, which means that user and tenant information is not
# readily available; it must be obtained by authenticating. Hence, if
# this is set to ``False``, ``flavor`` must be set to a value that
# either includes authentication or an authenticated user context.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * flavor
#
#  (boolean value)
#send_identity_headers = false

#
# Time interval, in seconds, between scrubber runs in daemon mode.
#
# Scrubber can be run either as a cron job or daemon. When run as a daemon, this
# configuration time specifies the time period between two runs. When the
# scrubber wakes up, it fetches and scrubs all ``pending_delete`` images that
# are available for scrubbing after taking ``scrub_time`` into consideration.
#
# If the wakeup time is set to a large value, there may be a large number of
# images to be scrubbed for each run. Also, this impacts how quickly the backend
# storage is reclaimed.
#
# Possible values:
#     * Any non-negative integer
#
# Related options:
#     * ``daemon``
#     * ``delayed_delete``
#
#  (integer value)
# Minimum value: 0
#wakeup_time = 300

#
# Run scrubber as a daemon.
#
# This boolean configuration option indicates whether scrubber should
# run as a long-running process that wakes up at regular intervals to
# scrub images. The wake up interval can be specified using the
# configuration option ``wakeup_time``.
#
# If this configuration option is set to ``False``, which is the
# default value, scrubber runs once to scrub images and exits. In this
# case, if the operator wishes to implement continuous scrubbing of
# images, scrubber needs to be scheduled as a cron job.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * ``wakeup_time``
#
#  (boolean value)
#daemon = false
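For example, a delayed-delete setup in which deleted images are held for an hour and the scrubber runs as a daemon waking every five minutes might look like this (illustrative values only, combining the related options described above):

```ini
[DEFAULT]
delayed_delete = true    # keep data in pending_delete instead of deleting
scrub_time = 3600        # hold images for one hour before scrubbing
scrub_pool_size = 4      # scrub up to four images in parallel
daemon = true            # run as a long-lived process, not a cron job
wakeup_time = 300        # wake up every five minutes
```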

#
# Protocol to use for communication with the registry server.
#
# Provide a string value representing the protocol to use for
# communication with the registry server. By default, this option is
# set to ``http`` and the connection is not secure.
#
# This option can be set to ``https`` to establish a secure connection
# to the registry server. In this case, provide a key to use for the
# SSL connection using the ``registry_client_key_file`` option. Also
# include the CA file and cert file using the options
# ``registry_client_ca_file`` and ``registry_client_cert_file``
# respectively.
#
# Possible values:
#     * http
#     * https
#
# Related options:
#     * registry_client_key_file
#     * registry_client_cert_file
#     * registry_client_ca_file
#
#  (string value)
# Allowed values: http, https
#registry_client_protocol = http

#
# Absolute path to the private key file.
#
# Provide a string value representing a valid absolute path to the
# private key file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
# environment variable may be set to a filepath of the key file.
#
# Possible values:
#     * String value representing a valid absolute path to the key
#       file.
#
# Related options:
#     * registry_client_protocol
#
#  (string value)
#registry_client_key_file = /etc/ssl/key/key-file.pem

#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
# environment variable may be set to a filepath of the certificate
# file.
#
# Possible values:
#     * String value representing a valid absolute path to the
#       certificate file.
#
# Related options:
#     * registry_client_protocol
#
#  (string value)
#registry_client_cert_file = /etc/ssl/certs/file.crt

#
# Absolute path to the Certificate Authority file.
#
# Provide a string value representing a valid absolute path to the
# certificate authority file to use for establishing a secure
# connection to the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
# environment variable may be set to a filepath of the CA file.
# This option is ignored if the ``registry_client_insecure`` option
# is set to ``True``.
#
# Possible values:
#     * String value representing a valid absolute path to the CA
#       file.
#
# Related options:
#     * registry_client_protocol
#     * registry_client_insecure
#
#  (string value)
#registry_client_ca_file = /etc/ssl/cafile/file.ca
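Combining the four options above, a secure registry client configuration might look like the following; the file paths are the illustrative sample values, not required locations:

```ini
[DEFAULT]
registry_client_protocol = https
registry_client_key_file = /etc/ssl/key/key-file.pem
registry_client_cert_file = /etc/ssl/certs/file.crt
registry_client_ca_file = /etc/ssl/cafile/file.ca
```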

#
# Set verification of the registry server certificate.
#
# Provide a boolean value to determine whether or not to validate
# SSL connections to the registry server. By default, this option
# is set to ``False`` and the SSL connections are validated.
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying --insecure on the command line
# using glanceclient for the API.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * registry_client_protocol
#     * registry_client_ca_file
#
#  (boolean value)
#registry_client_insecure = false

#
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never timeout.
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#registry_client_timeout = 600

# DEPRECATED: Whether to pass through the user token when making requests to the
# registry. To prevent failures with token expiration during big files upload,
# it is recommended to set this parameter to False. If "use_user_token" is not
# in effect, then admin credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#use_user_token = true

# DEPRECATED: The administrator's user name. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_user = <None>

# DEPRECATED: The administrator's password. If "use_user_token" is not in effect,
# then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_password = <None>

# DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
# not in effect, then admin tenant name can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_tenant_name = <None>

# DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
# effect and using keystone auth, then URL of keystone can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_url = <None>

# DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
# in effect, then auth strategy can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_strategy = noauth

# DEPRECATED: The region for the authentication service. If "use_user_token" is
# not in effect and using keystone auth, then region name can be specified.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_region = <None>

#
# Address the registry server is hosted on.
#
# Possible values:
#     * A valid IP or hostname
#
# Related options:
#     * None
#
#  (string value)
#registry_host = 0.0.0.0

#
# Port the registry server is listening on.
#
# Possible values:
#     * A valid port number
#
# Related options:
#     * None
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#registry_port = 9191

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. This option is
# ignored if log_config_append is set. (string value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler opens a new log file at the specified path
# instantaneously. This option is effective only if the log_file option is
# specified and the platform is Linux. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
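#
# For example, to raise sqlalchemy logging to DEBUG for troubleshooting
# (a hypothetical override; note that setting this option replaces the
# entire default list, so carry over any defaults you still want):
#
#     default_log_levels = amqp=WARN,amqplib=WARN,sqlalchemy=DEBUG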

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[database]

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
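#
# For example, a MySQL connection string for a hypothetical "glance"
# database on a host named "controller" (substitute your own credentials
# and host):
#
#     connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance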

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20

#
# From oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false


[glance_store]

#
# From glance.store
#

#
# List of enabled Glance stores.
#
# Register the storage backends to use for storing disk images
# as a comma separated list. The default stores enabled for
# storing disk images with Glance are ``file`` and ``http``.
#
# Possible values:
#     * A comma separated list that could include:
#         * file
#         * http
#         * swift
#         * rbd
#         * sheepdog
#         * cinder
#         * vmware
#
# Related Options:
#     * default_store
#
#  (list value)
#stores = file,http
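#
# For example, to enable the Ceph RBD backend alongside the defaults
# (assuming RBD is deployed in your environment):
#
#     stores = file,http,rbd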

#
# The default scheme to use for storing images.
#
# Provide a string value representing the default scheme to use for
# storing images. If not set, Glance uses ``file`` as the default
# scheme to store images with the ``file`` store.
#
# NOTE: The value given for this configuration option must be a valid
# scheme for a store registered with the ``stores`` configuration
# option.
#
# Possible values:
#     * file
#     * filesystem
#     * http
#     * https
#     * swift
#     * swift+http
#     * swift+https
#     * swift+config
#     * rbd
#     * sheepdog
#     * cinder
#     * vsphere
#
# Related Options:
#     * stores
#
#  (string value)
# Allowed values: file, filesystem, http, https, swift, swift+http, swift+https, swift+config, rbd, sheepdog, cinder, vsphere
#default_store = file
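#
# For example, to make RBD the default scheme in a hypothetical
# Ceph-backed deployment (rbd must also be listed in the ``stores``
# option):
#
#     stores = file,http,rbd
#     default_store = rbd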

#
# Minimum interval in seconds to execute updating dynamic storage
# capabilities based on current backend status.
#
# Provide an integer value representing time in seconds to set the
# minimum interval before an update of dynamic storage capabilities
# for a storage backend can be attempted. Setting
# ``store_capabilities_update_min_interval`` does not mean updates
# occur periodically based on the set interval. Rather, the update
# is performed at the elapse of this interval set, if an operation
# of the store is triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo
# /store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
#     * Zero
#     * Positive integer
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0

#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to lookup cinder endpoint from the service
# catalog in the current context. ``cinder_os_region_name``, if set, is taken
# into consideration to fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
#     * A string of the following form:
#       ``<service_type>:<service_name>:<endpoint_type>``
#       At least ``service_type`` and ``endpoint_type`` should be specified.
#       ``service_name`` can be omitted.
#
# Related options:
#     * cinder_os_region_name
#     * cinder_endpoint_template
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#
#  (string value)
#cinder_catalog_info = volumev2::publicURL
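#
# For example, to match the volumev2 service named "cinderv2" on its
# internal endpoint (a hypothetical catalog entry):
#
#     cinder_catalog_info = volumev2:cinderv2:internalURL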

#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate cinder endpoint,
# instead of looking up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will be ignored.
#
# Possible values:
#     * URL template string for cinder endpoint, where ``%%(tenant)s`` is
#       replaced with the current tenant (project) name.
#       For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#     * cinder_store_password
#     * cinder_catalog_info
#
#  (string value)
#cinder_endpoint_template = <None>

#
# Region name to lookup cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for determining the
# endpoint. If set, the lookup for cinder endpoint by this node is filtered to
# the specified region. It is useful when multiple regions are listed in the
# catalog. If this is not set, the endpoint is looked up from every region.
#
# Possible values:
#     * A string that is a valid region name.
#
# Related options:
#     * cinder_catalog_info
#
#  (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name = <None>

#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via the HTTPS endpoint. If the endpoint is HTTP, this value is
# ignored.
# ``cinder_api_insecure`` must be set to ``False`` for this verification to
# take effect.
#
# Possible values:
#     * Path to a ca certificates file
#
# Related options:
#     * cinder_api_insecure
#
#  (string value)
#cinder_ca_certificates_file = <None>

#
# Number of cinderclient retries on failed http calls.
#
# When a call failed by any errors, cinderclient will retry the call up to the
# specified times after sleeping a few seconds.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#cinder_http_retries = 3

#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the glance node to
# read/write the volume data, the volume's state is changed. For example, the
# newly created volume status changes from ``creating`` to ``available`` after
# the creation process is completed. This specifies the maximum time to wait for
# the status change. If a timeout occurs while waiting, or the status is changed
# to an unexpected value (e.g. ``error``), the image creation fails.
#
# Possible values:
#     * A positive integer
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300

#
# Allow to perform insecure SSL requests to cinder.
#
# If this option is set to True, the HTTPS endpoint connection is not
# verified. If set to False, the connection is verified using the CA
# certificates file specified by the ``cinder_ca_certificates_file`` option,
# or the default CA truststore if no file is specified.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * cinder_ca_certificates_file
#
#  (boolean value)
#cinder_api_insecure = false

#
# The address where the cinder authentication service is listening.
#
# When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If any of these options is not set, the cinder endpoint is looked up
# from the service catalog, and the current context's user and project
# are used.
#
# Possible values:
#     * A valid authentication service address, for example:
#       ``http://openstack.example.org/identity/v2.0``
#
# Related options:
#     * cinder_store_user_name
#     * cinder_store_password
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_auth_address = <None>

#
# User name to authenticate against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
#     * A valid user name
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_password
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_user_name = <None>

#
# Password for the user authenticating against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
#     * A valid password for the user specified by ``cinder_store_user_name``
#
# Related options:
#     * cinder_store_auth_address
#     * cinder_store_user_name
#     * cinder_store_project_name
#
#  (string value)
#cinder_store_password = <None>

#
# Project name where the image volume is stored in cinder.
#
# This must be used with all the following related options. If any of these
# are not specified, the project of the current context is used.
#
# Possible values:
#     * A valid project name
#
# Related options:
#     * ``cinder_store_auth_address``
#     * ``cinder_store_user_name``
#     * ``cinder_store_password``
#
#  (string value)
#cinder_store_project_name = <None>

#
# Path to the rootwrap configuration file to use for running commands as root.
#
# The cinder store requires root privileges to operate the image volumes (for
# connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
# The configuration file should allow the required commands by cinder store and
# os-brick library.
#
# Possible values:
#     * Path to the rootwrap config file
#
# Related options:
#     * None
#
#  (string value)
#rootwrap_config = /etc/glance/rootwrap.conf

#
# Directory to which the filesystem backend store writes images.
#
# Upon start up, Glance creates the directory if it doesn't already
# exist and verifies write access for the user under which
# ``glance-api`` runs. If the write access isn't available, a
# ``BadStoreConfiguration`` exception is raised and the filesystem
# store may not be available for adding new images.
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
#     * A valid path to a directory
#
# Related options:
#     * ``filesystem_store_datadirs``
#     * ``filesystem_store_file_perm``
#
#  (string value)
#filesystem_store_datadir = /var/lib/glance/images

#
# List of directories and their priorities to which the filesystem
# backend store writes images.
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. Priority is an integer that is concatenated to the
# directory path with a colon where a higher value indicates higher
# priority. When two directories have the same priority, the directory
# with most free space is used. When no priority is specified, it
# defaults to zero.
#
# More information on configuring filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
#     * List of strings of the following form:
#         * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
#     * ``filesystem_store_datadir``
#     * ``filesystem_store_file_perm``
#
#  (multi valued)
#filesystem_store_datadirs =
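#
# For example, to spread images across two hypothetical directories,
# preferring the first (priority 200) over the second (priority 100):
#
#     filesystem_store_datadirs = /var/lib/glance/store1:200
#     filesystem_store_datadirs = /var/lib/glance/store2:100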

#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
#     * A valid path to the store metadata file
#
# Related options:
#     * None
#
#  (string value)
#filesystem_store_metadata_file = <None>
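#
# For example, a hypothetical metadata file at /etc/glance/fs_meta.json
# could contain a JSON object such as:
#
#     {"id": "fs-store-1", "mountpoint": "/var/lib/glance/images"}
#
# and be referenced here as:
#
#     filesystem_store_metadata_file = /etc/glance/fs_meta.json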

#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are
# intended to be given access can be made members of the group
# that owns the files created. Assigning a value less than or equal to
# zero for this configuration option signifies that no changes be made
# to the default permissions. This value will be decoded as an octal
# digit.
#
# For more information, please refer the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
#     * A valid file access permission
#     * Zero
#     * Any negative integer
#
# Related options:
#     * None
#
#  (integer value)
#filesystem_store_file_perm = 0
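#
# For example, to make image files group-readable (the value is decoded
# as octal, so 640 means owner read/write, group read, no other access):
#
#     filesystem_store_file_perm = 640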

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
#     * A valid path to a CA file
#
# Related options:
#     * https_insecure
#
#  (string value)
#https_ca_certificates_file = <None>

#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set.
# The remote server certificate will then be verified using the file
# specified by the ``https_ca_certificates_file`` option.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * https_ca_certificates_file
#
#  (boolean value)
#https_insecure = true

#
# The http/https proxy information to be used to connect to the remote
# server.
#
# This configuration option specifies the http/https proxy information
# that should be used to connect to the remote server. The proxy
# information should be a key value pair of the scheme and proxy, for
# example, http:10.0.0.1:3128. You can also specify proxies for multiple
# schemes by separating the key value pairs with a comma, for example,
# http:10.0.0.1:3128, https:10.0.0.1:1080.
#
# Possible values:
#     * A comma separated list of scheme:proxy pairs as described above
#
# Related options:
#     * None
#
#  (dict value)
#http_proxy_information =
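#
# For example, to route http and https requests through two hypothetical
# proxies, as described above:
#
#     http_proxy_information = http:10.0.0.1:3128,https:10.0.0.1:1080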

#
# Size, in megabytes, to chunk RADOS images into.
#
# Provide an integer value representing the size in megabytes to chunk
# Glance images into. The default chunk size is 8 megabytes. For optimal
# performance, the value should be a power of two.
#
# When Ceph's RBD object storage system is used as the storage backend
# for storing Glance images, the images are chunked into objects of the
# size set using this option. These chunked objects are then stored
# across the distributed block data store to use for Glance.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#rbd_store_chunk_size = 8

#
# RADOS pool in which images are stored.
#
# When RBD is used as the storage backend for storing Glance images, the
# images are stored by means of logical grouping of the objects (chunks
# of images) into a ``pool``. Each pool is defined with the number of
# placement groups it can contain. The default pool that is used is
# 'images'.
#
# More information on the RBD storage backend can be found here:
# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
#
# Possible Values:
#     * A valid pool name
#
# Related options:
#     * None
#
#  (string value)
#rbd_store_pool = images

#
# RADOS user to authenticate as.
#
# This configuration option takes in the RADOS user to authenticate as.
# This is only needed when RADOS authentication is enabled and is
# applicable only if the user is using Cephx authentication. If the
# value for this option is not set by the user or is set to None, a
# default value will be chosen based on the client section in
# rbd_store_ceph_conf.
#
# Possible Values:
#     * A valid RADOS user
#
# Related options:
#     * rbd_store_ceph_conf
#
#  (string value)
#rbd_store_user = <None>

#
# Ceph configuration file path.
#
# This configuration option takes in the path to the Ceph configuration
# file to be used. If the value for this option is not set by the user
# or is set to None, librados will locate the default configuration file
# which is located at /etc/ceph/ceph.conf. If using Cephx
# authentication, this file should include a reference to the right
# keyring in a client.<USER> section.
#
# Possible Values:
#     * A valid path to a configuration file
#
# Related options:
#     * rbd_store_user
#
#  (string value)
#rbd_store_ceph_conf = /etc/ceph/ceph.conf

#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option takes in the timeout value in seconds used
# when connecting to the Ceph cluster, i.e. the time glance-api waits
# before closing the connection. This prevents glance-api
# hangups during the connection to RBD. If the value for this option
# is set to less than or equal to 0, no timeout is set and the default
# librados value is used.
#
# Possible Values:
#     * Any integer value
#
# Related options:
#     * None
#
#  (integer value)
#rados_connect_timeout = 0

#
# Chunk size for images to be stored in Sheepdog data store.
#
# Provide an integer value representing the size in mebibyte
# (1048576 bytes) to chunk Glance images into. The default
# chunk size is 64 mebibytes.
#
# When using Sheepdog distributed storage system, the images are
# chunked into objects of this size and then stored across the
# distributed data store to use for Glance.
#
# Chunk sizes, if a power of two, help avoid fragmentation and
# enable improved performance.
#
# Possible values:
#     * Positive integer value representing size in mebibytes.
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 1
#sheepdog_store_chunk_size = 64

#
# Port number on which the sheep daemon will listen.
#
# Provide an integer value representing a valid port number on
# which you want the Sheepdog daemon to listen. The default
# port is 7000.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages it receives on
# the port number set using ``sheepdog_store_port`` option to store
# chunks of Glance images.
#
# Possible values:
#     * A valid port number (0 to 65535)
#
# Related Options:
#     * sheepdog_store_address
#
#  (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000

#
# Address to bind the Sheepdog daemon to.
#
# Provide a string value representing the address to bind the
# Sheepdog daemon to. The default address set for the 'sheep'
# is 127.0.0.1.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages directed to the
# address set using ``sheepdog_store_address`` option to store
# chunks of Glance images.
#
# Possible values:
#     * A valid IPv4 address
#     * A valid IPv6 address
#     * A valid hostname
#
# Related Options:
#     * sheepdog_store_port
#
#  (string value)
#sheepdog_store_address = 127.0.0.1

#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * swift_store_cacert
#
#  (boolean value)
#swift_store_auth_insecure = false

#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
#     * A valid path to a CA file
#
# Related options:
#     * swift_store_auth_insecure
#
#  (string value)
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt

#
# The region of Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
#
# When Glance uses Swift as the storage backend to store images
# for a specific tenant that has multiple endpoints, setting a
# Swift region with ``swift_store_region`` allows Glance to connect
# to Swift in the specified region rather than relying on a single
# region for connectivity.
#
# This option can be configured for both single-tenant and
# multi-tenant storage.
#
# NOTE: Setting the region with ``swift_store_region`` is
# tenant-specific and is necessary only if the tenant has
# multiple endpoints across different regions.
#
# Possible values:
#     * A string value representing a valid Swift region.
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_region = RegionTwo

#
# The URL endpoint to use for Swift backend storage.
#
# Provide a string value representing the URL endpoint to use for
# storing Glance images in Swift store. By default, an endpoint
# is not set and the storage URL returned by ``auth`` is used.
# Setting an endpoint with ``swift_store_endpoint`` overrides the
# storage URL and is used for Glance image storage.
#
# NOTE: The URL should include the path up to, but excluding,
# the container. The location of an object is obtained by appending
# the container and object to the configured URL.
#
# Possible values:
#     * String value representing a valid URL path up to a Swift container
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name

#
# Endpoint Type of Swift service.
#
# This string value indicates the endpoint type to use to fetch the
# Swift endpoint. The endpoint type determines the actions the user will
# be allowed to perform, for instance, reading and writing to the Store.
# This setting is only used if swift_store_auth_version is greater than
# 1.
#
# Possible values:
#     * publicURL
#     * adminURL
#     * internalURL
#
# Related options:
#     * swift_store_endpoint
#
#  (string value)
# Allowed values: publicURL, adminURL, internalURL
#swift_store_endpoint_type = publicURL

#
# Type of Swift service to use.
#
# Provide a string value representing the service type to use for
# storing images while using Swift backend storage. The default
# service type is set to ``object-store``.
#
# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
# this configuration option needs to be ``object-store``. If using
# a higher version of Keystone or a different auth scheme, this
# option may be modified.
#
# Possible values:
#     * A string representing a valid service type for Swift storage.
#
# Related Options:
#     * None
#
#  (string value)
#swift_store_service_type = object-store

#
# Name of single container to store images/name prefix for multiple containers
#
# When a single container is being used to store images, this configuration
# option indicates the container within the Glance account to be used for
# storing all images. When multiple containers are used to store images, this
# will be the name prefix for all containers. Usage of single/multiple
# containers can be controlled using the configuration option
# ``swift_store_multiple_containers_seed``.
#
# When using multiple containers, the containers will be named after the value
# set for this configuration option with the first N chars of the image UUID
# as the suffix delimited by an underscore (where N is specified by
# ``swift_store_multiple_containers_seed``).
#
# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
# the container ``glance_fda``. All dashes in the UUID are included when
# creating the container name but do not count toward the character limit, so
# when N=10 the container name would be ``glance_fdae39a1-ba``.
#
# Possible values:
#     * If using single container, this configuration option can be any string
#       that is a valid swift container name in Glance's Swift account
#     * If using multiple containers, this configuration option can be any
#       string as long as it satisfies the container naming rules enforced by
#       Swift. The value of ``swift_store_multiple_containers_seed`` should be
#       taken into account as well.
#
# Related options:
#     * ``swift_store_multiple_containers_seed``
#     * ``swift_store_multi_tenant``
#     * ``swift_store_create_container_on_put``
#
#  (string value)
#swift_store_container = glance

#
# The size threshold, in MB, after which Glance will start segmenting image
# data.
#
# Swift has an upper limit on the size of a single uploaded object. By default,
# this is 5GB. To upload objects bigger than this limit, objects are segmented
# into multiple smaller objects that are tied together with a manifest file.
# For more detail, refer to
# http://docs.openstack.org/developer/swift/overview_large_objects.html
#
# This configuration option specifies the size threshold over which the Swift
# driver will start segmenting image data into multiple smaller files.
# Currently, the Swift driver only supports creating Dynamic Large Objects.
#
# NOTE: This should be set by taking into account the large object
# limit enforced by the Swift cluster in use.
#
# Possible values:
#     * A positive integer that is less than or equal to the large object limit
#       enforced by the Swift cluster in use.
#
# Related options:
#     * ``swift_store_large_object_chunk_size``
#
#  (integer value)
# Minimum value: 1
#swift_store_large_object_size = 5120

#
# The maximum size, in MB, of the segments when image data is segmented.
#
# When image data is segmented to upload images that are larger than the limit
# enforced by the Swift cluster, image data is broken into segments that are no
# bigger than the size specified by this configuration option.
# Refer to ``swift_store_large_object_size`` for more detail.
#
# For example: if ``swift_store_large_object_size`` is 5GB and
# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
# segmented into 7 segments where the first six segments will be 1GB in size and
# the seventh segment will be 0.2GB.
#
# Possible values:
#     * A positive integer that is less than or equal to the large object limit
#       enforced by the Swift cluster in use.
#
# Related options:
#     * ``swift_store_large_object_size``
#
#  (integer value)
# Minimum value: 1
#swift_store_large_object_chunk_size = 200

#
# Create container, if it doesn't already exist, when uploading image.
#
# At the time of uploading an image, if the corresponding container doesn't
# exist, it will be created provided this configuration option is set to True.
# By default, it won't be created. This behavior is applicable for both single
# and multiple containers mode.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#swift_store_create_container_on_put = false

#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode which causes Glance images to be stored
# in tenant-specific Swift accounts. If this is disabled, Glance stores
# all images in its own account. More details about the multi-tenant
# store can be found at
# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * None
#
#  (boolean value)
#swift_store_multi_tenant = false

#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more
# containers. When set to 0, all images will be stored in a single container.
# When set to an integer value between 1 and 32, multiple containers will be
# used to store images. This configuration option will determine how many
# containers are created. The total number of containers that will be used is
# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
# will be used to store images.
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
# multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
#     * A non-negative integer less than or equal to 32
#
# Related options:
#     * ``swift_store_container``
#     * ``swift_store_multi_tenant``
#     * ``swift_store_create_container_on_put``
#
#  (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0

#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
#     * A comma separated list of strings representing UUIDs of Keystone
#       projects/tenants
#
# Related options:
#     * None
#
#  (list value)
#swift_store_admin_tenants =

#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
#     * True
#     * False
#
# Related Options:
#     * None
#
#  (boolean value)
#swift_store_ssl_compression = true

#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download). When set to a positive
# integer value, ``swift_store_retry_get_count`` ensures that the
# download is attempted this many more times upon a download failure
# before sending an error message.
#
# Possible values:
#     * Zero
#     * Positive integer value
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#swift_store_retry_get_count = 0

#
# Time in seconds defining the size of the window in which a new
# token may be requested before the current token is due to expire.
#
# Typically, the Swift storage driver fetches a new token upon the
# expiration of the current token to ensure continued access to
# Swift. However, some Swift transactions (like uploading image
# segments) may not recover well if the token expires on the fly.
#
# Hence, by fetching a new token before the current token expires,
# we make sure that the token is not expired, or close to expiry,
# when a transaction is attempted. By default, the Swift storage
# driver requests a new token 60 seconds or less before the
# current token expires.
#
# Possible values:
#     * Zero
#     * Positive integer value
#
# Related Options:
#     * None
#
#  (integer value)
# Minimum value: 0
#swift_store_expire_soon_interval = 60

#
# Use trusts for multi-tenant Swift store.
#
# This option instructs the Swift store to create a trust for each
# add/get request when the multi-tenant store is in use. Using trusts
# allows the Swift store to avoid problems that can be caused by an
# authentication token expiring during the upload or download of data.
#
# By default, ``swift_store_use_trusts`` is set to ``True`` (use of
# trusts is enabled). If set to ``False``, a user token is used for
# the Swift connection instead, eliminating the overhead of trust
# creation.
#
# NOTE: This option is considered only when
# ``swift_store_multi_tenant`` is set to ``True``.
#
# Possible values:
#     * True
#     * False
#
# Related options:
#     * swift_store_multi_tenant
#
#  (boolean value)
#swift_store_use_trusts = true

#
# Reference to default Swift account/backing store parameters.
#
# Provide a string value representing a reference to the default set
# of parameters required for using the Swift account/backing store for
# image storage. The default reference value for this configuration
# option is 'ref1'. This configuration option dereferences the
# parameters and facilitates image storage in the Swift storage backend
# every time a new image is added.
#
# Possible values:
#     * A valid string value
#
# Related options:
#     * None
#
#  (string value)
#default_swift_reference = ref1

# DEPRECATED: Version of the authentication service to use. Valid versions are 2
# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_version' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_version = 2

# DEPRECATED: The address where the Swift authentication service is listening.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_address' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_address = <None>

# DEPRECATED: The user to authenticate against the Swift authentication service.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'user' in the Swift back-end configuration file is set instead.
#swift_store_user = <None>

# DEPRECATED: Auth key for the user authenticating against the Swift
# authentication service. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'key' in the Swift back-end configuration file is used
# to set the authentication key instead.
#swift_store_key = <None>

#
# Absolute path to the file containing the swift account(s)
# configurations.
#
# Include a string value representing the path to a configuration
# file that has references for each of the configured Swift
# account(s)/backing stores. By default, no file path is specified
# and customized Swift referencing is disabled. Configuring this
# option is highly recommended when using the Swift storage backend
# for image storage, as it avoids storing credentials in the database.
#
# Possible values:
#     * String value representing an absolute path on the glance-api
#       node
#
# Related options:
#     * None
#
#  (string value)
#swift_store_config_file = <None>

#
# Address of the ESX/ESXi or vCenter Server target system.
#
# This configuration option sets the address of the ESX/ESXi or vCenter
# Server target system. This option is required when using the VMware
# storage backend. The address can contain an IP address (127.0.0.1) or
# a DNS name (www.my-domain.com).
#
# Possible Values:
#     * A valid IPv4 or IPv6 address
#     * A valid DNS name
#
# Related options:
#     * vmware_server_username
#     * vmware_server_password
#
#  (string value)
#vmware_server_host = 127.0.0.1

#
# Server username.
#
# This configuration option takes the username for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
#     * Any string that is the username for a user with appropriate
#       privileges
#
# Related options:
#     * vmware_server_host
#     * vmware_server_password
#
#  (string value)
#vmware_server_username = root

#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
#     * Any string that is a password corresponding to the username
#       specified using the "vmware_server_username" option
#
# Related options:
#     * vmware_server_host
#     * vmware_server_username
#
#  (string value)
#vmware_server_password = vmware

#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection-related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10

#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option takes in the sleep time in seconds for polling an
# ongoing async task as part of the VMware ESX/VC server API call.
#
# Possible Values:
#     * Any positive integer value
#
# Related options:
#     * None
#
#  (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5

#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
#     * Any string that is a valid path to a directory
#
# Related options:
#     * None
#
#  (string value)
#vmware_store_image_dir = /openstack_glance

#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will then be verified using
# the file specified using the "vmware_ca_file" option.
#
# Possible Values:
#     * True
#     * False
#
# Related options:
#     * vmware_ca_file
#
#  (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false

#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
#
# Possible Values:
#     * Any string that is a valid absolute path to a CA file
#
# Related options:
#     * vmware_insecure
#
#  (string value)
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt

#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMware store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the directory will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
#     * Any string of the format:
#       <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
#    * None
#
#  (multi valued)
#vmware_datastores =


[oslo_concurrency]

#
# From oslo.concurrency
#

# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false

# Directory to use for lock files.  For security, the specified directory should
# only be writable by the user running the processes that need locking. Defaults
# to environment variable OSLO_LOCK_PATH. If external locks are used, a lock
# path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched.  Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
glance-swift.conf
# glance-swift.conf.sample
#
# This file is an example config file when
# multiple swift accounts/backing stores are enabled.
#
# Specify the reference name in []
# For each section, specify the auth_address, user and key.
#
# WARNING:
# * If any of auth_address, user or key is not specified,
# the glance-api's swift store will fail to configure

[ref1]
user = tenant:user1
key = key1
auth_version = 2
auth_address = http://localhost:5000/v2.0

[ref2]
user = project_name:user_name2
key = key2
user_domain_id = default
project_domain_id = default
auth_version = 3
auth_address = http://localhost:5000/v3
ovf-metadata.json

The ovf-metadata.json file specifies the OVF properties of interest for the OVF processing task. Configure this to extract metadata from an OVF and create corresponding properties on an image for the Image service. Currently, the task supports only the extraction of properties from the CIM_ProcessorAllocationSettingData namespace, CIM schema.

{
    "cim_pasd": [
        "ProcessorArchitecture",
        "InstructionSet",
        "InstructionSetExtensionName"
    ]
}
policy.json

The /etc/glance/policy.json file defines additional access controls that apply to the Image service.

{
    "context_is_admin":  "role:admin",
    "default": "role:admin",

    "add_image": "",
    "delete_image": "",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "role:admin",
    "copy_from": "",

    "download_image": "",
    "upload_image": "",

    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",

    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",

    "manage_image_cache": "role:admin",

    "get_task": "role:admin",
    "get_tasks": "role:admin",
    "add_task": "role:admin",
    "modify_task": "role:admin",

    "deactivate": "",
    "reactivate": "",

    "get_metadef_namespace": "",
    "get_metadef_namespaces":"",
    "modify_metadef_namespace":"",
    "add_metadef_namespace":"",

    "get_metadef_object":"",
    "get_metadef_objects":"",
    "modify_metadef_object":"",
    "add_metadef_object":"",

    "list_metadef_resource_types":"",
    "get_metadef_resource_type":"",
    "add_metadef_resource_type_association":"",

    "get_metadef_property":"",
    "get_metadef_properties":"",
    "modify_metadef_property":"",
    "add_metadef_property":"",

    "get_metadef_tag":"",
    "get_metadef_tags":"",
    "modify_metadef_tag":"",
    "add_metadef_tag":"",
    "add_metadef_tags":""

}
property-protections-policies.conf
# property-protections-policies.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=policies is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions. You may refer to policies defined
# in policy.json.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the reg ex specified below does not compile, then
# the glance-api service fails to start. (Guide for reg ex python compiler
# used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation(create, read, update, delete) is not specified or misspelt
# then the glance-api service fails to start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Only one policy can be specified per action. If multiple policies are
# specified, then the glance-api service fails to start.

[^x_.*]
create = default
read = default
update = default
delete = default

[.*]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = context_is_admin
property-protections-roles.conf
# property-protections-roles.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=roles is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the reg ex specified below does not compile, then
# glance-api service will not start. (Guide for reg ex python compiler used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation(create, read, update, delete) is not specified or misspelt
# then the glance-api service will not start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Multiple roles can be specified for a given operation. These roles must
# be comma separated.

[^x_.*]
create = admin,member,_member_
read = admin,member,_member_
update = admin,member,_member_
delete = admin,member,_member_

[.*]
create = admin
read = admin
update = admin
delete = admin

New, updated, and deprecated options in Newton for Image service

New options
Option = default value (Type) Help string
[DEFAULT] secure_proxy_ssl_header = None (StrOpt) The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is “HTTP_X_FORWARDED_PROTO”.
[profiler] connection_string = messaging:// (StrOpt) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging://: use oslo_messaging driver for sending notifications.
New default values
Option Previous default value New default value
[DEFAULT] ca_file None /etc/ssl/cafile
[DEFAULT] cert_file None /etc/ssl/certs
[DEFAULT] key_file None /etc/ssl/key/key-file.pem
[DEFAULT] pydev_worker_debug_host None localhost
[DEFAULT] registry_client_ca_file None /etc/ssl/cafile/file.ca
[DEFAULT] registry_client_cert_file None /etc/ssl/certs/file.crt
[DEFAULT] registry_client_key_file None /etc/ssl/key/key-file.pem
[image_format] disk_formats ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, iso ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso
[paste_deploy] config_file None glance-api-paste.ini
[paste_deploy] flavor None keystone
[task] work_dir None /work_dir
[taskflow_executor] conversion_format None raw
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None

Compute relies on an external image service to store virtual machine images and maintain a catalog of available images. By default, Compute is configured to use the Image service (glance), which is currently the only supported image service.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Message service

Overview of zaqar.conf

The zaqar.conf configuration file uses the INI file format, as explained in Configuration file format.

This file is located in /etc/zaqar. If a zaqar.conf file exists in the ~/.zaqar directory, it is used instead of the one in the /etc/zaqar directory. When you manually install the Message service, you must generate the zaqar.conf file using the config samples generator located inside the Zaqar installation directory and customize it according to your preferences.

To generate the sample configuration file zaqar/etc/zaqar.conf.sample:

# pip install tox
$ cd zaqar
$ tox -e genconfig

Where zaqar is your Message service installation directory.

Then copy the Message service configuration sample to the /etc/zaqar directory:

# cp etc/zaqar.conf.sample /etc/zaqar/zaqar.conf

For a list of configuration options, see the tables in this guide.

Important

Do not specify quotes around configuration options.

Sections

Configuration options are grouped by section. The Message service configuration file supports the following sections:

[DEFAULT]
Contains most configuration options. If the documentation for a configuration option does not specify its section, assume that it appears in this section.
[cache]
Configures caching.
[drivers]
Select drivers.
[transport]
Configures general transport options.
[drivers:transport:wsgi]
Configures the WSGI transport driver.
[drivers:transport:websocket]
Configures the Websocket transport driver.
[storage]
Configures general storage options.
[drivers:management_store:mongodb]
Configures the MongoDB management storage driver.
[drivers:message_store:mongodb]
Configures the MongoDB message storage driver.
[drivers:management_store:redis]
Configures the Redis management storage driver.
[drivers:message_store:redis]
Configures the Redis message storage driver.
[drivers:management_store:sqlalchemy]
Configures the SQLAlchemy management storage driver.
[keystone_authtoken]
Configures the Identity service endpoint.
[oslo_policy]
Configures the RBAC policy.
[pooling:catalog]
Configures the pooling catalog.
[signed_url]
Configures signed URLs.

Message API configuration

The Message service has two APIs: the HTTP REST API for the WSGI transport driver, and the Websocket API for the Websocket transport driver. The Message service can use only one transport driver at a time. See Drivers options for driver options.
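For illustration, a zaqar.conf fragment that switches to the Websocket transport driver might look like the following sketch. The option names under the websocket section are assumptions shown only to illustrate the section layout; consult the generated zaqar.conf.sample for the options your installed version actually supports.

```ini
[drivers]
# Select the Websocket transport driver instead of the default wsgi.
transport = websocket

[drivers:transport:websocket]
# Assumed option names, for illustration only.
bind = 0.0.0.0
port = 9000
```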

The functionality and behavior of the APIs are defined by API versions. For example, the Websocket API v2 acts the same as the HTTP REST API v2. For now, there are v1, v1.1, and v2 versions of the HTTP REST API, and only a v2 version of the Websocket API.

Permission control options in each API version:

  • The v1 API does not have any permission options.
  • The v1.1 API has only the admin_mode option, which controls the global permission to access the pools and flavors functionality.
  • The v2 API has only:
    • RBAC policy options: policy_default_rule, policy_dirs, and policy_file, which control the permissions to access each type of functionality for different types of users. See The policy.json file.
    • The secret_key option, which defines a secret key to use for signing special URLs. These are called pre-signed URLs and give temporary permissions to outsiders of the system.
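As a sketch of the v2 permission options, a zaqar.conf fragment might look like this. The values shown are placeholders, not recommendations:

```ini
[oslo_policy]
# RBAC policy options for the v2 API.
policy_file = policy.json
policy_default_rule = default

[signed_url]
# Secret key used to sign pre-signed URLs; this value is a
# placeholder and must be replaced with a random secret.
secret_key = REPLACE_WITH_RANDOM_SECRET
```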
Configuration options

The Message service can be configured by changing the following options:

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_mode = False (Boolean) Activate privileged endpoints.
enable_deprecated_api_versions = (List) List of deprecated API versions to enable.
unreliable = False (Boolean) Disable all reliability constraints.
[notification]  
max_notifier_workers = 10 (Integer) The max amount of the notification workers.
require_confirmation = False (Boolean) Whether the http/https/email subscription need to be confirmed before notification.
smtp_command = /usr/sbin/sendmail -t -oi (String) The command of smtp to send email. The format is “command_name arg1 arg2”.
[signed_url]  
secret_key = None (String) Secret key used to encrypt pre-signed URLs.

Drivers options

The transport and storage drivers used by the Message service are determined by the following options:

Description of drivers configuration options
Configuration option = Default value Description
[drivers]  
management_store = mongodb (String) Storage driver to use as the management store.
message_store = mongodb (String) Storage driver to use as the messaging store.
transport = wsgi (String) Transport driver to use.
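For example, stating the documented defaults explicitly in zaqar.conf looks like this:

```ini
[drivers]
# Transport driver: wsgi (HTTP REST API) or websocket.
transport = wsgi
# Storage drivers for management and message data.
management_store = mongodb
message_store = mongodb
```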

Storage drivers options

Storage back ends

The Message service supports several different storage back ends (storage drivers) for storing management information, messages and their metadata. The recommended storage back end is MongoDB. For information on how to specify the storage back ends, see Drivers options.

When the storage back end is chosen, the corresponding back-end options become active. For example, if Redis is chosen as the management storage back end, the options in [drivers:management_store:redis] section become active.
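For example, switching the management store to Redis in zaqar.conf activates the [drivers:management_store:redis] section (the URI below is the documented default; adjust it to your deployment):

```ini
[drivers]
management_store = redis

[drivers:management_store:redis]
uri = redis://127.0.0.1:6379
```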

Storage layer pipelines

A pipeline is a set of stages needed to process a request. When a new request comes to the Message service, it first goes through the transport layer pipeline and then through one of the storage layer pipelines, depending on the type of operation of each particular request. For example, if the Message service receives a request to perform a queue-related operation, the storage layer pipeline will be the queue pipeline. The Message service always has the actual storage controller as the final storage layer pipeline stage.

By setting the options in the [storage] section of zaqar.conf, you can add additional stages to these storage layer pipelines:

  • Claim pipeline
  • Message pipeline with built-in stage available to use:
    • zaqar.notification.notifier - sends notifications to the queue subscribers on each incoming message to the queue, in other words, enables notifications functionality.
  • Queue pipeline
  • Subscription pipeline

The storage layer pipeline options are empty by default, because additional stages can affect the performance of the Message service. Depending on the stages, the sequence in which the option values are listed may or may not matter.

You can add external stages to the storage layer pipelines. For information on how to write and add your own external stages, see the Writing stages for the storage pipelines tutorial.

Options

The following tables detail the available options:

Description of storage configuration options
Configuration option = Default value Description
[storage]  
claim_pipeline = (List) Pipeline to use for processing claim operations. This pipeline will be consumed before calling the storage driver’s controller methods.
message_pipeline = (List) Pipeline to use for processing message operations. This pipeline will be consumed before calling the storage driver’s controller methods.
queue_pipeline = (List) Pipeline to use for processing queue operations. This pipeline will be consumed before calling the storage driver’s controller methods.
subscription_pipeline = (List) Pipeline to use for processing subscription operations. This pipeline will be consumed before calling the storage driver’s controller methods.
Description of MongoDB configuration options
Configuration option = Default value Description
[drivers:management_store:mongodb]  
database = zaqar (String) Database name.
max_attempts = 1000 (Integer) Maximum number of times to retry a failed operation. Currently only used for retrying a message post.
max_reconnect_attempts = 10 (Integer) Maximum number of times to retry an operation that failed due to a primary node failover.
max_retry_jitter = 0.005 (Floating point) Maximum jitter interval, to be added to the sleep interval, in order to decrease probability that parallel requests will retry at the same instant.
max_retry_sleep = 0.1 (Floating point) Maximum sleep interval between retries (actual sleep time increases linearly according to number of attempts performed).
reconnect_sleep = 0.02 (Floating point) Base sleep interval between attempts to reconnect after a primary node failover. The actual sleep time increases exponentially (power of 2) each time the operation is retried.
ssl_ca_certs = None (String) The ca_certs file contains a set of concatenated “certification authority” certificates, which are used to validate certificates passed from the other end of the connection.
ssl_cert_reqs = CERT_REQUIRED (String) Specifies whether a certificate is required from the other side of the connection, and whether it will be validated if provided. It must be one of the three values CERT_NONE (certificates ignored), CERT_OPTIONAL (not required, but validated if provided), or CERT_REQUIRED (required and validated). If the value of this parameter is not CERT_NONE, then the ssl_ca_certs parameter must point to a file of CA certificates.
ssl_certfile = None (String) The certificate file used to identify the local connection against mongod.
ssl_keyfile = None (String) The private keyfile used to identify the local connection against mongod. If the private key is included in the certfile, then only the ssl_certfile is needed.
uri = None (String) MongoDB connection URI. If an SSL connection is enabled, then ssl_keyfile, ssl_certfile, ssl_cert_reqs, and ssl_ca_certs need to be set accordingly.
[drivers:message_store:mongodb]  
database = zaqar (String) Database name.
max_attempts = 1000 (Integer) Maximum number of times to retry a failed operation. Currently only used for retrying a message post.
max_reconnect_attempts = 10 (Integer) Maximum number of times to retry an operation that failed due to a primary node failover.
max_retry_jitter = 0.005 (Floating point) Maximum jitter interval, to be added to the sleep interval, in order to decrease probability that parallel requests will retry at the same instant.
max_retry_sleep = 0.1 (Floating point) Maximum sleep interval between retries (actual sleep time increases linearly according to number of attempts performed).
partitions = 2 (Integer) Number of databases across which to partition message data, in order to reduce writer lock %. DO NOT change this setting after initial deployment. It MUST remain static. Also, you should not need a large number of partitions to improve performance, esp. if deploying MongoDB on SSD storage.
reconnect_sleep = 0.02 (Floating point) Base sleep interval between attempts to reconnect after a primary node failover. The actual sleep time increases exponentially (power of 2) each time the operation is retried.
ssl_ca_certs = None (String) The ca_certs file contains a set of concatenated “certification authority” certificates, which are used to validate certificates passed from the other end of the connection.
ssl_cert_reqs = CERT_REQUIRED (String) Specifies whether a certificate is required from the other side of the connection, and whether it will be validated if provided. It must be one of the three values CERT_NONE (certificates ignored), CERT_OPTIONAL (not required, but validated if provided), or CERT_REQUIRED (required and validated). If the value of this parameter is not CERT_NONE, then the ssl_ca_certs parameter must point to a file of CA certificates.
ssl_certfile = None (String) The certificate file used to identify the local connection against mongod.
ssl_keyfile = None (String) The private keyfile used to identify the local connection against mongod. If the private key is included in the certfile, then only the ssl_certfile is needed.
uri = None (String) MongoDB connection URI. If an SSL connection is enabled, then ssl_keyfile, ssl_certfile, ssl_cert_reqs, and ssl_ca_certs need to be set accordingly.
Description of Redis configuration options
Configuration option = Default value Description
[drivers:management_store:redis]  
max_reconnect_attempts = 10 (Integer) Maximum number of times to retry an operation that failed due to a redis node failover.
reconnect_sleep = 1.0 (Floating point) Base sleep interval between attempts to reconnect after a redis node failover.
uri = redis://127.0.0.1:6379 (String) Redis connection URI, taking one of three forms. For a direct connection to a Redis server, use the form “redis://host[:port][?options]”, where port defaults to 6379 if not specified. For an HA master-slave Redis cluster using Redis Sentinel, use the form “redis://host1[:port1][,host2[:port2],...,hostN[:portN]][?options]”, where each host specified corresponds to an instance of redis-sentinel. In this form, the name of the Redis master used in the Sentinel configuration must be included in the query string as “master=<name>”. Finally, to connect to a local instance of Redis over a unix socket, you may use the form “redis:/path/to/redis.sock[?options]”. In all forms, the “socket_timeout” option may be specified in the query string. Its value is given in seconds. If not provided, “socket_timeout” defaults to 0.1 seconds.
[drivers:message_store:redis]  
max_reconnect_attempts = 10 (Integer) Maximum number of times to retry an operation that failed due to a redis node failover.
reconnect_sleep = 1.0 (Floating point) Base sleep interval between attempts to reconnect after a redis node failover.
uri = redis://127.0.0.1:6379 (String) Redis connection URI, taking one of three forms. For a direct connection to a Redis server, use the form “redis://host[:port][?options]”, where port defaults to 6379 if not specified. For an HA master-slave Redis cluster using Redis Sentinel, use the form “redis://host1[:port1][,host2[:port2],...,hostN[:portN]][?options]”, where each host specified corresponds to an instance of redis-sentinel. In this form, the name of the Redis master used in the Sentinel configuration must be included in the query string as “master=<name>”. Finally, to connect to a local instance of Redis over a unix socket, you may use the form “redis:/path/to/redis.sock[?options]”. In all forms, the “socket_timeout” option may be specified in the query string. Its value is given in seconds. If not provided, “socket_timeout” defaults to 0.1 seconds.
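The three URI forms described above can be sketched as follows (host names, the socket path, and the master name are placeholders for your deployment):

```ini
# Direct connection to a single Redis server:
uri = redis://127.0.0.1:6379

# Redis Sentinel cluster; each host is a redis-sentinel instance, and
# "master=mymaster" names the Redis master configured in Sentinel:
uri = redis://sentinel1:26379,sentinel2:26379?master=mymaster

# Local Redis over a unix socket, with an explicit socket timeout:
uri = redis:/var/run/redis/redis.sock?socket_timeout=0.1
```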
Description of SQLAlchemy configuration options
Configuration option = Default value Description
[drivers:management_store:sqlalchemy]  
uri = sqlite:///:memory: (String) An SQLAlchemy connection URL.
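A non-default example, assuming a MySQL database named zaqar (the credentials and host are placeholders), follows the usual SQLAlchemy connection-string pattern shown elsewhere in this guide:

```ini
[drivers:management_store:sqlalchemy]
uri = mysql+pymysql://zaqar:ZAQAR_DBPASS@controller/zaqar
```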

Transport drivers options

The Message service uses WSGI as the default transport mechanism. The following tables detail the available options:

Description of transport configuration options
Configuration option = Default value Description
[transport]  
default_claim_grace = 60 (Integer) Defines the message grace period in seconds.
default_claim_ttl = 300 (Integer) Defines how long a message will be in claimed state.
default_message_ttl = 3600 (Integer) Defines how long a message will be accessible.
default_subscription_ttl = 3600 (Integer) Defines how long a subscription will be available.
max_claim_grace = 43200 (Integer) Defines the maximum message grace period in seconds.
max_claim_ttl = 43200 (Integer) Maximum length of a message in claimed state.
max_message_ttl = 1209600 (Integer) Maximum amount of time a message will be available.
max_messages_per_claim_or_pop = 20 (Integer) The maximum number of messages that can be claimed (OR) popped in a single request
max_messages_per_page = 20 (Integer) Defines the maximum number of messages per page.
max_messages_post_size = 262144 (Integer) Defines the maximum size of message posts.
max_queue_metadata = 65536 (Integer) Defines the maximum amount of metadata in a queue.
max_queues_per_page = 20 (Integer) Defines the maximum number of queues per page.
max_subscriptions_per_page = 20 (Integer) Defines the maximum number of subscriptions per page.
subscriber_types = http, https, mailto, trust+http, trust+https (List) Defines supported subscriber types.
Description of WSGI configuration options
Configuration option = Default value Description
[drivers:transport:wsgi]  
bind = 127.0.0.1 (IP) Address on which the self-hosting server will listen.
port = 8888 (Port number) Port on which the self-hosting server will listen.
Description of Websocket configuration options
Configuration option = Default value Description
[drivers:transport:websocket]  
bind = 127.0.0.1 (IP) Address on which the self-hosting server will listen.
external_port = None (Port number) Port on which the service is provided to the user.
port = 9000 (Port number) Port on which the self-hosting server will listen.
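For example, to expose the WSGI transport on all interfaces and shorten the default message TTL, you might set the following in zaqar.conf (the values are illustrative):

```ini
[drivers]
transport = wsgi

[transport]
# Messages expire after 30 minutes instead of the default hour.
default_message_ttl = 1800

[drivers:transport:wsgi]
bind = 0.0.0.0
port = 8888
```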

Notifications options

The notifications feature in the Message service can be enabled by adding the zaqar.notification.notifier stage to the message storage layer pipeline. To do so, ensure that zaqar.notification.notifier is added to the message_pipeline option in the [storage] section of zaqar.conf:

[storage]
message_pipeline = zaqar.notification.notifier

For more information about storage layer pipelines, see Storage drivers options.

Authentication and authorization

All requests to the API may only be performed by an authenticated agent.

The preferred authentication system is the OpenStack Identity service, code-named keystone.

Identity service authentication

To authenticate, an agent issues an authentication request to an Identity service endpoint. In response to valid credentials, Identity service responds with an authentication token and a service catalog that contains a list of all services and endpoints available for the given token.

Multiple endpoints may be returned for the Message service according to the physical locations and performance/availability characteristics of different deployments.

Normally, Identity service middleware provides the X-Project-Id header based on the authentication token submitted by the Message service client.

For this to work, clients must specify a valid authentication token in the X-Auth-Token header for each request to the Message service API. The API validates authentication tokens against Identity service before servicing each request.
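As a minimal sketch, a client request carrying the required headers might be built like this (the endpoint and token values are placeholders; in practice the endpoint comes from the service catalog and the token from the Identity service, and the Client-ID header carries an arbitrary per-client UUID):

```python
import urllib.request
import uuid

ZAQAR_ENDPOINT = "http://controller:8888"  # placeholder endpoint from the service catalog
AUTH_TOKEN = "AUTH_TOKEN"                  # placeholder token issued by the Identity service

def build_list_queues_request(endpoint, token):
    """Build (but do not send) a v2 API request that lists queues."""
    req = urllib.request.Request(endpoint + "/v2/queues")
    # The API validates this token against the Identity service on each request.
    req.add_header("X-Auth-Token", token)
    # Zaqar also expects a per-client UUID in the Client-ID header.
    req.add_header("Client-ID", str(uuid.uuid4()))
    return req

request = build_list_queues_request(ZAQAR_ENDPOINT, AUTH_TOKEN)
```

If authentication is not enabled, the client would instead set the X-Project-Id header itself, as described below.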

No authentication

If authentication is not enabled, clients must provide the X-Project-Id header themselves.

Options

Configure the authentication and authorization strategy through these options:

Description of authentication configuration options
Configuration option = Default value Description
[DEFAULT]  
auth_strategy = (String) Backend to use for authentication. For no auth, keep it empty. Existing strategies: keystone. See also the keystone_authtoken section below
Description of trustee configuration options
Configuration option = Default value Description
[trustee]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
auth_url = None (Unknown) Authentication URL
default_domain_id = None (Unknown) Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default_domain_name = None (Unknown) Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
domain_id = None (Unknown) Domain ID to scope to
domain_name = None (Unknown) Domain name to scope to
password = None (Unknown) User’s password
project_domain_id = None (Unknown) Domain ID containing project
project_domain_name = None (Unknown) Domain name containing project
project_id = None (Unknown) Project ID to scope to
project_name = None (Unknown) Project name to scope to
trust_id = None (Unknown) Trust ID
user_domain_id = None (Unknown) User’s domain id
user_domain_name = None (Unknown) User’s domain name
user_id = None (Unknown) User id
username = None (Unknown) Username
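Putting it together, a zaqar.conf fragment that enables keystone authentication might look like this (the credentials and URLs are placeholders; the exact set of [keystone_authtoken] options depends on your keystonemiddleware version):

```ini
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_type = password
project_name = service
username = zaqar
password = ZAQAR_PASS
```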

Pooling options

The Message service supports pooling.

Pooling aims to make the Message service highly scalable without losing any of its flexibility by allowing users to use multiple back ends.

You can enable and configure pooling with the following options:

Description of pooling configuration options
Configuration option = Default value Description
[DEFAULT]  
pooling = False (Boolean) Enable pooling across multiple storage backends. If pooling is enabled, the storage driver configuration is used to determine where the catalogue/control plane data is kept.
[pooling:catalog]  
enable_virtual_pool = False (Boolean) If enabled, the message_store will be used as the storage for the virtual pool.
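For example, enabling pooling in zaqar.conf is a small change; the pools themselves are then managed separately (for example, through the pools admin API):

```ini
[DEFAULT]
pooling = True

[pooling:catalog]
# Optionally let the message_store act as the storage for the virtual pool.
enable_virtual_pool = True
```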

Messaging log files

The corresponding log file of each Messaging service is stored in the /var/log/zaqar/ directory of the host on which each service runs.

Log files used by Messaging services
Log filename Service that logs to the file
server.log Messaging service

New, updated, and deprecated options in Newton for Message service

New options
Option = default value (Type) Help string
[DEFAULT] enable_deprecated_api_versions = (ListOpt) List of deprecated API versions to enable.
[notification] max_notifier_workers = 10 (IntOpt) The max amount of the notification workers.
[notification] require_confirmation = False (BoolOpt) Whether the http/https/email subscription need to be confirmed before notification.
[trustee] auth_section = None (Opt) Config Section from which to load plugin specific options
[trustee] auth_type = None (Opt) Authentication type to load
[trustee] auth_url = None (Opt) Authentication URL
[trustee] default_domain_id = None (Opt) Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
[trustee] default_domain_name = None (Opt) Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
[trustee] domain_id = None (Opt) Domain ID to scope to
[trustee] domain_name = None (Opt) Domain name to scope to
[trustee] password = None (Opt) User’s password
[trustee] project_domain_id = None (Opt) Domain ID containing project
[trustee] project_domain_name = None (Opt) Domain name containing project
[trustee] project_id = None (Opt) Project ID to scope to
[trustee] project_name = None (Opt) Project name to scope to
[trustee] trust_id = None (Opt) Trust ID
[trustee] user_domain_id = None (Opt) User’s domain id
[trustee] user_domain_name = None (Opt) User’s domain name
[trustee] user_id = None (Opt) User id
[trustee] username = None (Opt) Username
New default values
Option Previous default value New default value
[transport] subscriber_types http, https, mailto http, https, mailto, trust+http, trust+https
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None

The Message service is multi-tenant, fast, reliable, and scalable. It allows developers to share data between distributed application components performing different tasks, without losing messages or requiring each component to be always available.

The service features a RESTful API, which developers can use to send messages between various components of their SaaS and mobile applications, by using a variety of communication patterns.

Key features

The Message service provides the following key features:

  • Choice between two communication transports, both with Identity service support:
    • Firewall-friendly, HTTP-based RESTful API. Many of today’s developers prefer a more web-friendly HTTP API. They value the simplicity and transparency of the protocol, its firewall-friendly nature, and its huge ecosystem of tools, load balancers and proxies. In addition, cloud operators appreciate the scalability aspects of the REST architectural style.
    • Websocket-based API for persistent connections. The Websocket protocol provides communication over persistent connections. Unlike HTTP, where new connections are opened for each request/response pair, Websocket can transfer multiple requests and responses over a single TCP connection. This saves network traffic and minimizes delays.
  • Multi-tenant queues based on Identity service IDs.
  • Support for several common patterns including event broadcasting, task distribution, and point-to-point messaging.
  • Component-based architecture with support for custom back ends and message filters.
  • Efficient reference implementation with an eye toward low latency and high throughput (dependent on back end).
  • Highly available and horizontally scalable.
  • Support for subscriptions to queues. Several notification types are available:
    • Email notifications.
    • Webhook notifications.
    • Websocket notifications.

Components

The Message service contains the following components:

  • Transport back end. The Message service requires the selection of a transport specification responsible for the communication between the endpoints. In addition to the base driver implementation, the Message service also provides the means to add support for other transport mechanisms. The default option is WSGI.
  • Storage back end. The Message service depends on a storage engine for message persistence. In addition to the base driver implementation, the Message service also provides the means to add support for other storage solutions. The default storage option is MongoDB.

To configure your Message service installation, you must define configuration options in these files:

  • zaqar.conf. Contains most of the Message service configuration options. Resides in the /etc/zaqar directory. If there is a zaqar.conf file in the ~/.zaqar directory, it is used instead of the one in the /etc/zaqar directory.
  • policy.json. Contains the RBAC policy for all actions. Only applies to API v2. Resides in the /etc/zaqar directory. If there is a policy.json file in the ~/.zaqar directory, it is used instead of the one in the /etc/zaqar directory. See The policy.json file.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Networking service

Networking configuration options

The options and descriptions listed in this introduction are auto-generated from the code in the Networking service project, which provides software-defined networking between VMs running in Compute. The list contains common options, while the subsections list the options for the various networking plug-ins.

Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
agent_down_time = 75 (Integer) Seconds to regard the agent is down; should be at least twice report_interval, to be sure the agent is down for good.
allow_automatic_dhcp_failover = True (Boolean) Automatically remove networks from offline DHCP agents.
allow_automatic_l3agent_failover = False (Boolean) Automatically reschedule routers from offline L3 agents to online L3 agents.
api_workers = None (Integer) Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best performance.
auth_ca_cert = None (String) Certificate Authority public key (CA cert) file for ssl
auth_strategy = keystone (String) The type of authentication to use
base_mac = fa:16:3e:00:00:00 (String) The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated.
bind_host = 0.0.0.0 (String) The host IP to bind to
bind_port = 9696 (Port number) The port to bind to
cache_url = (String) DEPRECATED: URL to connect to the cache back end. This option is deprecated in the Newton release and will be removed. Please add a [cache] group for oslo.cache in your neutron.conf and add “enable” and “backend” options in this section.
core_plugin = None (String) The core plugin Neutron will use
default_availability_zones = (List) Default value of availability zone hints. The availability zone aware schedulers use this when the resources availability_zone_hints is empty. Multiple availability zones can be specified by a comma separated string. This value can be empty. In this case, even if availability_zone_hints for a resource is empty, availability zone is considered for high availability while scheduling the resource.
dhcp_agent_notification = True (Boolean) Allow sending resource operation notification to DHCP agent
dhcp_agents_per_network = 1 (Integer) Number of DHCP agents scheduled to host a tenant network. If this number is greater than 1, the scheduler automatically assigns multiple DHCP agents for a given tenant network, providing high availability for DHCP service.
dhcp_broadcast_reply = False (Boolean) Use broadcast in DHCP replies.
dhcp_confs = $state_path/dhcp (String) Location to store DHCP server config files.
dhcp_domain = openstacklocal (String) DEPRECATED: Domain to use for building the hostnames. This option is deprecated. It has been moved to neutron.conf as dns_domain. It will be removed in a future release.
dhcp_lease_duration = 86400 (Integer) DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite lease times.
dhcp_load_type = networks (String) Represents the resource type whose load is being reported by the agent. This can be “networks”, “subnets” or “ports”. When specified (default is networks), the server will extract the particular load sent as part of its agent configuration object from the agent report state, which is the number of resources being consumed, at every report_interval. dhcp_load_type can be used in combination with network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler. When the network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured to represent the choice for the resource being balanced. Example: dhcp_load_type=networks
dns_domain = openstacklocal (String) Domain to use for building the hostnames
enable_new_agents = True (Boolean) Agent starts with admin_state_up=False when enable_new_agents=False. In that case, the user’s resources will not be scheduled automatically to the agent until the admin changes admin_state_up to True.
enable_services_on_agents_with_admin_state_down = False (Boolean) Enable services on an agent with admin_state_up False. If this option is False, when admin_state_up of an agent is turned False, services on it will be disabled. Agents with admin_state_up False are not selected for automatic scheduling regardless of this option. But manual scheduling to such agents is available if this option is True.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
external_dns_driver = None (String) Driver for external DNS integration.
global_physnet_mtu = 1500 (Integer) MTU of the underlying physical network. Neutron uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, neutron uses this value without modification. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol overhead from this value. Defaults to 1500, the standard value for Ethernet.
ip_lib_force_root = False (Boolean) Force ip_lib calls to use the root helper
ipam_driver = internal (String) Neutron IPAM (IP address management) driver to use. By default, the reference implementation of the Neutron IPAM driver is used.
mac_generation_retries = 16 (Integer) DEPRECATED: How many times Neutron will retry MAC generation. This option is now obsolete and so is deprecated to be removed in the Ocata release.
max_allowed_address_pair = 10 (Integer) Maximum number of allowed address pairs
max_dns_nameservers = 5 (Integer) Maximum number of DNS nameservers per subnet
max_fixed_ips_per_port = 5 (Integer) DEPRECATED: Maximum number of fixed ips per port. This option is deprecated and will be removed in the Ocata release.
max_rtr_adv_interval = 100 (Integer) MaxRtrAdvInterval setting for radvd.conf
max_subnet_host_routes = 20 (Integer) Maximum number of host routes per subnet
min_rtr_adv_interval = 30 (Integer) MinRtrAdvInterval setting for radvd.conf
periodic_fuzzy_delay = 5 (Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_interval = 40 (Integer) Seconds between running periodic tasks.
report_interval = 300 (Integer) Interval between two metering reports
state_path = /var/lib/neutron (String) Where to store Neutron state files. This directory must be writable by the agent.
vlan_transparent = False (Boolean) If True, then allow plugins that support it to create VLAN transparent networks.
web_framework = legacy (String) This will choose the web framework in which to run the Neutron API server. ‘pecan’ is a new experimental rewrite of the API server.
[AGENT]  
check_child_processes_action = respawn (String) Action to be executed when a child process dies
check_child_processes_interval = 60 (Integer) Interval between checks of child process liveness (seconds), use 0 to disable
debug_iptables_rules = False (Boolean) Duplicate every iptables difference calculation to ensure the format being generated matches the format of iptables-save. This option should not be turned on for production systems because it imposes a performance penalty.
log_agent_heartbeats = False (Boolean) Log agent heartbeats
polling_interval = 2 (Integer) The number of seconds the agent will wait between polling for local device changes.
root_helper = sudo (String) Root helper application. Use ‘sudo neutron-rootwrap /etc/neutron/rootwrap.conf’ to use the real root filter facility. Change to ‘sudo’ to skip the filtering and just run the command directly.
root_helper_daemon = None (String) Root helper daemon application to use when possible.
[profiler]  
connection_string = messaging://

(String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging.

Examples of possible values:

  • messaging://: use oslo_messaging driver for sending notifications.
enabled = False

(Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature).

Possible values:

  • True: Enables the feature
  • False: Disables the feature. The profiling cannot be started via this project’s operations. If the profiling is triggered by another project, this project’s part will be empty.
hmac_keys = SECRET_KEY

(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project.

Both the “enabled” flag and the “hmac_keys” config option must be set to enable profiling. Also, to generate correct profiling information across all services, at least one key needs to be consistent between OpenStack projects. This ensures that it can be used from the client side to generate the trace, containing information from all possible resources.

trace_sqlalchemy = False

(Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced).

Possible values:

  • True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent on it.
  • False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.
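For example, turning on profiling with SQL tracing in neutron.conf requires the three options above (SECRET_KEY is a placeholder that must be shared across the services you want in the trace):

```ini
[profiler]
enabled = True
hmac_keys = SECRET_KEY
trace_sqlalchemy = True
```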
[qos]  
notification_drivers = message_queue (List) Drivers list to use to send the update notification
[service_providers]  
service_provider = [] (Multi-valued) Defines providers for advanced services using the format: <service_type>:<name>:<driver>[:default]
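As an illustration of the <service_type>:<name>:<driver>[:default] format, the following sketch registers a hypothetical HAProxy-based LBaaS v2 provider as the default for its service type. The driver path shown is an example; use the entrypoint shipped by your advanced-services package.

```ini
[service_providers]
# <service_type>:<name>:<driver>[:default]
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
```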
Networking plug-ins

OpenStack Networking introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some OpenStack Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow. These sections detail the configuration options for the various plug-ins.

Modular Layer 2 (ml2) plug-in configuration options

The Modular Layer 2 (ml2) plug-in has two kinds of components: network type drivers and mechanism drivers. You can configure these components separately. The ml2 plug-in also allows administrators to perform a partial specification, where some options are specified explicitly in the configuration and the remainder is chosen automatically by the Networking service.

This section describes the available configuration options.

Note

OpenFlow Agent (ofagent) Mechanism driver has been removed as of Newton.

Description of ML2 configuration options
Configuration option = Default value Description
[ml2]  
extension_drivers = (List) An ordered list of extension driver entrypoints to be loaded from the neutron.ml2.extension_drivers namespace. For example: extension_drivers = port_security,qos
external_network_type = None (String) Default network type for external networks when no provider attributes are specified. By default it is None, which means that if provider attributes are not specified while creating external networks then they will have the same type as tenant networks. Allowed values for external_network_type config option depend on the network type values configured in type_drivers config option.
mechanism_drivers = (List) An ordered list of networking mechanism driver entrypoints to be loaded from the neutron.ml2.mechanism_drivers namespace.
overlay_ip_version = 4 (Integer) IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6.
path_mtu = 0 (Integer) Maximum size of an IP packet (MTU) that can traverse the underlying physical network infrastructure without fragmentation when using an overlay/tunnel protocol. This option allows specifying a physical network MTU value that differs from the default global_physnet_mtu value.
physical_network_mtus = (List) A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val>. This mapping allows specifying a physical network MTU value that differs from the default global_physnet_mtu value.
tenant_network_types = local (List) Ordered list of network_types to allocate as tenant networks. The default value ‘local’ is useful for single-box testing but provides no connectivity between hosts.
type_drivers = local, flat, vlan, gre, vxlan, geneve (List) List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace.
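An illustrative [ml2] section for a common Open vSwitch deployment follows. The driver names are standard entrypoints from the neutron.ml2 namespaces, but which combination you load depends on your environment.

```ini
[ml2]
# Network type drivers to load.
type_drivers = flat,vlan,vxlan
# Tenant networks are allocated as VXLAN overlays.
tenant_network_types = vxlan
# Mechanism drivers, in order.
mechanism_drivers = openvswitch,l2population
# Optional extension drivers, for example port security.
extension_drivers = port_security
```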
Modular Layer 2 (ml2) Flat Type configuration options
Description of ML2 Flat mechanism driver configuration options
Configuration option = Default value Description
[ml2_type_flat]  
flat_networks = * (List) List of physical_network names with which flat networks can be created. Use default ‘*’ to allow flat networks with arbitrary physical_network names. Use an empty list to disable flat networks.
Modular Layer 2 (ml2) Geneve Type configuration options
Description of ML2 Geneve type driver configuration options
Configuration option = Default value Description
[ml2_type_geneve]  
max_header_size = 30 (Integer) Geneve encapsulation header size is dynamic; this value is used to calculate the maximum MTU for the driver. It should cover the sum of the sizes of the outer ETH + IP + UDP + Geneve headers, which is 50 bytes when the Geneve header carries no additional option headers.
vni_ranges = (List) Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of Geneve VNI IDs that are available for tenant network allocation
Modular Layer 2 (ml2) GRE Type configuration options
Description of ML2 GRE configuration options
Configuration option = Default value Description
[ml2_type_gre]  
tunnel_id_ranges = (List) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
Modular Layer 2 (ml2) VLAN Type configuration options
Description of ML2 VLAN configuration options
Configuration option = Default value Description
[ml2_type_vlan]  
network_vlan_ranges = (List) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks.
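A sketch of the network_vlan_ranges format, using hypothetical physical network names:

```ini
[ml2_type_vlan]
# VLANs 1000-1999 on "physnet1" are available for tenant network
# allocation; "physnet2" is usable for provider networks with any
# VLAN tag but allocates no tenant VLANs.
network_vlan_ranges = physnet1:1000:1999,physnet2
```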
Modular Layer 2 (ml2) VXLAN Type configuration options
Description of ML2 VXLAN configuration options
Configuration option = Default value Description
[ml2_type_vxlan]  
vni_ranges = (List) Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation
vxlan_group = None (String) Multicast group for VXLAN. When configured, will enable sending all broadcast traffic to this multicast group. When left unconfigured, will disable multicast VXLAN mode.
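A sketch of a [ml2_type_vxlan] section; the VNI range and multicast group shown are illustrative values:

```ini
[ml2_type_vxlan]
# VNIs 1-1000 are available for tenant network allocation.
vni_ranges = 1:1000
# Optional: enable multicast VXLAN mode for broadcast traffic.
vxlan_group = 239.1.1.1
```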
Modular Layer 2 (ml2) L2 Population Mechanism configuration options
Description of ML2 L2 population configuration options
Configuration option = Default value Description
[l2pop]  
agent_boot_time = 180 (Integer) Delay within which the agent is expected to update existing ports when it restarts
Modular Layer 2 (ml2) SR-IOV Mechanism configuration options
Description of ML2 SR-IOV driver configuration options
Configuration option = Default value Description
[ml2_sriov]  
supported_pci_vendor_devs = None (List) DEPRECATED: Comma-separated list of supported PCI vendor devices, as defined by vendor_id:product_id according to the PCI ID Repository. The default of None accepts all PCI vendor devices. This option is deprecated in the Newton release and will be removed in the Ocata release; starting from Ocata, the mechanism driver accepts all PCI vendor devices.
Agent

Use the following options to alter agent-related settings.

Description of agent configuration options
Configuration option = Default value Description
[DEFAULT]  
external_pids = $state_path/external/pids (String) Location to store child pid files
[AGENT]  
agent_type = Open vSwitch agent (String) DEPRECATED: Selects the Agent Type reported
availability_zone = nova (String) Availability zone of this node
Layer 2 agent configuration options
Description of L2 agent extension configuration options
Configuration option = Default value Description
[agent]  
extensions = (List) Extensions list to use
Linux Bridge agent configuration options
Description of Linux Bridge agent configuration options
Configuration option = Default value Description
[AGENT]  
prevent_arp_spoofing = True (Boolean) DEPRECATED: Enable suppression of ARP responses that don’t match an IP address that belongs to the port from which they originate. Note: This prevents the VMs attached to this agent from spoofing, it doesn’t protect them from other devices which have the capability to spoof (e.g. bare metal or VMs attached to agents without this flag set to True). Spoofing rules will not be added to any ports that have port security disabled. For LinuxBridge, this requires ebtables. For OVS, it requires a version that supports matching ARP headers. This option will be removed in Ocata so the only way to disable protection will be via the port security extension.
quitting_rpc_timeout = 10 (Integer) Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0, rpc timeout won’t be changed
[LINUX_BRIDGE]  
bridge_mappings = (List) List of <physical_network>:<physical_bridge>
physical_interface_mappings = (List) Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical network names to the agent’s node-specific physical network interfaces to be used for flat and VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent.
[VXLAN]  
arp_responder = False (Boolean) Enable local ARP responder which provides local responses instead of performing ARP broadcast into the overlay. Enabling local ARP responder is not fully compatible with the allowed-address-pairs extension.
enable_vxlan = True (Boolean) Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin using linuxbridge mechanism driver
l2_population = False (Boolean) Extension to use alongside ml2 plugin’s l2population mechanism driver. It enables the plugin to populate VXLAN forwarding table.
local_ip = None (IP) IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the ‘overlay_ip_version’ option in the ML2 plug-in configuration file on the neutron server node(s).
tos = None (Integer) TOS for vxlan interface protocol packets.
ttl = None (Integer) TTL for vxlan interface protocol packets.
vxlan_group = 224.0.0.1 (String) Multicast group(s) for vxlan interface. A range of group addresses may be specified by using CIDR notation. Specifying a range allows different VNIs to use different group addresses, reducing or eliminating spurious broadcast traffic to the tunnel endpoints. To reserve a unique group for each possible (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on all the agents.
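A sketch of a Linux Bridge agent configuration with VXLAN overlays enabled. The interface name and IP address are assumptions for illustration; use the interface and overlay endpoint address of the node you are configuring.

```ini
[LINUX_BRIDGE]
# Map the "provider" physical network to this node's interface.
physical_interface_mappings = provider:eth1
[VXLAN]
enable_vxlan = True
# Overlay endpoint address on this node; its IP version must match
# overlay_ip_version on the server.
local_ip = 10.0.0.11
# Populate the VXLAN forwarding table via the l2population driver.
l2_population = True
```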
Open vSwitch agent configuration options
Description of Open vSwitch agent configuration options
Configuration option = Default value Description
[DEFAULT]  
ovs_integration_bridge = br-int (String) Name of Open vSwitch bridge to use
ovs_use_veth = False (Boolean) Use veth for an OVS interface or not. Supports kernels with limited namespace support (for example, RHEL 6.5) so long as ovs_use_veth is set to True.
ovs_vsctl_timeout = 10 (Integer) Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs commands will fail with ALARMCLOCK error.
[AGENT]  
arp_responder = False (Boolean) Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2 l2population driver. Allows the switch (when supporting an overlay) to respond to an ARP request locally without performing a costly ARP broadcast into the overlay.
dont_fragment = True (Boolean) Set or un-set the don’t fragment (DF) bit on outgoing IP packet carrying GRE/VXLAN tunnel.
drop_flows_on_start = False (Boolean) Reset flow table on start. Setting this to True will cause brief traffic interruption.
enable_distributed_routing = False (Boolean) Make the l2 agent run in DVR mode.
l2_population = False (Boolean) Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve tunnel scalability.
minimize_polling = True (Boolean) Minimize polling by monitoring ovsdb for interface changes.
ovsdb_monitor_respawn_interval = 30 (Integer) The number of seconds to wait before respawning the ovsdb monitor after losing communication with it.
prevent_arp_spoofing = True (Boolean) DEPRECATED: Enable suppression of ARP responses that don’t match an IP address that belongs to the port from which they originate. Note: This prevents the VMs attached to this agent from spoofing, it doesn’t protect them from other devices which have the capability to spoof (e.g. bare metal or VMs attached to agents without this flag set to True). Spoofing rules will not be added to any ports that have port security disabled. For LinuxBridge, this requires ebtables. For OVS, it requires a version that supports matching ARP headers. This option will be removed in Ocata so the only way to disable protection will be via the port security extension.
quitting_rpc_timeout = 10 (Integer) Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0, rpc timeout won’t be changed
tunnel_csum = False (Boolean) Set or un-set the tunnel header checksum on outgoing IP packet carrying GRE/VXLAN tunnel.
tunnel_types = (List) Network types supported by the agent (gre and/or vxlan).
veth_mtu = 9000 (Integer) MTU size of veth interfaces
vxlan_udp_port = 4789 (Port number) The UDP port to use for VXLAN tunnels.
[OVS]  
bridge_mappings = (List) Comma-separated list of <physical_network>:<bridge> tuples mapping physical network names to the agent’s node-specific Open vSwitch bridge names to be used for flat and VLAN networks. The length of bridge names should be no more than 11. Each bridge must exist, and should have a physical network interface configured as a port. All physical networks configured on the server should have mappings to appropriate bridges on each agent. Note: If you remove a bridge from this mapping, make sure to disconnect it from the integration bridge as it won’t be managed by the agent anymore.
datapath_type = system (String) OVS datapath to use. ‘system’ is the default value and corresponds to the kernel datapath. To enable the userspace datapath set this value to ‘netdev’.
int_peer_patch_port = patch-tun (String) Peer patch port in integration bridge for tunnel bridge.
integration_bridge = br-int (String) Integration bridge to use. Do not change this parameter unless you have a good reason to. This is the name of the OVS integration bridge. There is one per hypervisor. The integration bridge acts as a virtual ‘patch bay’. All VM VIFs are attached to this bridge and then ‘patched’ according to their network connectivity.
local_ip = None (IP) IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the ‘overlay_ip_version’ option in the ML2 plug-in configuration file on the neutron server node(s).
of_connect_timeout = 30 (Integer) Timeout in seconds to wait for the local switch connecting the controller. Used only for ‘native’ driver.
of_interface = native (String) OpenFlow interface to use.
of_listen_address = 127.0.0.1 (IP) Address to listen on for OpenFlow connections. Used only for ‘native’ driver.
of_listen_port = 6633 (Port number) Port to listen on for OpenFlow connections. Used only for ‘native’ driver.
of_request_timeout = 10 (Integer) Timeout in seconds to wait for a single OpenFlow request. Used only for ‘native’ driver.
ovsdb_connection = tcp:127.0.0.1:6640 (String) The connection string for the native OVSDB backend. Requires the native ovsdb_interface to be enabled.
ovsdb_interface = native (String) The interface for interacting with the OVSDB
tun_peer_patch_port = patch-int (String) Peer patch port in tunnel bridge for integration bridge.
tunnel_bridge = br-tun (String) Tunnel bridge to use.
use_veth_interconnection = False (Boolean) Use veths instead of patch ports to interconnect the integration bridge to physical networks. Supports kernels without Open vSwitch patch port support so long as it is set to True.
vhostuser_socket_dir = /var/run/openvswitch (String) OVS vhost-user socket directory.
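A sketch of an Open vSwitch agent configuration combining a provider bridge mapping with VXLAN tunneling. The bridge name and IP address are illustrative; the bridge must already exist with a physical interface attached as a port.

```ini
[OVS]
# Map the "provider" physical network to an existing bridge.
bridge_mappings = provider:br-provider
# Overlay endpoint address on this node; its IP version must match
# overlay_ip_version on the server.
local_ip = 10.0.0.11
[AGENT]
# Overlay network types this agent supports.
tunnel_types = vxlan
# Learn remote MACs/IPs via the l2population mechanism driver.
l2_population = True
```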
SR-IOV agent configuration options
Description of SR-IOV agent configuration options
Configuration option = Default value Description
[SRIOV_NIC]  
exclude_devices = (List) Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping network_device to the agent’s node-specific list of virtual functions that should not be used for virtual networking. vfs_to_exclude is a semicolon-separated list of virtual functions to exclude from network_device. The network_device in the mapping should appear in the physical_device_mappings list.
physical_device_mappings = (List) Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent’s node-specific physical network device interfaces of SR-IOV physical function to be used for VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent.
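A sketch of an SR-IOV agent configuration. The physical network name, interface name, and VF PCI addresses are assumptions for illustration; substitute the physical function and the virtual functions present on your node.

```ini
[SRIOV_NIC]
# Map the "physnet2" physical network to the SR-IOV physical
# function enp5s0f1.
physical_device_mappings = physnet2:enp5s0f1
# Exclude these VFs (by PCI address) from virtual networking,
# for example because the host uses them directly.
exclude_devices = enp5s0f1:0000:05:00.1;0000:05:00.2
```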
MacVTap Agent configuration options
Description of MacVTap agent configuration options
Configuration option = Default value Description
[AGENT]  
quitting_rpc_timeout = 10 (Integer) Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0, rpc timeout won’t be changed
[macvtap]  
physical_interface_mappings = (List) Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical network names to the agent’s node-specific physical network interfaces to be used for flat and VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent.
IPv6 Prefix Delegation configuration options
Description of IPv6 Prefix Delegation driver configuration options
Configuration option = Default value Description
[DEFAULT]  
pd_confs = $state_path/pd (String) Location to store IPv6 PD files.
pd_dhcp_driver = dibbler (String) Service to handle DHCPv6 Prefix delegation.
vendor_pen = 8888 (String) A decimal value as Vendor’s Registered Private Enterprise Number as required by RFC3315 DUID-EN.
API

Use the following options to alter API-related settings.

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
allow_bulk = True (Boolean) Allow the usage of the bulk API
allow_pagination = True (Boolean) DEPRECATED: Allow the usage of pagination. This option has been deprecated and will now be enabled unconditionally.
allow_sorting = True (Boolean) DEPRECATED: Allow the usage of sorting. This option has been deprecated and will now be enabled unconditionally.
api_extensions_path = (String) The path for API extensions. Note that this can be a colon-separated list of paths. For example: api_extensions_path = extensions:/path/to/more/exts:/even/more/exts. The __path__ of neutron.extensions is appended to this, so if your extensions are in there you don’t need to specify them here.
api_paste_config = api-paste.ini (String) File name for the paste.deploy config for api service
backlog = 4096 (Integer) Number of backlog requests to configure the socket with
client_socket_timeout = 900 (Integer) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ means wait forever.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs).
pagination_max_limit = -1 (String) The maximum number of items returned in a single response. A value of ‘infinite’ or a negative integer means no limit.
retry_until_window = 30 (Integer) Number of seconds to keep retrying to listen
service_plugins = (List) The service plugins Neutron will use
tcp_keepidle = 600 (Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
use_ssl = False (Boolean) Enable SSL on the API server
wsgi_default_pool_size = 100 (Integer) Size of the pool of greenthreads used by wsgi
wsgi_keep_alive = True (Boolean) If False, closes the client socket connection explicitly.
wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f (String) A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_versionedobjects]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
Compute

Use the following options to alter Compute-related settings.

Description of Compute configuration options
Configuration option = Default value Description
[DEFAULT]  
notify_nova_on_port_data_changes = True (Boolean) Send notification to nova when port data (fixed_ips/floatingip) changes so nova can update its cache.
notify_nova_on_port_status_changes = True (Boolean) Send notification to nova when port status changes
nova_client_cert = (String) Client certificate for nova metadata api server.
nova_client_priv_key = (String) Private key of client certificate.
send_events_interval = 2 (Integer) Number of seconds between sending events to nova if there are any events to send.
DHCP agent

Use the following options to alter DHCP-related settings.

Description of DHCP agent configuration options
Configuration option = Default value Description
[DEFAULT]  
advertise_mtu = True (Boolean) DEPRECATED: If True, advertise network MTU values if core plugin calculates them. MTU is advertised to running instances via DHCP and RA MTU options.
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq (String) The driver used to manage the DHCP server.
dnsmasq_base_log_dir = None (String) Base log directory for dnsmasq logging. The log contains DHCP and DNS log information and is useful for debugging issues with either DHCP or DNS. If this option is null, dnsmasq logging is disabled.
dnsmasq_config_file = (String) Override the default dnsmasq settings with this file.
dnsmasq_dns_servers = (List) Comma-separated list of the DNS servers which will be used as forwarders.
dnsmasq_lease_max = 16777216 (Integer) Limit number of leases to prevent a denial-of-service.
dnsmasq_local_resolv = False (Boolean) Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. Effectively removes the ‘–no-resolv’ option from the dnsmasq process arguments. Adding custom DNS resolvers to the ‘dnsmasq_dns_servers’ option disables this feature.
enable_isolated_metadata = False (Boolean) The DHCP server can assist with providing metadata support on isolated networks. Setting this value to True will cause the DHCP server to append specific host routes to the DHCP request. The metadata service will only be activated when the subnet does not contain any router port. The guest instance must be configured to request host routes via DHCP (Option 121). This option doesn’t have any effect when force_metadata is set to True.
enable_metadata_network = False (Boolean) Allows for serving metadata requests coming from a dedicated metadata access network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected to a Neutron router from which the VMs send metadata:1 request. In this case DHCP Option 121 will not be injected in VMs, as they will be able to reach 169.254.169.254 through a router. This option requires enable_isolated_metadata = True.
force_metadata = False (Boolean) In some cases the Neutron router is not present to provide the metadata IP but the DHCP server can be used to provide this info. Setting this value will force the DHCP server to append specific host routes to the DHCP request. If this option is set, then the metadata service will be activated for all the networks.
host = example.domain (String) Hostname to be used by the Neutron server, agents and services running on this machine. All the agents and services running on this machine must use the same host value.
interface_driver = None (String) The driver used to manage the virtual interface.
num_sync_threads = 4 (Integer) Number of threads to use during sync process. Should not exceed connection pool size configured on server.
resync_interval = 5 (Integer) The DHCP agent will resync its state with Neutron to recover from any transient notification or RPC errors. The interval is number of seconds between attempts.
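A sketch of a common dhcp_agent.ini configuration for an Open vSwitch deployment; the driver paths shown are the standard in-tree entrypoints, but which interface driver you need depends on your L2 agent:

```ini
[DEFAULT]
# Interface driver matching the L2 agent in use (OVS here).
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Dnsmasq-based DHCP driver (the default).
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Serve metadata on subnets without a router port.
enable_isolated_metadata = True
```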
Distributed virtual router

Use the following options to alter DVR-related settings.

Description of DVR configuration options
Configuration option = Default value Description
[DEFAULT]  
dvr_base_mac = fa:16:3f:00:00:00 (String) The base mac address used for unique DVR instances by Neutron. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. The ‘dvr_base_mac’ must be different from ‘base_mac’ to avoid mixing them up with MACs allocated for tenant ports. A 4-octet example would be dvr_base_mac = fa:16:3f:4f:00:00. The default is 3 octets.
router_distributed = False (Boolean) System-wide flag to determine the type of router that tenants can create. Only admin can override.
IPv6 router advertisement

Use the following options to alter IPv6 RA settings.

Description of IPv6 router advertisement configuration options
Configuration option = Default value Description
[DEFAULT]  
ra_confs = $state_path/ra (String) Location to store IPv6 RA config files
L3 agent

Use the following options in the l3_agent.ini file for the L3 agent.

Description of L3 agent configuration options
Configuration option = Default value Description
[DEFAULT]  
enable_snat_by_default = True (Boolean) Define the default value of enable_snat if not provided in external_gateway_info.
external_network_bridge = (String) DEPRECATED: Name of bridge used for external network traffic. When this parameter is set, the L3 agent will plug an interface directly into an external bridge which will not allow any wiring by the L2 agent. Using this will result in incorrect port statuses. This option is deprecated and will be removed in Ocata.
ha_confs_path = $state_path/ha_confs (String) Location to store keepalived/conntrackd config files
ha_vrrp_advert_int = 2 (Integer) The advertisement interval in seconds
ha_vrrp_auth_password = None (String) VRRP authentication password
ha_vrrp_auth_type = PASS (String) VRRP authentication type
host = example.domain (String) Hostname to be used by the Neutron server, agents and services running on this machine. All the agents and services running on this machine must use the same host value.
interface_driver = None (String) The driver used to manage the virtual interface.
ipv6_pd_enabled = False (Boolean) Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable environment. Users making subnet creation requests for IPv6 subnets without providing a CIDR or subnetpool ID will be given a CIDR via the Prefix Delegation mechanism. Note that enabling PD will override the behavior of the default IPv6 subnetpool.
l3_ha = False (Boolean) Enable HA mode for virtual routers.
l3_ha_net_cidr = 169.254.192.0/18 (String) Subnet used for the l3 HA admin network.
l3_ha_network_physical_name = (String) The physical network name with which the HA network can be created.
l3_ha_network_type = (String) The network type to use when creating the HA network for an HA router. By default or if empty, the first ‘tenant_network_types’ is used. This is helpful when the VRRP traffic should use a specific network which is not the default one.
max_l3_agents_per_router = 3 (Integer) Maximum number of L3 agents which a HA router will be scheduled on. If it is set to 0 then the router will be scheduled on every agent.
min_l3_agents_per_router = 2 (Integer) DEPRECATED: Minimum number of L3 agents that have to be available in order to allow a new HA router to be scheduled. This option is deprecated in the Newton release and will be removed for the Ocata release where the scheduling of new HA routers will always be allowed.
[AGENT]  
comment_iptables_rules = True (Boolean) Add comments to iptables rules. Set to false to disallow the addition of comments to generated iptables rules that describe each rule’s purpose. System must support the iptables comments module for addition of comments.
use_helper_for_ns_read = True (Boolean) Use the root helper when listing the namespaces on a system. This may not be required depending on the security configuration. If the root helper is not required, set this to False for a performance improvement.
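A sketch of an l3_agent.ini fragment for HA routers using VRRP. The interface driver path is the standard OVS entrypoint; VRRP_PASS is a placeholder for a password of your choosing and must be the same on every L3 agent:

```ini
[DEFAULT]
# Interface driver matching the L2 agent in use (OVS here).
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# VRRP settings for HA routers hosted on this agent; the
# authentication password must match across all L3 agents.
ha_vrrp_auth_type = PASS
ha_vrrp_auth_password = VRRP_PASS
ha_vrrp_advert_int = 2
```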
Metadata Agent

Use the following options in the metadata_agent.ini file for the Metadata agent.

Description of metadata configuration options
Configuration option = Default value Description
[DEFAULT]  
metadata_backlog = 4096 (Integer) Number of backlog requests to configure the metadata server socket with
metadata_proxy_group = (String) Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group).
metadata_proxy_shared_secret = (String) When proxying metadata requests, Neutron signs the Instance-ID header with a shared secret to prevent spoofing. You may select any string for a secret, but it must match here and in the configuration used by the Nova Metadata Server. NOTE: Nova uses the same config key, but in [neutron] section.
metadata_proxy_socket = $state_path/metadata_proxy (String) Location of Metadata Proxy UNIX domain socket
metadata_proxy_socket_mode = deduce (String) Metadata Proxy UNIX domain socket mode, 4 values allowed: ‘deduce’: deduce mode from metadata_proxy_user/group values, ‘user’: set metadata proxy socket mode to 0o644, to use when metadata_proxy_user is agent effective user or root, ‘group’: set metadata proxy socket mode to 0o664, to use when metadata_proxy_group is agent effective group or root, ‘all’: set metadata proxy socket mode to 0o666, to use otherwise.
metadata_proxy_user = (String) User (uid or name) running metadata proxy after its initialization (if empty: agent effective user).
metadata_proxy_watch_log = None (Boolean) Enable/Disable log watch by metadata proxy. It should be disabled when metadata_proxy_user/group is not allowed to read/write its log file and copytruncate logrotate option must be used if logrotate is enabled on metadata proxy log files. Option default value is deduced from metadata_proxy_user: watch log is enabled if metadata_proxy_user is agent effective user id/name.
metadata_workers = 0 (Integer) Number of separate worker processes for metadata server (defaults to half of the number of CPUs)
nova_metadata_insecure = False (Boolean) Allow to perform insecure SSL (https) requests to nova metadata
nova_metadata_ip = 127.0.0.1 (String) IP address used by Nova metadata server.
nova_metadata_port = 8775 (Port number) TCP Port used by Nova metadata server.
nova_metadata_protocol = http (String) Protocol to access nova metadata, http or https
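A sketch of a metadata_agent.ini fragment. The host name "controller" and METADATA_SECRET are placeholders; the shared secret must match the metadata_proxy_shared_secret value configured in the [neutron] section of nova.conf:

```ini
[DEFAULT]
# Nova metadata service endpoint.
nova_metadata_ip = controller
nova_metadata_port = 8775
# Shared secret used to sign the Instance-ID header; must match
# the value configured on the Nova side.
metadata_proxy_shared_secret = METADATA_SECRET
```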

Note

Previously, the neutron metadata agent connected to the neutron server via the REST API using a neutron client. This was inefficient because keystone was fully involved in the authentication process and became overloaded.

Since the Kilo release, the neutron metadata agent uses RPC by default to connect to the server, which is the typical way of interacting between the neutron server and its agents. If the neutron server does not support metadata RPC, the neutron client is used instead.

Warning

Do not run the neutron-ns-metadata-proxy proxy as root on a node with the L3 agent running. In OpenStack Kilo and newer, you can change the permissions of neutron-ns-metadata-proxy after the proxy installation by using the metadata_proxy_user and metadata_proxy_group options.

Metering Agent

Use the following options in the metering_agent.ini file for the Metering agent.

Description of metering agent configuration options
Configuration option = Default value Description
[DEFAULT]  
driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver (String) Metering driver
measure_interval = 30 (Integer) Interval between two metering measures
[AGENT]  
report_interval = 30 (Floating point) Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time.
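As a sketch, a metering_agent.ini that replaces the default no-op driver with the in-tree iptables metering driver might look like this (the driver path is the standard iptables driver shipped with neutron; the intervals simply restate the defaults):

```ini
[DEFAULT]
# Collect traffic measurements with the iptables driver every 30 seconds
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
measure_interval = 30

[AGENT]
# Report state to the server every 30 seconds
report_interval = 30
```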
Nova

Use the following options in the neutron.conf file to change nova-related settings.

Description of nova configuration options
Configuration option = Default value Description
[nova]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
endpoint_type = public (String) Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
region_name = None (String) Name of nova region to use. Useful if keystone manages more than one region.
timeout = None (Integer) Timeout value for http requests
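A minimal [nova] section in neutron.conf might look like the following sketch. It assumes the keystoneauth password plugin is used for auth_type; the region name and certificate path are illustrative:

```ini
[nova]
# Authenticate against keystone with the password plugin
auth_type = password
region_name = RegionOne
endpoint_type = internal
# CA bundle used to verify HTTPS connections to nova (path is illustrative)
cafile = /etc/ssl/certs/ca.pem
timeout = 30
```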
Policy

Use the following options in the neutron.conf file to change policy settings.

Description of policy configuration options
Configuration option = Default value Description
[DEFAULT]  
allow_overlapping_ips = False (Boolean) Allow overlapping IP support in Neutron. Attention: this option MUST be set to False if Neutron is used in conjunction with Nova security groups.
Quotas

Use the following options in the neutron.conf file for the quota system.

Description of quotas configuration options
Configuration option = Default value Description
[DEFAULT]  
max_routes = 30 (Integer) Maximum number of routes per router
[QUOTAS]  
default_quota = -1 (Integer) Default number of resource allowed per tenant. A negative value means unlimited.
quota_driver = neutron.db.quota.driver.DbQuotaDriver (String) Default driver to use for quota checks.
quota_firewall = 10 (Integer) Number of firewalls allowed per tenant. A negative value means unlimited.
quota_firewall_policy = 10 (Integer) Number of firewall policies allowed per tenant. A negative value means unlimited.
quota_firewall_rule = 100 (Integer) Number of firewall rules allowed per tenant. A negative value means unlimited.
quota_floatingip = 50 (Integer) Number of floating IPs allowed per tenant. A negative value means unlimited.
quota_healthmonitor = -1 (Integer) Number of health monitors allowed per tenant. A negative value means unlimited.
quota_listener = -1 (Integer) Number of Loadbalancer Listeners allowed per tenant. A negative value means unlimited.
quota_loadbalancer = 10 (Integer) Number of LoadBalancers allowed per tenant. A negative value means unlimited.
quota_member = -1 (Integer) Number of pool members allowed per tenant. A negative value means unlimited.
quota_network = 10 (Integer) Number of networks allowed per tenant. A negative value means unlimited.
quota_pool = 10 (Integer) Number of pools allowed per tenant. A negative value means unlimited.
quota_port = 50 (Integer) Number of ports allowed per tenant. A negative value means unlimited.
quota_rbac_policy = 10 (Integer) Default number of RBAC entries allowed per tenant. A negative value means unlimited.
quota_router = 10 (Integer) Number of routers allowed per tenant. A negative value means unlimited.
quota_security_group = 10 (Integer) Number of security groups allowed per tenant. A negative value means unlimited.
quota_security_group_rule = 100 (Integer) Number of security rules allowed per tenant. A negative value means unlimited.
quota_subnet = 10 (Integer) Number of subnets allowed per tenant. A negative value means unlimited.
track_quota_usage = True (Boolean) Keep track of current resource quota usage in the database. Plugins that do not leverage the neutron database should set this flag to False.
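For example, a [QUOTAS] section in neutron.conf that raises the per-tenant network limits and removes the floating IP cap might look like this (the numbers are illustrative):

```ini
[QUOTAS]
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_network = 20
quota_subnet = 20
quota_port = 100
# A negative value means unlimited
quota_floatingip = -1
track_quota_usage = True
```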
Scheduler

Use the following options in the neutron.conf file to change scheduler settings.

Description of scheduler configuration options
Configuration option = Default value Description
[DEFAULT]  
network_auto_schedule = True (Boolean) Allow auto scheduling networks to DHCP agent.
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler (String) Driver to use for scheduling network to DHCP agent
router_auto_schedule = True (Boolean) Allow auto scheduling of routers to L3 agent.
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler (String) Driver to use for scheduling router to a default L3 agent
Security Groups

Use the following options in the configuration file for your driver to change security group settings.

Description of security groups configuration options
Configuration option = Default value Description
[SECURITYGROUP]  
enable_ipset = True (Boolean) Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node.
enable_security_group = True (Boolean) Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API.
firewall_driver = None (String) Driver for security groups firewall in the L2 agent
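A typical [SECURITYGROUP] section for an Open vSwitch L2 agent might look like the following sketch. The firewall_driver class path shown is the in-tree hybrid iptables driver commonly used with the OVS agent; enable_ipset requires the ipset package on the agent node:

```ini
[SECURITYGROUP]
enable_security_group = True
# Requires the ipset package on the L2 agent node
enable_ipset = True
# Hybrid iptables driver for the Open vSwitch agent
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```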

Note

Networking uses iptables to implement security group functions. When the enable_ipset option is enabled, the L2 agent uses IPset to improve security group performance, because an IPset is a hash set whose lookup cost does not grow with the number of elements.

When a port is created, the L2 agent adds an additional IPset chain to its iptables chain. If the security group that the port belongs to has rules that reference other security groups, the members of those security groups are added to the IPset chain.

Previously, any change to the members of a security group forced a reload of the iptables rules, which is expensive. With IPset enabled on the L2 agent, a membership-only change requires just an IPset update instead of an iptables reload.

Note

A single default security group has been introduced in order to avoid race conditions when creating a tenant's default security group. The race conditions are caused by the uniqueness check of a new security group name. The default_security_group table implements this group: it uses the tenant_id field as its primary key and stores security_group_id, the identifier of the tenant's default security group. The migration that introduces this table includes a sanity check that verifies that no tenant has a duplicate default security group.

Misc
Description of FDB agent configuration options
Configuration option = Default value Description
[FDB]  
shared_physical_device_mappings = (List) Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent’s node-specific shared physical network device between SR-IOV and OVS or SR-IOV and linux bridge
Description of QoS configuration options
Configuration option = Default value Description
[QOS]  
kernel_hz = 250 (Integer) Value of host kernel tick rate (hz) for calculating minimum burst value in bandwidth limit rules for a port with QoS. See kernel configuration file for HZ value and tc-tbf manual for more information.
tbf_latency = 50 (Integer) Value of latency (ms) for calculating size of queue for a port with QoS. See tc-tbf manual for more information.
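For example, a [QOS] section tuned for a host whose kernel runs at 1000 Hz might look like this (check the host kernel configuration for the actual HZ value; 1000 is illustrative):

```ini
[QOS]
# Match the kernel tick rate (HZ) of the L2 agent host
kernel_hz = 1000
# Latency in ms used to size the token bucket queue (see tc-tbf)
tbf_latency = 50
```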

Firewall-as-a-Service configuration options

Use the following options in the fwaas_driver.ini file for the FWaaS driver.

Description of Firewall-as-a-Service configuration options
Configuration option = Default value Description
[fwaas]  
agent_version = v1 (String) Firewall agent class
driver = (String) Name of the FWaaS Driver
enabled = False (Boolean) Enable FWaaS
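As a sketch, a fwaas_driver.ini that enables FWaaS v1 with the iptables driver might look like the following. The driver class path shown is the standard iptables driver shipped with neutron-fwaas:

```ini
[fwaas]
enabled = True
agent_version = v1
driver = neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
```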

Load-Balancer-as-a-Service configuration options

Use the following options in the neutron_lbaas.conf file for the LBaaS agent.

Note

The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.

Description of Load-Balancer-as-a-Service configuration options
Configuration option = Default value Description
[certificates]  
barbican_auth = barbican_acl_auth (String) Name of the Barbican authentication method to use
cert_manager_type = barbican (String) Certificate Manager plugin. Defaults to barbican.
storage_path = /var/lib/neutron-lbaas/certificates/ (String) Absolute path to the certificate storage directory. Defaults to env[OS_LBAAS_TLS_STORAGE].

Use the following options in the lbaas_agent.ini file for the LBaaS agent.

Description of LBaaS agent configuration options
Configuration option = Default value Description
[DEFAULT]  
debug = False (Boolean) If set to true, the logging level will be set to DEBUG instead of the default INFO level. Mutable: this option can be changed without restarting.
device_driver = ['neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver'] (Multi-valued) Drivers used to manage loadbalancing devices
interface_driver = None (String) The driver used to manage the virtual interface.
periodic_interval = 40 (Integer) Seconds between running periodic tasks.
[haproxy]  
loadbalancer_state_path = $state_path/lbaas (String) Location to store config and state files
send_gratuitous_arp = 3 (Integer) When deleting and re-adding the same VIP, send this many gratuitous ARPs to flush the ARP cache in the router. Set it to 0 or less to disable this feature.
user_group = nogroup (String) The user group
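A minimal lbaas_agent.ini for the HAProxy namespace driver might look like the following sketch. The interface driver shown assumes an Open vSwitch deployment, and the haproxy user group name is illustrative:

```ini
[DEFAULT]
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
# Assumes Open vSwitch manages the virtual interfaces
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
periodic_interval = 40

[haproxy]
user_group = haproxy
send_gratuitous_arp = 3
```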

Use the following options in the services_lbaas.conf file for the LBaaS agent.

Description of LBaaS Embrane, Radware, NetScaler, HAproxy, octavia plug-in configuration options
Configuration option = Default value Description
[DEFAULT]  
loadbalancer_scheduler_driver = neutron_lbaas.agent_scheduler.ChanceScheduler (String) Driver to use for scheduling to a default loadbalancer agent
[haproxy]  
jinja_config_template = /usr/lib/python/site-packages/neutron-lbaas/neutron_lbaas/drivers/haproxy/templates/haproxy.loadbalancer.j2 (String) Jinja template file for haproxy configuration
[octavia]  
allocates_vip = False (Boolean) True if Octavia will be responsible for allocating the VIP. False if neutron-lbaas will allocate it and pass to Octavia.
base_url = http://127.0.0.1:9876 (String) URL of Octavia controller root
request_poll_interval = 3 (Integer) Interval in seconds to poll octavia when an entity is created, updated, or deleted.
request_poll_timeout = 100 (Integer) Time to stop polling octavia when a status of an entity does not change.
[radwarev2]  
child_workflow_template_names = manage_l3 (List) Name of child workflow templates used. Default: manage_l3
ha_secondary_address = None (String) IP address of secondary vDirect server.
service_adc_type = VA (String) Service ADC type. Default: VA.
service_adc_version = (String) Service ADC version.
service_cache = 20 (Integer) Size of service cache. Default: 20.
service_compression_throughput = 100 (Integer) Service compression throughput. Default: 100.
service_ha_pair = False (Boolean) Enables or disables the Service HA pair. Default: False.
service_isl_vlan = -1 (Integer) A required VLAN for the interswitch link to use.
service_resource_pool_ids = (List) Resource pool IDs.
service_session_mirroring_enabled = False (Boolean) Enable or disable Alteon interswitch link for stateful session failover. Default: False.
service_ssl_throughput = 100 (Integer) Service SSL throughput. Default: 100.
service_throughput = 1000 (Integer) Service throughput. Default: 1000.
stats_action_name = stats (String) Name of the workflow action for statistics. Default: stats.
vdirect_address = None (String) IP address of vDirect server.
vdirect_password = radware (String) vDirect user password.
vdirect_user = vDirect (String) vDirect user name.
workflow_action_name = apply (String) Name of the workflow action. Default: apply.
workflow_params = {'data_ip_address': '192.168.200.99', 'ha_network_name': 'HA-Network', 'ha_port': 2, 'allocate_ha_ips': True, 'ha_ip_pool_name': 'default', 'allocate_ha_vrrp': True, 'data_port': 1, 'gateway': '192.168.200.1', 'twoleg_enabled': '_REPLACE_', 'data_ip_mask': '255.255.255.0'} (Dict) Parameter for l2_l3 workflow constructor.
workflow_template_name = os_lb_v2 (String) Name of the workflow template. Default: os_lb_v2.
[radwarev2_debug]  
configure_l3 = True (Boolean) Configure ADC with L3 parameters?
configure_l4 = True (Boolean) Configure ADC with L4 parameters?
provision_service = True (Boolean) Provision ADC service?
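For example, a services_lbaas.conf that delegates load balancing to Octavia might contain the following sketch. The controller address is illustrative, and allocates_vip = True assumes Octavia is responsible for allocating the VIP:

```ini
[DEFAULT]
loadbalancer_scheduler_driver = neutron_lbaas.agent_scheduler.ChanceScheduler

[octavia]
# Point neutron-lbaas at the Octavia API (address is illustrative)
base_url = http://192.0.2.10:9876
# Let Octavia allocate the VIP instead of neutron-lbaas
allocates_vip = True
request_poll_interval = 3
request_poll_timeout = 100
```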
Octavia configuration options

Octavia is an operator-grade open source load balancing solution. Use the following options in the /etc/octavia/octavia.conf file to configure the octavia service.

Description of authorization token configuration options
Configuration option = Default value Description
[keystone_authtoken_v3]  
admin_project_domain = default (String) Admin project keystone authentication domain
admin_user_domain = default (String) Admin user keystone authentication domain
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
allow_bulk = True (Boolean) Allow the usage of the bulk API
allow_pagination = False (Boolean) Allow the usage of the pagination
allow_sorting = False (Boolean) Allow the usage of the sorting
api_extensions_path = (String) The path for API extensions
api_handler = queue_producer (String) The handler that the API communicates with
api_paste_config = api-paste.ini (String) The API paste config file to use
auth_strategy = keystone (String) The type of authentication to use
bind_host = 127.0.0.1 (IP) The host IP to bind to
bind_port = 9876 (Port number) The port to bind to
control_exchange = octavia (String) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
host = localhost (String) The hostname Octavia is running on
octavia_plugins = hot_plug_plugin (String) Name of the controller plugin to use
pagination_max_limit = -1 (String) The maximum number of items returned in a single response. The string ‘infinite’ or a negative integer value means ‘no limit’
[amphora_agent]  
agent_server_ca = /etc/octavia/certs/client_ca.pem (String) The ca which signed the client certificates
agent_server_cert = /etc/octavia/certs/server.pem (String) The server certificate for the agent.py server to use
agent_server_network_dir = /etc/netns/amphora-haproxy/network/interfaces.d/ (String) The directory where new network interfaces are located
agent_server_network_file = None (String) The file where the network interfaces are located. Specifying this will override any value set for agent_server_network_dir.
amphora_id = None (String) The amphora ID.
[anchor]  
password = None (String) Anchor password
url = http://localhost:9999/v1/sign/default (String) Anchor URL
username = None (String) Anchor username
[certificates]  
barbican_auth = barbican_acl_auth (String) Name of the Barbican authentication method to use
ca_certificate = /etc/ssl/certs/ssl-cert-snakeoil.pem (String) Absolute path to the CA Certificate for signing. Defaults to env[OS_OCTAVIA_TLS_CA_CERT].
ca_private_key = /etc/ssl/private/ssl-cert-snakeoil.key (String) Absolute path to the Private Key for signing. Defaults to env[OS_OCTAVIA_TLS_CA_KEY].
ca_private_key_passphrase = None (String) Passphrase for the Private Key. Defaults to env[OS_OCTAVIA_CA_KEY_PASS] or None.
cert_generator = local_cert_generator (String) Name of the cert generator to use
cert_manager = barbican_cert_manager (String) Name of the cert manager to use
endpoint_type = publicURL (String) The endpoint_type to be used for barbican service.
region_name = None (String) Region in Identity service catalog to use for communication with the barbican service.
signing_digest = sha256 (String) Certificate signing digest. Defaults to env[OS_OCTAVIA_CA_SIGNING_DIGEST] or “sha256”.
storage_path = /var/lib/octavia/certificates/ (String) Absolute path to the certificate storage directory. Defaults to env[OS_OCTAVIA_TLS_STORAGE].
[controller_worker]  
amp_active_retries = 10 (Integer) Retry attempts to wait for Amphora to become active
amp_active_wait_sec = 10 (Integer) Seconds to wait between checks on whether an Amphora has become active
amp_boot_network_list = (List) List of networks to attach to the Amphorae. All networks defined in the list will be attached to each amphora.
amp_flavor_id = (String) Nova instance flavor id for the Amphora
amp_image_id = (String) DEPRECATED: Glance image id for the Amphora image to boot Superseded by amp_image_tag option.
amp_image_owner_id = (String) Restrict glance image selection to a specific owner ID. This is a recommended security setting.
amp_image_tag = (String) Glance image tag for the Amphora image to boot. Use this option to be able to update the image without reconfiguring Octavia. Ignored if amp_image_id is defined.
amp_network = (String) DEPRECATED: Network to attach to the Amphorae. Replaced by amp_boot_network_list.
amp_secgroup_list = (List) List of security groups to attach to the Amphora.
amp_ssh_access_allowed = True (Boolean) Determines whether or not to allow access to the Amphorae
amp_ssh_key_name = (String) SSH key name used to boot the Amphora
amphora_driver = amphora_noop_driver (String) Name of the amphora driver to use
cert_generator = local_cert_generator (String) Name of the cert generator to use
client_ca = /etc/octavia/certs/ca_01.pem (String) Client CA for the amphora agent to use
compute_driver = compute_noop_driver (String) Name of the compute driver to use
loadbalancer_topology = SINGLE (String) Load balancer topology configuration. SINGLE - One amphora per load balancer. ACTIVE_STANDBY - Two amphora per load balancer.
network_driver = network_noop_driver (String) Name of the network driver to use
user_data_config_drive = False (Boolean) If True, build cloud-init user-data that is passed to the config drive on Amphora boot instead of personality files. If False, utilize personality files.
[glance]  
ca_certificates_file = None (String) CA certificates file path
endpoint = None (String) A new endpoint to override the endpoint in the keystone catalog.
endpoint_type = publicURL (String) Endpoint interface in identity service to use
insecure = False (Boolean) Disable certificate validation on SSL connections
region_name = None (String) Region in Identity service catalog to use for communication with the OpenStack services.
service_name = None (String) The name of the glance service in the keystone catalog
[haproxy_amphora]  
base_cert_dir = /var/lib/octavia/certs (String) Base directory for cert storage.
base_path = /var/lib/octavia (String) Base directory for amphora files.
bind_host = 0.0.0.0 (IP) The host IP to bind to
bind_port = 9443 (Port number) The port to bind to
client_cert = /etc/octavia/certs/client.pem (String) The client certificate to talk to the agent
connection_max_retries = 300 (Integer) Retry threshold for connecting to amphorae.
connection_retry_interval = 5 (Integer) Retry timeout between connection attempts in seconds.
haproxy_cmd = /usr/sbin/haproxy (String) The full path to haproxy
haproxy_stick_size = 10k (String) Size of the HAProxy stick table. Accepts k, m, g suffixes. Example: 10k
haproxy_template = None (String) Custom haproxy template.
respawn_count = 2 (Integer) The respawn count for haproxy’s upstart script
respawn_interval = 2 (Integer) The respawn interval for haproxy’s upstart script
rest_request_conn_timeout = 10 (Floating point) The time in seconds to wait for a REST API to connect.
rest_request_read_timeout = 60 (Floating point) The time in seconds to wait for a REST API response.
server_ca = /etc/octavia/certs/server_ca.pem (String) The ca which signed the server certificates
use_upstart = True (Boolean) If False, use sysvinit.
[health_manager]  
bind_ip = 127.0.0.1 (IP) IP address the controller will listen on for heart beats
bind_port = 5555 (Port number) Port number the controller will listen on for heartbeats
controller_ip_port_list = (List) List of controller ip and port pairs for the heartbeat receivers. Example 127.0.0.1:5555, 192.168.0.1:5555
event_streamer_driver = noop_event_streamer (String) Specifies which driver to use for the event_streamer for syncing the octavia and neutron_lbaas databases. If you do not need to sync the database, or are running octavia in stand-alone mode, use the noop_event_streamer.
failover_threads = 10 (Integer) Number of threads performing amphora failovers.
health_check_interval = 3 (Integer) Sleep time between health checks in seconds.
heartbeat_interval = 10 (Integer) Sleep time between sending heartbeats.
heartbeat_key = None (String) Key used to validate the amphora sending the message
heartbeat_timeout = 60 (Integer) Interval, in seconds, to wait before failing over an amphora.
sock_rlimit = 0 (Integer) Sets the value of the heartbeat receive buffer
status_update_threads = 50 (Integer) Number of threads performing amphora status update.
[house_keeping]  
amphora_expiry_age = 604800 (Integer) Amphora expiry age in seconds
cert_expiry_buffer = 1209600 (Integer) Seconds until certificate expiration
cert_interval = 3600 (Integer) Certificate check interval in seconds
cert_rotate_threads = 10 (Integer) Number of threads performing amphora certificate rotation
cleanup_interval = 30 (Integer) DB cleanup interval in seconds
load_balancer_expiry_age = 604800 (Integer) Load balancer expiry age in seconds
spare_amphora_pool_size = 0 (Integer) Number of spare amphorae
spare_check_interval = 30 (Integer) Spare check interval in seconds
[keepalived_vrrp]  
vrrp_advert_int = 1 (Integer) Amphora role and priority advertisement interval in seconds.
vrrp_check_interval = 5 (Integer) VRRP health check script run interval in seconds.
vrrp_fail_count = 2 (Integer) Number of successive failures before transition to a fail state.
vrrp_garp_refresh_count = 2 (Integer) Number of gratuitous ARP announcements to make on each refresh interval.
vrrp_garp_refresh_interval = 5 (Integer) Time in seconds between gratuitous ARP announcements from the MASTER.
vrrp_success_count = 2 (Integer) Number of consecutive successes before transition to a success state.
[networking]  
lb_network_name = None (String) Name of amphora internal network
max_retries = 15 (Integer) The maximum attempts to retry an action with the networking service.
port_detach_timeout = 300 (Integer) Seconds to wait for a port to detach from an amphora.
retry_interval = 1 (Integer) Seconds to wait before retrying an action with the networking service.
[neutron]  
ca_certificates_file = None (String) CA certificates file path
endpoint = None (String) A new endpoint to override the endpoint in the keystone catalog.
endpoint_type = publicURL (String) Endpoint interface in identity service to use
insecure = False (Boolean) Disable certificate validation on SSL connections
region_name = None (String) Region in Identity service catalog to use for communication with the OpenStack services.
service_name = None (String) The name of the neutron service in the keystone catalog
[nova]  
ca_certificates_file = None (String) CA certificates file path
enable_anti_affinity = False (Boolean) Flag to indicate if nova anti-affinity feature is turned on.
endpoint = None (String) A new endpoint to override the endpoint in the keystone catalog.
endpoint_type = publicURL (String) Endpoint interface in identity service to use
insecure = False (Boolean) Disable certificate validation on SSL connections
region_name = None (String) Region in Identity service catalog to use for communication with the OpenStack services.
service_name = None (String) The name of the nova service in the keystone catalog
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[task_flow]  
engine = serial (String) TaskFlow engine to use
max_workers = 5 (Integer) The maximum number of workers
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
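Putting a few of these sections together, a minimal non-noop octavia.conf might look like the following sketch. The driver aliases shown (amphora_haproxy_rest_driver, compute_nova_driver, allowed_address_pairs_driver) are the stock drivers shipped with Octavia; the image tag, flavor ID, network ID, addresses, and heartbeat key are all illustrative:

```ini
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 9876

[controller_worker]
# Boot amphorae from a tagged Glance image on the management network
amp_image_tag = amphora
amp_flavor_id = 200
amp_boot_network_list = lb-mgmt-net-id
amphora_driver = amphora_haproxy_rest_driver
compute_driver = compute_nova_driver
network_driver = allowed_address_pairs_driver

[health_manager]
# Address and key are illustrative; use a strong key in production
bind_ip = 192.0.2.5
controller_ip_port_list = 192.0.2.5:5555
heartbeat_key = insecure-example-key
```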

VPN-as-a-Service configuration options

Use the following options in the vpnaas_agent.ini file for the VPNaaS agent.

Description of VPN-as-a-Service configuration options
Configuration option = Default value Description
[vpnagent]  
vpn_device_driver = ['neutron_vpnaas.services.vpn.device_drivers.ipsec.OpenSwanDriver, neutron_vpnaas.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver, neutron_vpnaas.services.vpn.device_drivers.vyatta_ipsec.VyattaIPSecDriver, neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.fedora_strongswan_ipsec.FedoraStrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver'] (Multi-valued) The vpn device drivers Neutron will use
Description of VPNaaS IPsec plug-in configuration options
Configuration option = Default value Description
[cisco_csr_ipsec]  
status_check_interval = 60 (Integer) Status check interval for Cisco CSR IPSec connections
[ipsec]  
config_base_dir = $state_path/ipsec (String) Location to store ipsec server config files
enable_detailed_logging = False (Boolean) Enable detailed logging for the ipsec pluto process. If this flag is set to True, the detailed log is written to config_base_dir/<pid>/log. Note: this setting applies to OpenSwan and LibreSwan only. StrongSwan logs to syslog.
ipsec_status_check_interval = 60 (Integer) Interval for checking ipsec status
[pluto]  
restart_check_config = False (Boolean) Enable this flag to avoid unnecessary pluto restarts
shutdown_check_back_off = 1.5 (Floating point) A factor to increase the retry interval for each retry
shutdown_check_retries = 5 (Integer) The maximum number of retries for checking for pluto daemon shutdown
shutdown_check_timeout = 1 (Integer) Initial interval in seconds for checking if pluto daemon is shutdown
Description of VPNaaS Openswan plug-in configuration options
Configuration option = Default value Description
[openswan]  
ipsec_config_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/openswan/ipsec.conf.template (String) Template file for ipsec configuration
ipsec_secret_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/openswan/ipsec.secret.template (String) Template file for ipsec secret configuration
Description of VPNaaS strongSwan plug-in configuration options
Configuration option = Default value Description
[strongswan]  
default_config_area = /etc/strongswan.d (String) The area where default StrongSwan configuration files are located.
ipsec_config_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.conf.template (String) Template file for ipsec configuration.
ipsec_secret_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.secret.template (String) Template file for ipsec secret configuration.
strongswan_config_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/strongswan.conf.template (String) Template file for strongswan configuration.

Note

strongSwan and Openswan cannot both be installed and enabled at the same time. The vpn_device_driver option in the vpnaas_agent.ini file lists the VPN device drivers that the Networking service uses. You must include either the strongSwan or the Openswan driver in the list, but not both.

Important

Ensure that your strongSwan version is 5 or newer.

To declare either one in the vpn_device_driver:

# Openswan
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.ipsec.OpenSwanDriver

# strongSwan
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver

Log files used by Networking

The corresponding log file of each Networking service is stored in the /var/log/neutron/ directory of the host on which each service runs.

Log files used by Networking services
Log file Service/interface
dhcp-agent.log neutron-dhcp-agent
l3-agent.log neutron-l3-agent
lbaas-agent.log neutron-lbaas-agent [1]
linuxbridge-agent.log neutron-linuxbridge-agent
metadata-agent.log neutron-metadata-agent
metering-agent.log neutron-metering-agent
openvswitch-agent.log neutron-openvswitch-agent
server.log neutron-server
[1]The neutron-lbaas-agent service only runs when Load-Balancer-as-a-Service is enabled.

Networking sample configuration files

The Networking service implements automatic generation of configuration files. This guide contains a snapshot of common configuration files for convenience. However, consider generating the latest configuration files by cloning the neutron repository and running the tools/generate_config_file_samples.sh script. Distribution packages should include sample configuration files for a particular release. Generally, these files reside in the /etc/neutron directory structure.

neutron.conf

The neutron.conf file contains the majority of Networking service options common to all components.

[DEFAULT]

#
# From neutron
#

# Where to store Neutron state files. This directory must be writable by the
# agent. (string value)
#state_path = /var/lib/neutron

# The host IP to bind to (string value)
#bind_host = 0.0.0.0

# The port to bind to (port value)
# Minimum value: 0
# Maximum value: 65535
#bind_port = 9696

# The path for API extensions. Note that this can be a colon-separated list of
# paths. For example: api_extensions_path =
# extensions:/path/to/more/exts:/even/more/exts. The __path__ of
# neutron.extensions is appended to this, so if your extensions are in there
# you don't need to specify them here. (string value)
#api_extensions_path =

# The type of authentication to use (string value)
#auth_strategy = keystone

# The core plugin Neutron will use (string value)
#core_plugin = <None>

# The service plugins Neutron will use (list value)
#service_plugins =

# The base MAC address Neutron will use for VIFs. The first 3 octets will
# remain unchanged. If the 4th octet is not 00, it will also be used. The
# others will be randomly generated. (string value)
#base_mac = fa:16:3e:00:00:00

# DEPRECATED: How many times Neutron will retry MAC generation. This option is
# now obsolete and so is deprecated to be removed in the Ocata release.
# (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#mac_generation_retries = 16

# Allow the usage of the bulk API (boolean value)
#allow_bulk = true

# DEPRECATED: Allow the usage of pagination. This option has been deprecated
# and pagination will now be enabled unconditionally. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#allow_pagination = true

# DEPRECATED: Allow the usage of sorting. This option has been deprecated and
# sorting will now be enabled unconditionally. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#allow_sorting = true

# The maximum number of items returned in a single response. A value of
# 'infinite' or a negative integer means no limit. (string value)
#pagination_max_limit = -1

# Default value of availability zone hints. The availability zone aware
# schedulers use this when the resources availability_zone_hints is empty.
# Multiple availability zones can be specified by a comma separated string.
# This value can be empty. In that case, even if availability_zone_hints for a
# resource is empty, availability zones are considered for high availability
# while scheduling the resource. (list value)
#default_availability_zones =

# Maximum number of DNS nameservers per subnet (integer value)
#max_dns_nameservers = 5

# Maximum number of host routes per subnet (integer value)
#max_subnet_host_routes = 20

# DEPRECATED: Maximum number of fixed ips per port. This option is deprecated
# and will be removed in the N release. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#max_fixed_ips_per_port = 5

# Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to
# True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable
# environment. Users making subnet creation requests for IPv6 subnets without
# providing a CIDR or subnetpool ID will be given a CIDR via the Prefix
# Delegation mechanism. Note that enabling PD will override the behavior of the
# default IPv6 subnetpool. (boolean value)
#ipv6_pd_enabled = false

# DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite
# lease times. (integer value)
# Deprecated group/name - [DEFAULT]/dhcp_lease_time
#dhcp_lease_duration = 86400

# Domain to use for building the hostnames (string value)
#dns_domain = openstacklocal

# Driver for external DNS integration. (string value)
#external_dns_driver = <None>

# Allow sending resource operation notification to DHCP agent (boolean value)
#dhcp_agent_notification = true

# Allow overlapping IP support in Neutron. Attention: the following parameter
# MUST be set to False if Neutron is being used in conjunction with Nova
# security groups. (boolean value)
#allow_overlapping_ips = false

# Hostname to be used by the Neutron server, agents and services running on
# this machine. All the agents and services running on this machine must use
# the same host value. (string value)
#host = example.domain

# Send notification to nova when port status changes (boolean value)
#notify_nova_on_port_status_changes = true

# Send notification to nova when port data (fixed_ips/floatingip) changes so
# nova can update its cache. (boolean value)
#notify_nova_on_port_data_changes = true

# Number of seconds between sending events to nova if there are any events to
# send. (integer value)
#send_events_interval = 2

# DEPRECATED: If True, advertise network MTU values if core plugin calculates
# them. MTU is advertised to running instances via DHCP and RA MTU options.
# (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#advertise_mtu = true

# Neutron IPAM (IP address management) driver to use. By default, the reference
# implementation of the Neutron IPAM driver is used. (string value)
#ipam_driver = internal

# If True, then allow plugins that support it to create VLAN transparent
# networks. (boolean value)
#vlan_transparent = false

# This will choose the web framework in which to run the Neutron API server.
# 'pecan' is a new experimental rewrite of the API server. (string value)
# Allowed values: legacy, pecan
#web_framework = legacy

# MTU of the underlying physical network. Neutron uses this value to calculate
# MTU for all virtual network components. For flat and VLAN networks, neutron
# uses this value without modification. For overlay networks such as VXLAN,
# neutron automatically subtracts the overlay protocol overhead from this
# value. Defaults to 1500, the standard value for Ethernet. (integer value)
# Deprecated group/name - [ml2]/segment_mtu
#global_physnet_mtu = 1500

# Number of backlog requests to configure the socket with (integer value)
#backlog = 4096

# Number of seconds to keep retrying to listen (integer value)
#retry_until_window = 30

# Enable SSL on the API server (boolean value)
#use_ssl = false

# Seconds between running periodic tasks. (integer value)
#periodic_interval = 40

# Number of separate API worker processes for service. If not specified, the
# default is equal to the number of CPUs available for best performance.
# (integer value)
#api_workers = <None>

# Number of RPC worker processes for service. (integer value)
#rpc_workers = 1

# Number of RPC worker processes dedicated to state reports queue. (integer
# value)
#rpc_state_report_workers = 1

# Range of seconds to randomly delay when starting the periodic task scheduler
# to reduce stampeding. (Disable by setting to 0) (integer value)
#periodic_fuzzy_delay = 5

#
# From neutron.agent
#

# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>

# Location for Metadata Proxy UNIX domain socket. (string value)
#metadata_proxy_socket = $state_path/metadata_proxy

# User (uid or name) running metadata proxy after its initialization (if empty:
# agent effective user). (string value)
#metadata_proxy_user =

# Group (gid or name) running metadata proxy after its initialization (if
# empty: agent effective group). (string value)
#metadata_proxy_group =

# Enable/Disable log watch by metadata proxy. It should be disabled when
# metadata_proxy_user/group is not allowed to read/write its log file, and the
# copytruncate logrotate option must be used if logrotate is enabled on
# metadata proxy log files. The option's default value is deduced from
# metadata_proxy_user: watch log is enabled if metadata_proxy_user is the
# agent effective user id/name. (boolean value)
#metadata_proxy_watch_log = <None>

#
# From neutron.db
#

# Seconds after which the agent is considered down; should be at least twice
# report_interval, to be sure the agent is down for good. (integer value)
#agent_down_time = 75

# Represents the resource type whose load is being reported by the agent. This
# can be "networks", "subnets" or "ports". When specified (default is
# networks), the server will extract the particular load sent as part of its
# agent configuration object from the agent report state, which is the number
# of resources being consumed, at every report_interval. dhcp_load_type can be
# used in combination with network_scheduler_driver =
# neutron.scheduler.dhcp_agent_scheduler.WeightScheduler. When the
# network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured
# to represent the choice for the resource being balanced. Example:
# dhcp_load_type=networks (string value)
# Allowed values: networks, subnets, ports
#dhcp_load_type = networks

# Agent starts with admin_state_up=False when enable_new_agents=False. In that
# case, users' resources will not be scheduled automatically to the agent
# until the admin changes admin_state_up to True. (boolean value)
#enable_new_agents = true

# Maximum number of routes per router (integer value)
#max_routes = 30

# Define the default value of enable_snat if not provided in
# external_gateway_info. (boolean value)
#enable_snat_by_default = true

# Driver to use for scheduling network to DHCP agent (string value)
#network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler

# Allow auto scheduling networks to DHCP agent. (boolean value)
#network_auto_schedule = true

# Automatically remove networks from offline DHCP agents. (boolean value)
#allow_automatic_dhcp_failover = true

# Number of DHCP agents scheduled to host a tenant network. If this number is
# greater than 1, the scheduler automatically assigns multiple DHCP agents for
# a given tenant network, providing high availability for DHCP service.
# (integer value)
#dhcp_agents_per_network = 1

# Enable services on an agent with admin_state_up False. If this option is
# False, when admin_state_up of an agent is turned False, services on it will
# be disabled. Agents with admin_state_up False are not selected for automatic
# scheduling regardless of this option. But manual scheduling to such agents is
# available if this option is True. (boolean value)
#enable_services_on_agents_with_admin_state_down = false

# The base MAC address used for unique DVR instances by Neutron. The first 3
# octets will remain unchanged. If the 4th octet is not 00, it will also be
# used. The others will be randomly generated. The 'dvr_base_mac' *must* be
# different from 'base_mac' to avoid mixing them up with MACs allocated for
# tenant ports. A 4-octet example would be dvr_base_mac = fa:16:3f:4f:00:00.
# The default uses 3 octets. (string value)
#dvr_base_mac = fa:16:3f:00:00:00

# System-wide flag to determine the type of router that tenants can create.
# Only admin can override. (boolean value)
#router_distributed = false

# Driver to use for scheduling router to a default L3 agent (string value)
#router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler

# Allow auto scheduling of routers to L3 agent. (boolean value)
#router_auto_schedule = true

# Automatically reschedule routers from offline L3 agents to online L3 agents.
# (boolean value)
#allow_automatic_l3agent_failover = false

# Enable HA mode for virtual routers. (boolean value)
#l3_ha = false

# Maximum number of L3 agents on which an HA router will be scheduled. If it
# is set to 0, the router will be scheduled on every agent. (integer value)
#max_l3_agents_per_router = 3

# DEPRECATED: Minimum number of L3 agents that have to be available in order to
# allow a new HA router to be scheduled. This option is deprecated in the
# Newton release and will be removed for the Ocata release where the scheduling
# of new HA routers will always be allowed. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#min_l3_agents_per_router = 2

# Subnet used for the l3 HA admin network. (string value)
#l3_ha_net_cidr = 169.254.192.0/18

# The network type to use when creating the HA network for an HA router. By
# default or if empty, the first 'tenant_network_types' is used. This is
# helpful when the VRRP traffic should use a specific network which is not the
# default one. (string value)
#l3_ha_network_type =

# The physical network name with which the HA network can be created. (string
# value)
#l3_ha_network_physical_name =

#
# From neutron.extensions
#

# Maximum number of allowed address pairs (integer value)
#max_allowed_address_pair = 10

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler opens a new log file at the specified path
# instantaneously. It makes sense only if the log_file option is specified and
# the platform is Linux. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means not
# keeping a queue when the server side disconnects. False means keeping the
# queue and messages even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false

# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = neutron

#
# From oslo.service.wsgi
#

# File name for the paste.deploy config for api service (string value)
#api_paste_config = api-paste.ini

# A Python format string that is used as the template to generate log lines.
# The following values can be formatted into it: client_ip, date_time,
# request_line, status_code, body_length, wall_seconds. (string value)
#wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f

# Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not
# supported on OS X. (integer value)
#tcp_keepidle = 600

# Size of the pool of greenthreads used by wsgi (integer value)
#wsgi_default_pool_size = 100

# Maximum line size of message headers to be accepted. max_header_line may need
# to be increased when using large tokens (typically those generated when
# keystone is configured to use PKI tokens with big service catalogs). (integer
# value)
#max_header_line = 16384

# If False, closes the client socket connection explicitly. (boolean value)
#wsgi_keep_alive = true

# Timeout for client connections' socket operations. If an incoming connection
# is idle for this number of seconds it will be closed. A value of '0' means
# wait forever. (integer value)
#client_socket_timeout = 900


[agent]

#
# From neutron.agent
#

# Root helper application. Use 'sudo neutron-rootwrap
# /etc/neutron/rootwrap.conf' to use the real root filter facility. Change to
# 'sudo' to skip the filtering and just run the command directly. (string
# value)
#root_helper = sudo

# Use the root helper when listing the namespaces on a system. This may not be
# required depending on the security configuration. If the root helper is not
# required, set this to False for a performance improvement. (boolean value)
#use_helper_for_ns_read = true

# Root helper daemon application to use when possible. (string value)
#root_helper_daemon = <None>

# Seconds between nodes reporting state to the server; should be less than
# agent_down_time, best if it is half of agent_down_time or less. (floating
# point value)
#report_interval = 30

# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false

# Add comments to iptables rules. Set to false to disallow the addition of
# comments to generated iptables rules that describe each rule's purpose.
# System must support the iptables comments module for addition of comments.
# (boolean value)
#comment_iptables_rules = true

# Duplicate every iptables difference calculation to ensure the format being
# generated matches the format of iptables-save. This option should not be
# turned on for production systems because it imposes a performance penalty.
# (boolean value)
#debug_iptables_rules = false

# Action to be executed when a child process dies (string value)
# Allowed values: respawn, exit
#check_child_processes_action = respawn

# Interval between checks of child process liveness (seconds), use 0 to disable
# (integer value)
#check_child_processes_interval = 60

# Availability zone of this node (string value)
#availability_zone = nova


[cors]

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID


[cors.subdomain]

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID


[database]

#
# From neutron.db
#

# Database engine for which the script will be generated when using offline
# migration. (string value)
#engine =

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. (string
# value)
#auth_uri = <None>

# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>

# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>

# How many times to retry reconnecting when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3

# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>

# Required if identity server requires client certificate (string value)
#certfile = <None>

# Required if identity server requires client certificate (string value)
#keyfile = <None>

# A PEM encoded Certificate Authority to use when verifying HTTPS connections.
# Defaults to system CAs. (string value)
#cafile = <None>

# Set to true to disable verification of HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name = <None>

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>

# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>

# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance. Only
# valid for PKI tokens. (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None

# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
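
# Illustrative example (not part of the generated sample): to share an
# authenticated, encrypted token cache across workers, the memcache options
# above might be combined as follows. The server address and secret key are
# placeholders.
#
#memcached_servers = 192.0.2.10:11211
#memcache_security_strategy = ENCRYPT
#memcache_secret_key = REPLACE_WITH_RANDOM_SECRET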

# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to:
# "disabled" to not check token binding; "permissive" (default) to validate
# binding information if the bind type is of a form known to the server, and
# ignore it if not; "strict", like "permissive" but the token is rejected if
# the bind type is unknown; "required", where any form of token binding is
# required for the token to be allowed; or the name of a binding method that
# must be present in tokens. (string value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
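
# Illustrative example (not part of the generated sample): while migrating
# from md5 to sha256 token hashes, both algorithms could be listed with the
# preferred one first; once all old tokens have expired, return to a single
# value:
#
#hash_algorithms = sha256,md5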

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000


[nova]

#
# From neutron
#

# Name of nova region to use. Useful if keystone manages more than one region.
# (string value)
#region_name = <None>

# Type of the nova endpoint to use.  This endpoint will be looked up in the
# keystone catalog and should be one of public, internal or admin. (string
# value)
# Allowed values: public, admin, internal
#endpoint_type = public

#
# From nova.auth
#

# Authentication URL (string value)
#auth_url = <None>

# Authentication type to load (string value)
# Deprecated group/name - [nova]/auth_plugin
#auth_type = <None>

# PEM encoded Certificate Authority to use when verifying HTTPS connections.
# (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# User's password (string value)
#password = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Project ID to scope to (string value)
# Deprecated group/name - [nova]/tenant-id
#project_id = <None>

# Project name to scope to (string value)
# Deprecated group/name - [nova]/tenant-name
#project_name = <None>

# Tenant ID (string value)
#tenant_id = <None>

# Tenant Name (string value)
#tenant_name = <None>

# Timeout value for http requests (integer value)
#timeout = <None>

# Trust ID (string value)
#trust_id = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User id (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [nova]/user-name
#username = <None>


[oslo_concurrency]

#
# From oslo.concurrency
#

# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false

# Directory to use for lock files.  For security, the specified directory
# should only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false

# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =

# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>

# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =

# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =

# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The deadline for an rpc reply message delivery. Only used when caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic

# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
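
# Illustrative example (not part of the generated sample): with
# [DEFAULT]/control_exchange = neutron and both exchange options above unset,
# notification and RPC addresses resolve to exchange "neutron"; uncommenting
# an option overrides that for the corresponding message type, e.g.:
#
#default_rpc_exchange = neutron_rpc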

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false

# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
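
# Illustrative example (not part of the generated sample): the deprecated
# rabbit_host, rabbit_port, rabbit_userid, rabbit_password, and
# rabbit_virtual_host options above are replaced by a single setting in the
# [DEFAULT] section, for example:
#
#   [DEFAULT]
#   transport_url = rabbit://guest:guest@localhost:5672/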

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to back off between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if
# heartbeat keep-alives fail (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60

# How many times per heartbeat_timeout_threshold interval the heartbeat is
# checked. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max = <None>

# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>

# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3

# Enable SSL (boolean value)
#ssl = <None>

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>

# Set the socket timeout in seconds for the connection's socket. (floating
# point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for the connection's socket. (floating
# point value)
#tcp_user_timeout = 0.25

# Set the delay for reconnecting to a host that has a connection error.
# (floating point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single

# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Maximum number of unacknowledged messages that RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problems during sending a
# notification; -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problems during sending a
# notification message. (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100

# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problems during sending a
# reply; -1 means infinite retry during rpc_timeout. (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problems during sending a
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problems during sending an
# RPC message; -1 means infinite retry. If the actual number of retry attempts
# is not 0, the RPC request could be processed more than once. (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problems during sending an
# RPC message. (floating point value)
#rpc_retry_delay = 0.25


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about an existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode for the zmq socket. True means do
# not keep a queue when the server side disconnects. False means keep the
# queue and messages even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched.  Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d


[qos]

#
# From neutron.qos
#

# List of drivers to use to send the update notification. (list value)
#notification_drivers = message_queue


[quotas]

#
# From neutron
#

# Default number of resources allowed per tenant. A negative value means
# unlimited. (integer value)
#default_quota = -1

# Number of networks allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_network = 10

# Number of subnets allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_subnet = 10

# Number of ports allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_port = 50

# Default driver to use for quota checks. (string value)
#quota_driver = neutron.db.quota.driver.DbQuotaDriver

# Keep track of current resource quota usage in the database. Plugins that do
# not leverage the neutron database should set this flag to False. (boolean
# value)
#track_quota_usage = true

#
# From neutron.extensions
#

# Number of routers allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_router = 10

# Number of floating IPs allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_floatingip = 50

# Number of security groups allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_security_group = 10

# Number of security rules allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_security_group_rule = 100


[ssl]

#
# From oslo.service.sslutils
#

# CA certificate file to use to verify connecting clients. (string value)
# Deprecated group/name - [DEFAULT]/ssl_ca_file
#ca_file = <None>

# Certificate file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_cert_file
#cert_file = <None>

# Private key file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_key_file
#key_file = <None>

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
#version = <None>

# Sets the list of available ciphers. The value should be a string in the
# OpenSSL cipher list format. (string value)
#ciphers = <None>
api-paste.ini

The api-paste.ini file contains the Web Server Gateway Interface (WSGI) pipeline configuration.

[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions_composite
/v2.0: neutronapi_v2_0

[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = cors request_id catch_errors extensions neutronapiapp_v2_0
keystone = cors request_id catch_errors authtoken keystonecontext extensions neutronapiapp_v2_0

[composite:neutronversions_composite]
use = call:neutron.auth:pipeline_factory
noauth = cors neutronversions
keystone = cors neutronversions

[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory

[filter:catch_errors]
paste.filter_factory = oslo_middleware:CatchErrors.factory

[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = neutron

[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory

[app:neutronversions]
paste.app_factory = neutron.api.versions:Versions.factory

[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory

[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
policy.json

The policy.json file defines the API access policy.
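
Each value is a rule expression: ``role:`` and attribute checks (with ``%(...)s`` substituted from the target object) combined with ``rule:`` references and ``or``. As a rough illustration of these semantics (a simplified sketch, not the oslo.policy implementation; the rule names and credentials below are hypothetical):

```python
# Simplified, illustrative evaluator for the rule grammar used above.
# It only handles the "role:...", "tenant_id:%(tenant_id)s", and
# "rule:... or rule:..." forms shown in the sample.

RULES = {
    "context_is_admin": "role:admin",
    "owner": "tenant_id:%(tenant_id)s",
    "admin_or_owner": "rule:context_is_admin or rule:owner",
}

def check(rule_name, creds, target):
    """Return True if the named rule passes for the given credentials."""
    expr = RULES[rule_name]
    # "or" joins alternatives; any passing clause satisfies the rule.
    for clause in expr.split(" or "):
        kind, _, value = clause.partition(":")
        if kind == "role" and value in creds.get("roles", []):
            return True
        if kind == "rule" and check(value, creds, target):
            return True
        # Attribute check: substitute target fields into the pattern.
        if kind == "tenant_id" and creds.get("tenant_id") == value % target:
            return True
    return False

creds = {"roles": ["member"], "tenant_id": "t1"}
print(check("admin_or_owner", creds, {"tenant_id": "t1"}))  # True: owner matches
print(check("admin_or_owner", creds, {"tenant_id": "t2"}))  # False
```
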

{
    "context_is_admin":  "role:admin",
    "owner": "tenant_id:%(tenant_id)s",
    "admin_or_owner": "rule:context_is_admin or rule:owner",
    "context_is_advsvc":  "role:advsvc",
    "admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s",
    "admin_owner_or_network_owner": "rule:owner or rule:admin_or_network_owner",
    "admin_only": "rule:context_is_admin",
    "regular_user": "",
    "shared": "field:networks:shared=True",
    "shared_subnetpools": "field:subnetpools:shared=True",
    "shared_address_scopes": "field:address_scopes:shared=True",
    "external": "field:networks:router:external=True",
    "default": "rule:admin_or_owner",

    "create_subnet": "rule:admin_or_network_owner",
    "create_subnet:segment_id": "rule:admin_only",
    "create_subnet:service_types": "rule:admin_only",
    "get_subnet": "rule:admin_or_owner or rule:shared",
    "get_subnet:segment_id": "rule:admin_only",
    "update_subnet": "rule:admin_or_network_owner",
    "update_subnet:service_types": "rule:admin_only",
    "delete_subnet": "rule:admin_or_network_owner",

    "create_subnetpool": "",
    "create_subnetpool:shared": "rule:admin_only",
    "create_subnetpool:is_default": "rule:admin_only",
    "get_subnetpool": "rule:admin_or_owner or rule:shared_subnetpools",
    "update_subnetpool": "rule:admin_or_owner",
    "update_subnetpool:is_default": "rule:admin_only",
    "delete_subnetpool": "rule:admin_or_owner",

    "create_address_scope": "",
    "create_address_scope:shared": "rule:admin_only",
    "get_address_scope": "rule:admin_or_owner or rule:shared_address_scopes",
    "update_address_scope": "rule:admin_or_owner",
    "update_address_scope:shared": "rule:admin_only",
    "delete_address_scope": "rule:admin_or_owner",

    "create_network": "",
    "get_network": "rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc",
    "get_network:router:external": "rule:regular_user",
    "get_network:segments": "rule:admin_only",
    "get_network:provider:network_type": "rule:admin_only",
    "get_network:provider:physical_network": "rule:admin_only",
    "get_network:provider:segmentation_id": "rule:admin_only",
    "get_network:queue_id": "rule:admin_only",
    "get_network_ip_availabilities": "rule:admin_only",
    "get_network_ip_availability": "rule:admin_only",
    "create_network:shared": "rule:admin_only",
    "create_network:router:external": "rule:admin_only",
    "create_network:is_default": "rule:admin_only",
    "create_network:segments": "rule:admin_only",
    "create_network:provider:network_type": "rule:admin_only",
    "create_network:provider:physical_network": "rule:admin_only",
    "create_network:provider:segmentation_id": "rule:admin_only",
    "update_network": "rule:admin_or_owner",
    "update_network:segments": "rule:admin_only",
    "update_network:shared": "rule:admin_only",
    "update_network:provider:network_type": "rule:admin_only",
    "update_network:provider:physical_network": "rule:admin_only",
    "update_network:provider:segmentation_id": "rule:admin_only",
    "update_network:router:external": "rule:admin_only",
    "delete_network": "rule:admin_or_owner",

    "create_segment": "rule:admin_only",
    "get_segment": "rule:admin_only",
    "update_segment": "rule:admin_only",
    "delete_segment": "rule:admin_only",

    "network_device": "field:port:device_owner=~^network:",
    "create_port": "",
    "create_port:device_owner": "not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner",
    "create_port:mac_address": "rule:context_is_advsvc or rule:admin_or_network_owner",
    "create_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",
    "create_port:port_security_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
    "create_port:binding:host_id": "rule:admin_only",
    "create_port:binding:profile": "rule:admin_only",
    "create_port:mac_learning_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
    "create_port:allowed_address_pairs": "rule:admin_or_network_owner",
    "get_port": "rule:context_is_advsvc or rule:admin_owner_or_network_owner",
    "get_port:queue_id": "rule:admin_only",
    "get_port:binding:vif_type": "rule:admin_only",
    "get_port:binding:vif_details": "rule:admin_only",
    "get_port:binding:host_id": "rule:admin_only",
    "get_port:binding:profile": "rule:admin_only",
    "update_port": "rule:admin_or_owner or rule:context_is_advsvc",
    "update_port:device_owner": "not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner",
    "update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
    "update_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",
    "update_port:port_security_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
    "update_port:binding:host_id": "rule:admin_only",
    "update_port:binding:profile": "rule:admin_only",
    "update_port:mac_learning_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
    "update_port:allowed_address_pairs": "rule:admin_or_network_owner",
    "delete_port": "rule:context_is_advsvc or rule:admin_owner_or_network_owner",

    "get_router:ha": "rule:admin_only",
    "create_router": "rule:regular_user",
    "create_router:external_gateway_info:enable_snat": "rule:admin_only",
    "create_router:distributed": "rule:admin_only",
    "create_router:ha": "rule:admin_only",
    "get_router": "rule:admin_or_owner",
    "get_router:distributed": "rule:admin_only",
    "update_router:external_gateway_info:enable_snat": "rule:admin_only",
    "update_router:distributed": "rule:admin_only",
    "update_router:ha": "rule:admin_only",
    "delete_router": "rule:admin_or_owner",

    "add_router_interface": "rule:admin_or_owner",
    "remove_router_interface": "rule:admin_or_owner",

    "create_router:external_gateway_info:external_fixed_ips": "rule:admin_only",
    "update_router:external_gateway_info:external_fixed_ips": "rule:admin_only",

    "insert_rule": "rule:admin_or_owner",
    "remove_rule": "rule:admin_or_owner",

    "create_qos_queue": "rule:admin_only",
    "get_qos_queue": "rule:admin_only",

    "update_agent": "rule:admin_only",
    "delete_agent": "rule:admin_only",
    "get_agent": "rule:admin_only",

    "create_dhcp-network": "rule:admin_only",
    "delete_dhcp-network": "rule:admin_only",
    "get_dhcp-networks": "rule:admin_only",
    "create_l3-router": "rule:admin_only",
    "delete_l3-router": "rule:admin_only",
    "get_l3-routers": "rule:admin_only",
    "get_dhcp-agents": "rule:admin_only",
    "get_l3-agents": "rule:admin_only",
    "get_loadbalancer-agent": "rule:admin_only",
    "get_loadbalancer-pools": "rule:admin_only",
    "get_agent-loadbalancers": "rule:admin_only",
    "get_loadbalancer-hosting-agent": "rule:admin_only",

    "create_floatingip": "rule:regular_user",
    "create_floatingip:floating_ip_address": "rule:admin_only",
    "update_floatingip": "rule:admin_or_owner",
    "delete_floatingip": "rule:admin_or_owner",
    "get_floatingip": "rule:admin_or_owner",

    "create_network_profile": "rule:admin_only",
    "update_network_profile": "rule:admin_only",
    "delete_network_profile": "rule:admin_only",
    "get_network_profiles": "",
    "get_network_profile": "",
    "update_policy_profiles": "rule:admin_only",
    "get_policy_profiles": "",
    "get_policy_profile": "",

    "create_metering_label": "rule:admin_only",
    "delete_metering_label": "rule:admin_only",
    "get_metering_label": "rule:admin_only",

    "create_metering_label_rule": "rule:admin_only",
    "delete_metering_label_rule": "rule:admin_only",
    "get_metering_label_rule": "rule:admin_only",

    "get_service_provider": "rule:regular_user",
    "get_lsn": "rule:admin_only",
    "create_lsn": "rule:admin_only",

    "create_flavor": "rule:admin_only",
    "update_flavor": "rule:admin_only",
    "delete_flavor": "rule:admin_only",
    "get_flavors": "rule:regular_user",
    "get_flavor": "rule:regular_user",
    "create_service_profile": "rule:admin_only",
    "update_service_profile": "rule:admin_only",
    "delete_service_profile": "rule:admin_only",
    "get_service_profiles": "rule:admin_only",
    "get_service_profile": "rule:admin_only",

    "get_policy": "rule:regular_user",
    "create_policy": "rule:admin_only",
    "update_policy": "rule:admin_only",
    "delete_policy": "rule:admin_only",
    "get_policy_bandwidth_limit_rule": "rule:regular_user",
    "create_policy_bandwidth_limit_rule": "rule:admin_only",
    "delete_policy_bandwidth_limit_rule": "rule:admin_only",
    "update_policy_bandwidth_limit_rule": "rule:admin_only",
    "get_policy_dscp_marking_rule": "rule:regular_user",
    "create_policy_dscp_marking_rule": "rule:admin_only",
    "delete_policy_dscp_marking_rule": "rule:admin_only",
    "update_policy_dscp_marking_rule": "rule:admin_only",
    "get_rule_type": "rule:regular_user",
    "get_policy_minimum_bandwidth_rule": "rule:regular_user",
    "create_policy_minimum_bandwidth_rule": "rule:admin_only",
    "delete_policy_minimum_bandwidth_rule": "rule:admin_only",
    "update_policy_minimum_bandwidth_rule": "rule:admin_only",

    "restrict_wildcard": "(not field:rbac_policy:target_tenant=*) or rule:admin_only",
    "create_rbac_policy": "",
    "create_rbac_policy:target_tenant": "rule:restrict_wildcard",
    "update_rbac_policy": "rule:admin_or_owner",
    "update_rbac_policy:target_tenant": "rule:restrict_wildcard and rule:admin_or_owner",
    "get_rbac_policy": "rule:admin_or_owner",
    "delete_rbac_policy": "rule:admin_or_owner",

    "create_flavor_service_profile": "rule:admin_only",
    "delete_flavor_service_profile": "rule:admin_only",
    "get_flavor_service_profile": "rule:regular_user",
    "get_auto_allocated_topology": "rule:admin_or_owner",

    "create_trunk": "rule:regular_user",
    "get_trunk": "rule:admin_or_owner",
    "delete_trunk": "rule:admin_or_owner",
    "get_subports": "",
    "add_subports": "rule:admin_or_owner",
    "remove_subports": "rule:admin_or_owner"
}
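Rule expressions compose with or and and, reference other named rules with rule:<name>, check the requester's roles with role:<name>, and compare request credentials against the target object with attribute checks such as tenant_id:%(tenant_id)s. The sketch below is a simplified illustration of that evaluation, not the real oslo.policy engine (it omits not, parentheses, and field: checks); the check function and the sample rule set are written here only to show how a rule such as admin_or_owner resolves:

```python
# Simplified sketch of policy rule evaluation. The real engine is
# oslo.policy, which also supports "not", parentheses, and field: checks.

def check(expr, rules, creds, target):
    """Evaluate a policy expression against credentials and a target."""
    # "and" binds tighter than "or", so split on "or" first.
    if " or " in expr:
        return any(check(p, rules, creds, target) for p in expr.split(" or "))
    if " and " in expr:
        return all(check(p, rules, creds, target) for p in expr.split(" and "))
    if expr == "":                    # empty rule: always allowed
        return True
    kind, _, match = expr.partition(":")
    if kind == "rule":                # reference to another named rule
        return check(rules[match], rules, creds, target)
    if kind == "role":                # role held by the requester
        return match in creds.get("roles", [])
    # attribute check, e.g. "tenant_id:%(tenant_id)s"
    return creds.get(kind) == (match % target)

rules = {
    "context_is_admin": "role:admin",
    "owner": "tenant_id:%(tenant_id)s",
    "admin_or_owner": "rule:context_is_admin or rule:owner",
}

creds = {"roles": ["member"], "tenant_id": "abc123"}
print(check(rules["admin_or_owner"], rules, creds, {"tenant_id": "abc123"}))  # True
print(check(rules["admin_or_owner"], rules, creds, {"tenant_id": "other"}))   # False
```

A request therefore passes admin_or_owner either because the credentials carry the admin role or because the tenant_id in the credentials matches the tenant_id of the target resource.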
rootwrap.conf

The rootwrap.conf file contains configuration for system utilities that require privilege escalation to execute.

# Configuration for neutron-rootwrap
# This file should be owned by (and only-writeable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR

[xenapi]
# XenAPI configuration is only required by the L2 agent if it is to
# target a XenServer/XCP compute host's dom0.
xenapi_connection_url=<None>
xenapi_connection_username=root
xenapi_connection_password=<None>
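The directories listed in filters_path contain filter definition files that whitelist the individual commands an agent may run as root. As a hedged illustration of the oslo.rootwrap filter file format (the entries below are examples, not a complete Neutron filter set):

```ini
[Filters]
# <alias>: <filter class>, <executable>, <run-as user>
ip: IpFilter, ip, root
dnsmasq: CommandFilter, dnsmasq, root
```

An agent requests escalation by invoking a command through the wrapper, for example sudo neutron-rootwrap /etc/neutron/rootwrap.conf ip link show; the command runs only if it matches one of the loaded filters.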
Reference architecture plug-ins and agents

Although the Networking service supports other plug-ins and agents, this guide contains configuration files only for the following reference architecture components:

  • ML2 plug-in
  • Layer-2 agents
    • Open vSwitch (OVS)
    • Linux bridge
    • Single-root I/O virtualization (SR-IOV)
  • DHCP agent
  • Layer-3 (routing) agent
  • Metadata agent
  • Metering agent
ml2_conf.ini

The plugins/ml2/ml2_conf.ini file contains configuration for the ML2 plug-in.

[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file  paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[ml2]

#
# From neutron.ml2
#

# List of network type driver entrypoints to be loaded from the
# neutron.ml2.type_drivers namespace. (list value)
#type_drivers = local,flat,vlan,gre,vxlan,geneve

# Ordered list of network_types to allocate as tenant networks. The default
# value 'local' is useful for single-box testing but provides no connectivity
# between hosts. (list value)
#tenant_network_types = local

# An ordered list of networking mechanism driver entrypoints to be loaded from
# the neutron.ml2.mechanism_drivers namespace. (list value)
#mechanism_drivers =

# An ordered list of extension driver entrypoints to be loaded from the
# neutron.ml2.extension_drivers namespace. For example: extension_drivers =
# port_security,qos (list value)
#extension_drivers =

# Maximum size of an IP packet (MTU) that can traverse the underlying physical
# network infrastructure without fragmentation when using an overlay/tunnel
# protocol. This option allows specifying a physical network MTU value that
# differs from the default global_physnet_mtu value. (integer value)
#path_mtu = 0

# A list of mappings of physical networks to MTU values. The format of the
# mapping is <physnet>:<mtu val>. This mapping allows specifying a physical
# network MTU value that differs from the default global_physnet_mtu value.
# (list value)
#physical_network_mtus =

# Default network type for external networks when no provider attributes are
# specified. By default it is None, which means that if provider attributes are
# not specified while creating external networks then they will have the same
# type as tenant networks. Allowed values for external_network_type config
# option depend on the network type values configured in type_drivers config
# option. (string value)
#external_network_type = <None>

# IP version of all overlay (tunnel) network endpoints. Use a value of 4 for
# IPv4 or 6 for IPv6. (integer value)
#overlay_ip_version = 4


[ml2_type_flat]

#
# From neutron.ml2
#

# List of physical_network names with which flat networks can be created. Use
# default '*' to allow flat networks with arbitrary physical_network names. Use
# an empty list to disable flat networks. (list value)
#flat_networks = *


[ml2_type_geneve]

#
# From neutron.ml2
#

# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of
# Geneve VNI IDs that are available for tenant network allocation (list value)
#vni_ranges =

# Geneve encapsulation header size is dynamic, this value is used to calculate
# the maximum MTU for the driver. This is the sum of the sizes of the outer ETH
# + IP + UDP + GENEVE header sizes. The default size for this field is 50,
# which is the size of the Geneve header without any additional option headers.
# (integer value)
#max_header_size = 30


[ml2_type_gre]

#
# From neutron.ml2
#

# Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE
# tunnel IDs that are available for tenant network allocation (list value)
#tunnel_id_ranges =


[ml2_type_vlan]

#
# From neutron.ml2
#

# List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network>
# specifying physical_network names usable for VLAN provider and tenant
# networks, as well as ranges of VLAN tags on each available for allocation to
# tenant networks. (list value)
#network_vlan_ranges =


[ml2_type_vxlan]

#
# From neutron.ml2
#

# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of
# VXLAN VNI IDs that are available for tenant network allocation (list value)
#vni_ranges =

# Multicast group for VXLAN. When configured, will enable sending all broadcast
# traffic to this multicast group. When left unconfigured, will disable
# multicast VXLAN mode. (string value)
#vxlan_group = <None>


[securitygroup]

#
# From neutron.ml2
#

# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>

# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true

# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true
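To make the commented reference above concrete, the following is one possible minimal ML2 configuration for VXLAN self-service networks over a flat provider network. It is an illustrative sketch, not a canonical deployment; in particular, the physical network name provider and the VNI range are assumptions you would adapt to your environment:

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

The network_vlan_ranges, path_mtu, and other options keep their defaults here; set them only when your topology requires it.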
ml2_conf_sriov.ini

The plugins/ml2/ml2_conf_sriov.ini file contains configuration for the ML2 plug-in specific to SR-IOV.

[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file  paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[ml2_sriov]

#
# From neutron.ml2.sriov
#

# DEPRECATED: Comma-separated list of supported PCI vendor devices, as defined
# by vendor_id:product_id according to the PCI ID Repository. The default value
# None accepts all PCI vendor devices. This option is deprecated in the Newton
# release and will be removed in the Ocata release; starting from Ocata the
# mechanism driver accepts all PCI vendor devices. (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#supported_pci_vendor_devs = <None>
linuxbridge_agent.ini

The plugins/ml2/linuxbridge_agent.ini file contains configuration for the Linux bridge layer-2 agent.

[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file  paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[agent]

#
# From neutron.ml2.linuxbridge.agent
#

# The number of seconds the agent will wait between polling for local device
# changes. (integer value)
#polling_interval = 2

# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
# value is set to 0, rpc timeout won't be changed (integer value)
#quitting_rpc_timeout = 10

# DEPRECATED: Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate. Note: This
# prevents the VMs attached to this agent from spoofing, it doesn't protect
# them from other devices which have the capability to spoof (e.g. bare metal
# or VMs attached to agents without this flag set to True). Spoofing rules will
# not be added to any ports that have port security disabled. For LinuxBridge,
# this requires ebtables. For OVS, it requires a version that supports matching
# ARP headers. This option will be removed in Ocata so the only way to disable
# protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true

# Extensions list to use (list value)
#extensions =


[linux_bridge]

#
# From neutron.ml2.linuxbridge.agent
#

# Comma-separated list of <physical_network>:<physical_interface> tuples
# mapping physical network names to the agent's node-specific physical network
# interfaces to be used for flat and VLAN networks. All physical networks
# listed in network_vlan_ranges on the server should have mappings to
# appropriate interfaces on each agent. (list value)
#physical_interface_mappings =

# List of <physical_network>:<physical_bridge> (list value)
#bridge_mappings =


[securitygroup]

#
# From neutron.ml2.linuxbridge.agent
#

# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>

# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true

# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true


[vxlan]

#
# From neutron.ml2.linuxbridge.agent
#

# Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin
# using linuxbridge mechanism driver (boolean value)
#enable_vxlan = true

# TTL for vxlan interface protocol packets. (integer value)
#ttl = <None>

# TOS for vxlan interface protocol packets. (integer value)
#tos = <None>

# Multicast group(s) for vxlan interface. A range of group addresses may be
# specified by using CIDR notation. Specifying a range allows different VNIs to
# use different group addresses, reducing or eliminating spurious broadcast
# traffic to the tunnel endpoints. To reserve a unique group for each possible
# (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on
# all the agents. (string value)
#vxlan_group = 224.0.0.1

# IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or
# IPv6 address that resides on one of the host network interfaces. The IP
# version of this value must match the value of the 'overlay_ip_version' option
# in the ML2 plug-in configuration file on the neutron server node(s). (IP
# address value)
#local_ip = <None>

# Extension to use alongside ml2 plugin's l2population mechanism driver. It
# enables the plugin to populate VXLAN forwarding table. (boolean value)
#l2_population = false

# Enable local ARP responder which provides local responses instead of
# performing ARP broadcast into the overlay. Enabling local ARP responder is
# not fully compatible with the allowed-address-pairs extension. (boolean
# value)
#arp_responder = false
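As an illustration only, a minimal linuxbridge_agent.ini for a node serving both a flat/VLAN provider network and VXLAN overlays might combine the options above as follows. The physical network name `provider`, the interface `eth1`, and the address `10.0.0.31` are placeholders; substitute values for your environment:

```ini
[linux_bridge]
# Map the "provider" physical network to this node's eth1 interface
physical_interface_mappings = provider:eth1

[vxlan]
enable_vxlan = true
# Overlay endpoint address; must be an IP configured on this host, and its IP
# version must match overlay_ip_version on the neutron server
local_ip = 10.0.0.31
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```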

sriov_agent.ini

The plugins/ml2/sriov_agent.ini file contains configuration for the SR-IOV layer-2 agent.

[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[agent]

#
# From neutron.ml2.sriov.agent
#

# Extensions list to use (list value)
#extensions =


[sriov_nic]

#
# From neutron.ml2.sriov.agent
#

# Comma-separated list of <physical_network>:<network_device> tuples mapping
# physical network names to the agent's node-specific physical network device
# interfaces of SR-IOV physical function to be used for VLAN networks. All
# physical networks listed in network_vlan_ranges on the server should have
# mappings to appropriate interfaces on each agent. (list value)
#physical_device_mappings =

# Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping
# network_device to the agent's node-specific list of virtual functions that
# should not be used for virtual networking. vfs_to_exclude is a semicolon-
# separated list of virtual functions to exclude from network_device. The
# network_device in the mapping should appear in the physical_device_mappings
# list. (list value)
#exclude_devices =
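For illustration, a minimal sriov_agent.ini [sriov_nic] section might look like the following. The physical network name `physnet2`, the interface `eth3`, and the PCI addresses are hypothetical examples, not defaults:

```ini
[sriov_nic]
# Map the "physnet2" physical network to the eth3 SR-IOV physical function
physical_device_mappings = physnet2:eth3
# Reserve two virtual functions on eth3 (identified by PCI address) so the
# agent never uses them for virtual networking; VFs are semicolon-separated
exclude_devices = eth3:0000:07:00.2;0000:07:00.3
```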

openvswitch_agent.ini

The plugins/ml2/openvswitch_agent.ini file contains configuration for the Open vSwitch (OVS) layer-2 agent.

[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[agent]

#
# From neutron.ml2.ovs.agent
#

# The number of seconds the agent will wait between polling for local device
# changes. (integer value)
#polling_interval = 2

# Minimize polling by monitoring ovsdb for interface changes. (boolean value)
#minimize_polling = true

# The number of seconds to wait before respawning the ovsdb monitor after
# losing communication with it. (integer value)
#ovsdb_monitor_respawn_interval = 30

# Network types supported by the agent (gre and/or vxlan). (list value)
#tunnel_types =

# The UDP port to use for VXLAN tunnels. (port value)
# Minimum value: 0
# Maximum value: 65535
#vxlan_udp_port = 4789

# MTU size of veth interfaces (integer value)
#veth_mtu = 9000

# Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve
# tunnel scalability. (boolean value)
#l2_population = false

# Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2
# l2population driver. Allows the switch (when supporting an overlay) to
# respond to an ARP request locally without performing a costly ARP broadcast
# into the overlay. (boolean value)
#arp_responder = false

# DEPRECATED: Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate. Note: This
# prevents the VMs attached to this agent from spoofing, it doesn't protect
# them from other devices which have the capability to spoof (e.g. bare metal
# or VMs attached to agents without this flag set to True). Spoofing rules will
# not be added to any ports that have port security disabled. For LinuxBridge,
# this requires ebtables. For OVS, it requires a version that supports matching
# ARP headers. This option will be removed in Ocata so the only way to disable
# protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true

# Set or un-set the don't fragment (DF) bit on outgoing IP packets carrying
# GRE/VXLAN tunnels. (boolean value)
#dont_fragment = true

# Make the l2 agent run in DVR mode. (boolean value)
#enable_distributed_routing = false

# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
# value is set to 0, rpc timeout won't be changed (integer value)
#quitting_rpc_timeout = 10

# Reset flow table on start. Setting this to True will cause brief traffic
# interruption. (boolean value)
#drop_flows_on_start = false

# Set or un-set the tunnel header checksum on outgoing IP packets carrying
# GRE/VXLAN tunnels. (boolean value)
#tunnel_csum = false

# DEPRECATED: Selects the Agent Type reported (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#agent_type = Open vSwitch agent

# Extensions list to use (list value)
#extensions =


[ovs]

#
# From neutron.ml2.ovs.agent
#

# Integration bridge to use. Do not change this parameter unless you have a
# good reason to. This is the name of the OVS integration bridge. There is one
# per hypervisor. The integration bridge acts as a virtual 'patch bay'. All VM
# VIFs are attached to this bridge and then 'patched' according to their
# network connectivity. (string value)
#integration_bridge = br-int

# Tunnel bridge to use. (string value)
#tunnel_bridge = br-tun

# Peer patch port in integration bridge for tunnel bridge. (string value)
#int_peer_patch_port = patch-tun

# Peer patch port in tunnel bridge for integration bridge. (string value)
#tun_peer_patch_port = patch-int

# IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or
# IPv6 address that resides on one of the host network interfaces. The IP
# version of this value must match the value of the 'overlay_ip_version' option
# in the ML2 plug-in configuration file on the neutron server node(s). (IP
# address value)
#local_ip = <None>

# Comma-separated list of <physical_network>:<bridge> tuples mapping physical
# network names to the agent's node-specific Open vSwitch bridge names to be
# used for flat and VLAN networks. Bridge names should be no more than 11
# characters long. Each bridge must exist, and should have a physical network
# interface configured as a port. All physical networks configured on the
# server should
# have mappings to appropriate bridges on each agent. Note: If you remove a
# bridge from this mapping, make sure to disconnect it from the integration
# bridge as it won't be managed by the agent anymore. (list value)
#bridge_mappings =

# Use veths instead of patch ports to interconnect the integration bridge to
# physical networks. Supports kernels without Open vSwitch patch port support
# so long as it is set to True. (boolean value)
#use_veth_interconnection = false

# OpenFlow interface to use. (string value)
# Allowed values: ovs-ofctl, native
#of_interface = native

# OVS datapath to use. 'system' is the default value and corresponds to the
# kernel datapath. To enable the userspace datapath set this value to 'netdev'.
# (string value)
# Allowed values: system, netdev
#datapath_type = system

# OVS vhost-user socket directory. (string value)
#vhostuser_socket_dir = /var/run/openvswitch

# Address to listen on for OpenFlow connections. Used only for 'native' driver.
# (IP address value)
#of_listen_address = 127.0.0.1

# Port to listen on for OpenFlow connections. Used only for 'native' driver.
# (port value)
# Minimum value: 0
# Maximum value: 65535
#of_listen_port = 6633

# Timeout in seconds to wait for the local switch connecting the controller.
# Used only for 'native' driver. (integer value)
#of_connect_timeout = 30

# Timeout in seconds to wait for a single OpenFlow request. Used only for
# 'native' driver. (integer value)
#of_request_timeout = 10

# The interface for interacting with the OVSDB (string value)
# Allowed values: native, vsctl
#ovsdb_interface = native

# The connection string for the native OVSDB backend. Requires the native
# ovsdb_interface to be enabled. (string value)
#ovsdb_connection = tcp:127.0.0.1:6640


[securitygroup]

#
# From neutron.ml2.ovs.agent
#

# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>

# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true

# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true
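As a sketch, a minimal openvswitch_agent.ini for a node with one provider bridge and VXLAN overlay support might read as follows. The bridge name `br-provider`, the physical network name `provider`, and the address `10.0.0.31` are placeholders for your deployment:

```ini
[ovs]
# br-provider must already exist and hold the physical interface as a port
bridge_mappings = provider:br-provider
# Overlay endpoint address; must be an IP configured on this host, and its IP
# version must match overlay_ip_version on the neutron server
local_ip = 10.0.0.31

[agent]
tunnel_types = vxlan
l2_population = true

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```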

dhcp_agent.ini

The dhcp_agent.ini file contains configuration for the DHCP agent.

[DEFAULT]

#
# From neutron.base.agent
#

# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int

# Uses veth for an OVS interface or not. Supports kernels with limited
# namespace support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
# (boolean value)
#ovs_use_veth = false

# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>

# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs
# commands will fail with ALARMCLOCK error. (integer value)
#ovs_vsctl_timeout = 10

#
# From neutron.dhcp.agent
#

# The DHCP agent will resync its state with Neutron to recover from any
# transient notification or RPC errors. The interval is number of seconds
# between attempts. (integer value)
#resync_interval = 5

# The driver used to manage the DHCP server. (string value)
#dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

# The DHCP server can assist with providing metadata support on isolated
# networks. Setting this value to True will cause the DHCP server to append
# specific host routes to the DHCP request. The metadata service will only be
# activated when the subnet does not contain any router port. The guest
# instance must be configured to request host routes via DHCP (Option 121).
# This option doesn't have any effect when force_metadata is set to True.
# (boolean value)
#enable_isolated_metadata = false

# In some cases the Neutron router is not present to provide the metadata IP
# but the DHCP server can be used to provide this info. Setting this value will
# force the DHCP server to append specific host routes to the DHCP request. If
# this option is set, then the metadata service will be activated for all the
# networks. (boolean value)
#force_metadata = false

# Allows for serving metadata requests coming from a dedicated metadata access
# network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected
# to a Neutron router from which the VMs send metadata requests. In this case
# DHCP Option 121 will not be injected in VMs, as they will be able to reach
# 169.254.169.254 through a router. This option requires
# enable_isolated_metadata = True. (boolean value)
#enable_metadata_network = false

# Number of threads to use during sync process. Should not exceed connection
# pool size configured on server. (integer value)
#num_sync_threads = 4

# Location to store DHCP server config files. (string value)
#dhcp_confs = $state_path/dhcp

# DEPRECATED: Domain to use for building the hostnames. This option is
# deprecated. It has been moved to neutron.conf as dns_domain. It will be
# removed in a future release. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#dhcp_domain = openstacklocal

# Override the default dnsmasq settings with this file. (string value)
#dnsmasq_config_file =

# Comma-separated list of the DNS servers which will be used as forwarders.
# (list value)
#dnsmasq_dns_servers =

# Base log dir for dnsmasq logging. The log contains DHCP and DNS log
# information and is useful for debugging issues with either DHCP or DNS. If
# this option is unset, dnsmasq logging is disabled. (string value)
#dnsmasq_base_log_dir = <None>

# Enables the dnsmasq service to provide name resolution for instances via DNS
# resolvers on the host running the DHCP agent. Effectively removes the '--no-
# resolv' option from the dnsmasq process arguments. Adding custom DNS
# resolvers to the 'dnsmasq_dns_servers' option disables this feature. (boolean
# value)
#dnsmasq_local_resolv = false

# Limit number of leases to prevent a denial-of-service. (integer value)
#dnsmasq_lease_max = 16777216

# Use broadcast in DHCP replies. (boolean value)
#dhcp_broadcast_reply = false

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[AGENT]

#
# From neutron.base.agent
#

# Seconds between nodes reporting state to server; should be less than
# agent_down_time, ideally half of agent_down_time or less. (floating
# point value)
#report_interval = 30

# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false

# Availability zone of this node (string value)
#availability_zone = nova
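For illustration, a minimal dhcp_agent.ini for an Open vSwitch deployment might set only these options; the interface driver shown is an assumption for OVS-based environments, and Linux bridge deployments would use a different driver:

```ini
[DEFAULT]
# Interface driver for an Open vSwitch deployment
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Serve metadata via DHCP host routes on subnets without a router port
enable_isolated_metadata = true
```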

l3_agent.ini

The l3_agent.ini file contains configuration for the Layer-3 (routing) agent.

[DEFAULT]

#
# From neutron.base.agent
#

# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int

# Uses veth for an OVS interface or not. Supports kernels with limited
# namespace support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.
# (boolean value)
#ovs_use_veth = false

# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>

# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs
# commands will fail with ALARMCLOCK error. (integer value)
#ovs_vsctl_timeout = 10

#
# From neutron.l3.agent
#

# The working mode for the agent. Allowed modes are: 'legacy' - this preserves
# the existing behavior where the L3 agent is deployed on a centralized
# networking node to provide L3 services like DNAT and SNAT. Use this mode if
# you do not want to adopt DVR. 'dvr' - this mode enables DVR functionality and
# must be used for an L3 agent that runs on a compute host. 'dvr_snat' - this
# enables centralized SNAT support in conjunction with DVR. This mode must be
# used for an L3 agent running on a centralized node (or in single-host
# deployments, e.g. devstack) (string value)
# Allowed values: dvr, dvr_snat, legacy
#agent_mode = legacy

# TCP Port used by Neutron metadata namespace proxy. (port value)
# Minimum value: 0
# Maximum value: 65535
#metadata_port = 9697

# Send this many gratuitous ARPs for HA setup; if less than or equal to 0, the
# feature is disabled (integer value)
#send_arp_for_ha = 3

# Indicates that this L3 agent should also handle routers that do not have an
# external network gateway configured. This option should be True only for a
# single agent in a Neutron deployment, and may be False for all agents if all
# routers must have an external network gateway. (boolean value)
#handle_internal_only_routers = true

# When external_network_bridge is set, each L3 agent can be associated with no
# more than one external network. This value should be set to the UUID of that
# external network. To allow L3 agent support multiple external networks, both
# the external_network_bridge and gateway_external_network_id must be left
# empty. (string value)
#gateway_external_network_id =

# With IPv6, the network used for the external gateway does not need to have an
# associated subnet, since the automatically assigned link-local address (LLA)
# can be used. However, an IPv6 gateway address is needed for use as the next-
# hop for the default route. If no IPv6 gateway address is configured here,
# (and only then) the neutron router will be configured to get its default
# route from router advertisements (RAs) from the upstream router; in which
# case the upstream router must also be configured to send these RAs. The
# ipv6_gateway, when configured, should be the LLA of the interface on the
# upstream router. If a next-hop using a global unicast address (GUA) is
# desired, it needs to be done via a subnet allocated to the network and not
# through this parameter. (string value)
#ipv6_gateway =

# Driver used for ipv6 prefix delegation. This needs to be an entry point
# defined in the neutron.agent.linux.pd_drivers namespace. See setup.cfg for
# entry points included with the neutron source. (string value)
#prefix_delegation_driver = dibbler

# Allow running metadata proxy. (boolean value)
#enable_metadata_proxy = true

# Iptables mangle mark used to mark valid metadata requests. This mark will be
# masked with 0xffff so that only the lower 16 bits will be used. (string
# value)
#metadata_access_mark = 0x1

# Iptables mangle mark used to mark ingress from external network. This mark
# will be masked with 0xffff so that only the lower 16 bits will be used.
# (string value)
#external_ingress_mark = 0x2

# DEPRECATED: Name of bridge used for external network traffic. When this
# parameter is set, the L3 agent will plug an interface directly into an
# external bridge which will not allow any wiring by the L2 agent. Using this
# will result in incorrect port statuses. This option is deprecated and will be
# removed in Ocata. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#external_network_bridge =

# Seconds between running periodic tasks. (integer value)
#periodic_interval = 40

# Number of separate API worker processes for service. If not specified, the
# default is equal to the number of CPUs available for best performance.
# (integer value)
#api_workers = <None>

# Number of RPC worker processes for service. (integer value)
#rpc_workers = 1

# Number of RPC worker processes dedicated to state reports queue. (integer
# value)
#rpc_state_report_workers = 1

# Range of seconds to randomly delay when starting the periodic task scheduler
# to reduce stampeding. (Disable by setting to 0) (integer value)
#periodic_fuzzy_delay = 5

# Location to store keepalived/conntrackd config files (string value)
#ha_confs_path = $state_path/ha_confs

# VRRP authentication type (string value)
# Allowed values: AH, PASS
#ha_vrrp_auth_type = PASS

# VRRP authentication password (string value)
#ha_vrrp_auth_password = <None>

# The advertisement interval in seconds (integer value)
#ha_vrrp_advert_int = 2

# Service to handle DHCPv6 Prefix delegation. (string value)
#pd_dhcp_driver = dibbler

# Location to store IPv6 RA config files (string value)
#ra_confs = $state_path/ra

# MinRtrAdvInterval setting for radvd.conf (integer value)
#min_rtr_adv_interval = 30

# MaxRtrAdvInterval setting for radvd.conf (integer value)
#max_rtr_adv_interval = 100

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[AGENT]

#
# From neutron.base.agent
#

# Seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
#report_interval = 30

# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false

# Availability zone of this node (string value)
#availability_zone = nova
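As the `report_interval` description above notes, the value should be half or less of the server-side `agent_down_time`. A pairing consistent with that guidance might look like the following (values are illustrative; `agent_down_time` is set in the server's neutron.conf, and 75 seconds is its commonly documented default):

```ini
# neutron.conf (server side)
[DEFAULT]
agent_down_time = 75

# l3_agent.ini (agent side): half or less of agent_down_time
[AGENT]
report_interval = 30
```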

macvtap_agent.ini

The macvtap_agent.ini file contains configuration for the macvtap agent.

[DEFAULT]

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[agent]

#
# From neutron.ml2.macvtap.agent
#

# The number of seconds the agent will wait between polling for local device
# changes. (integer value)
#polling_interval = 2

# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
# value is set to 0, rpc timeout won't be changed (integer value)
#quitting_rpc_timeout = 10

# DEPRECATED: Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate. Note: This
# prevents the VMs attached to this agent from spoofing, it doesn't protect
# them from other devices which have the capability to spoof (e.g. bare metal
# or VMs attached to agents without this flag set to True). Spoofing rules will
# not be added to any ports that have port security disabled. For LinuxBridge,
# this requires ebtables. For OVS, it requires a version that supports matching
# ARP headers. This option will be removed in Ocata so the only way to disable
# protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true


[macvtap]

#
# From neutron.ml2.macvtap.agent
#

# Comma-separated list of <physical_network>:<physical_interface> tuples
# mapping physical network names to the agent's node-specific physical network
# interfaces to be used for flat and VLAN networks. All physical networks
# listed in network_vlan_ranges on the server should have mappings to
# appropriate interfaces on each agent. (list value)
#physical_interface_mappings =


[securitygroup]

#
# From neutron.ml2.macvtap.agent
#

# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>

# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true

# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true

metadata_agent.ini

The metadata_agent.ini file contains configuration for the metadata agent.

[DEFAULT]

#
# From neutron.metadata.agent
#

# Location for Metadata Proxy UNIX domain socket. (string value)
#metadata_proxy_socket = $state_path/metadata_proxy

# User (uid or name) running metadata proxy after its initialization (if empty:
# agent effective user). (string value)
#metadata_proxy_user =

# Group (gid or name) running metadata proxy after its initialization (if
# empty: agent effective group). (string value)
#metadata_proxy_group =

# Certificate Authority public key (CA cert) file for ssl (string value)
#auth_ca_cert = <None>

# IP address used by Nova metadata server. (string value)
#nova_metadata_ip = 127.0.0.1

# TCP Port used by Nova metadata server. (port value)
# Minimum value: 0
# Maximum value: 65535
#nova_metadata_port = 8775

# When proxying metadata requests, Neutron signs the Instance-ID header with a
# shared secret to prevent spoofing. You may select any string for a secret,
# but it must match here and in the configuration used by the Nova Metadata
# Server. NOTE: Nova uses the same config key, but in [neutron] section.
# (string value)
#metadata_proxy_shared_secret =
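The signing scheme described above can be sketched as follows. This is an illustrative reimplementation, not the exact neutron code: the metadata proxy computes an HMAC over the instance ID using the shared secret, and Nova's metadata server recomputes and compares it. The function name is ours; verify digest details against your installed release:

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    """Compute an HMAC-SHA256 signature over an instance ID.

    Both sides must use the same shared secret, which is why the value
    of metadata_proxy_shared_secret here must match the one configured
    for the Nova metadata server.
    """
    return hmac.new(shared_secret.encode(),
                    instance_id.encode(),
                    hashlib.sha256).hexdigest()
```

A request whose signature does not match the receiver's own computation is rejected, which is what prevents Instance-ID spoofing.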

# Protocol to access nova metadata, http or https (string value)
# Allowed values: http, https
#nova_metadata_protocol = http

# Allow to perform insecure SSL (https) requests to nova metadata (boolean
# value)
#nova_metadata_insecure = false

# Client certificate for nova metadata api server. (string value)
#nova_client_cert =

# Private key of client certificate. (string value)
#nova_client_priv_key =

# Metadata Proxy UNIX domain socket mode, 4 values allowed: 'deduce': deduce
# mode from metadata_proxy_user/group values, 'user': set metadata proxy socket
# mode to 0o644, to use when metadata_proxy_user is agent effective user or
# root, 'group': set metadata proxy socket mode to 0o664, to use when
# metadata_proxy_group is agent effective group or root, 'all': set metadata
# proxy socket mode to 0o666, to use otherwise. (string value)
# Allowed values: deduce, user, group, all
#metadata_proxy_socket_mode = deduce
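The `deduce` behavior described above can be summarized as a small decision function. This is a hypothetical sketch based only on the option description, not neutron's actual implementation:

```python
def deduce_socket_mode(proxy_user, proxy_group, agent_user, agent_group):
    """Return the UNIX socket mode implied by 'deduce'.

    Empty proxy_user/proxy_group mean the agent's effective user/group
    are used, mirroring the metadata_proxy_user/group defaults above.
    """
    user = proxy_user or agent_user
    group = proxy_group or agent_group
    if user in (agent_user, "root"):
        return 0o644  # 'user' mode
    if group in (agent_group, "root"):
        return 0o664  # 'group' mode
    return 0o666      # 'all' mode
```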

# Number of separate worker processes for metadata server (defaults to half of
# the number of CPUs) (integer value)
#metadata_workers = 1

# Number of backlog requests to configure the metadata server socket with
# (integer value)
#metadata_backlog = 4096

# DEPRECATED: URL to connect to the cache back end. This option is deprecated
# in the Newton release and will be removed. Please add a [cache] group for
# oslo.cache in your neutron.conf and add "enable" and "backend" options in
# this section. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#cache_url =

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false


[AGENT]

#
# From neutron.metadata.agent
#

# Seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
#report_interval = 30

# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false


[cache]

#
# From oslo.cache
#

# Prefix for building the configuration dictionary for the cache region. This
# should not need to be changed unless there is another dogpile.cache region
# with the same configuration name. (string value)
#config_prefix = cache.oslo

# Default TTL, in seconds, for any cached item in the dogpile.cache region.
# This applies to any cached method that doesn't have an explicit cache
# expiration time defined for it. (integer value)
#expiration_time = 600

# Dogpile.cache backend module. It is recommended that Memcache or Redis
# (dogpile.cache.redis) be used in production deployments. For eventlet-based
# or highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool)
# is recommended. For low thread servers, dogpile.cache.memcached is
# recommended. Test environments with a single instance of the server can use
# the dogpile.cache.memory backend. (string value)
#backend = dogpile.cache.null

# Arguments supplied to the backend module. Specify this option once per
# argument to be passed to the dogpile.cache backend. Example format:
# "<argname>:<value>". (multi valued)
#backend_argument =

# Proxy classes to import that will affect the way the dogpile.cache backend
# functions. See the dogpile.cache documentation on changing-backend-behavior.
# (list value)
#proxies =

# Global toggle for caching. (boolean value)
#enabled = false

# Extra debugging from the cache backend (cache keys, get/set/delete/etc
# calls). This is only really useful if you need to see the specific cache-
# backend get/set/delete calls with the keys/values.  Typically this should be
# left set to false. (boolean value)
#debug_cache_backend = false

# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (list value)
#memcache_servers = localhost:11211

# Number of seconds memcached server is considered dead before it is tried
# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
# (integer value)
#memcache_dead_retry = 300

# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (integer value)
#memcache_socket_timeout = 3

# Max total number of open connections to every memcached server.
# (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_maxsize = 10

# Number of seconds a connection to memcached is held unused in the pool before
# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_unused_timeout = 60

# Number of seconds that an operation will wait to get a memcache client
# connection. (integer value)
#memcache_pool_connection_get_timeout = 10
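Per the `cache_url` deprecation note earlier in this file, caching is now configured through this `[cache]` group. A minimal memcached-backed configuration might look like the following (server address is illustrative):

```ini
[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = 127.0.0.1:11211
```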

metering_agent.ini

The metering_agent.ini file contains configuration for the metering agent.

[DEFAULT]

#
# From neutron.metering.agent
#

# Metering driver (string value)
#driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver

# Interval between two metering measures (integer value)
#measure_interval = 30

# Interval between two metering reports (integer value)
#report_interval = 300

# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

Networking advanced services configuration files

The Networking advanced services, such as Load-Balancer-as-a-Service (LBaaS), Firewall-as-a-Service (FWaaS), and VPN-as-a-Service (VPNaaS), support automatic generation of their configuration files. The sample configuration files are shown below; you can generate the latest versions by running the generate_config_file_samples.sh script provided in the root directory of each of the LBaaS, FWaaS, and VPNaaS services.
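For example, to regenerate the LBaaS samples from a neutron-lbaas source checkout (the script path assumes the standard source layout; adjust for your checkout):

```shell
$ cd neutron-lbaas
$ ./tools/generate_config_file_samples.sh
```

The same pattern applies to the FWaaS and VPNaaS source trees.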

Load-Balancer-as-a-Service (LBaaS)
neutron_lbaas.conf
[DEFAULT]

#
# From neutron.lbaas
#

# Driver to use for scheduling to a default loadbalancer agent (string value)
#loadbalancer_scheduler_driver = neutron_lbaas.agent_scheduler.ChanceScheduler


[certificates]

#
# From neutron.lbaas
#

# Certificate Manager plugin. Defaults to barbican. (string value)
#cert_manager_type = barbican

# Name of the Barbican authentication method to use (string value)
#barbican_auth = barbican_acl_auth

# Absolute path to the certificate storage directory. Defaults to
# env[OS_LBAAS_TLS_STORAGE]. (string value)
#storage_path = /var/lib/neutron-lbaas/certificates/


[quotas]

#
# From neutron.lbaas
#

# Number of LoadBalancers allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_loadbalancer = 10

# Number of Loadbalancer Listeners allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_listener = -1

# Number of pools allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_pool = 10

# Number of pool members allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_member = -1

# Number of health monitors allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_healthmonitor = -1


[service_auth]

#
# From neutron.lbaas
#

# Authentication endpoint (string value)
#auth_url = http://127.0.0.1:5000/v2.0

# The service admin user name (string value)
#admin_user = admin

# The service admin tenant name (string value)
#admin_tenant_name = admin

# The service admin password (string value)
#admin_password = password

# The admin user domain name (string value)
#admin_user_domain = admin

# The admin project domain name (string value)
#admin_project_domain = admin

# The deployment region (string value)
#region = RegionOne

# The name of the service (string value)
#service_name = lbaas

# The auth version used to authenticate (string value)
#auth_version = 2

# The endpoint_type to be used (string value)
#endpoint_type = public

# Disable server certificate verification (boolean value)
#insecure = false


[service_providers]

#
# From neutron.lbaas
#

# Defines providers for advanced services using the format:
# <service_type>:<name>:<driver>[:default] (multi valued)
#service_provider =
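The `<service_type>:<name>:<driver>[:default]` format above can be illustrated with the in-tree HAProxy driver. The driver path shown is the one commonly documented for this release series; verify it against your installed packages:

```ini
[service_providers]
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
```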

lbaas_agent.ini
[DEFAULT]

#
# From neutron.lbaas.agent
#

# Seconds between periodic task runs (integer value)
#periodic_interval = 10

# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int

# Uses veth for an OVS interface or not. Support kernels with limited namespace
# support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. (boolean
# value)
#ovs_use_veth = false

# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
services_lbaas.conf
[DEFAULT]


[haproxy]

#
# From neutron.lbaas.service
#

# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>

# Seconds between periodic task runs (integer value)
#periodic_interval = 10


[octavia]

#
# From neutron.lbaas.service
#

# URL of Octavia controller root (string value)
#base_url = http://127.0.0.1:9876

# Interval in seconds to poll octavia when an entity is created, updated, or
# deleted. (integer value)
#request_poll_interval = 3

# Time to stop polling octavia when a status of an entity does not change.
# (integer value)
#request_poll_timeout = 100

# True if Octavia will be responsible for allocating the VIP. False if neutron-
# lbaas will allocate it and pass to Octavia. (boolean value)
#allocates_vip = false


[radwarev2]

#
# From neutron.lbaas.service
#

# IP address of vDirect server. (string value)
#vdirect_address = <None>

# IP address of secondary vDirect server. (string value)
#ha_secondary_address = <None>

# vDirect user name. (string value)
#vdirect_user = vDirect

# vDirect user password. (string value)
#vdirect_password = radware

# Service ADC type. Default: VA. (string value)
#service_adc_type = VA

# Service ADC version. (string value)
#service_adc_version =

# Enables or disables the Service HA pair. Default: False. (boolean value)
#service_ha_pair = false

# Service throughput. Default: 1000. (integer value)
#service_throughput = 1000

# Service SSL throughput. Default: 100. (integer value)
#service_ssl_throughput = 100

# Service compression throughput. Default: 100. (integer value)
#service_compression_throughput = 100

# Size of service cache. Default: 20. (integer value)
#service_cache = 20

# Resource pool IDs. (list value)
#service_resource_pool_ids =

# A required VLAN for the interswitch link to use. (integer value)
#service_isl_vlan = -1

# Enable or disable Alteon interswitch link for stateful session failover.
# Default: False. (boolean value)
#service_session_mirroring_enabled = false

# Name of the workflow template. Default: os_lb_v2. (string value)
#workflow_template_name = os_lb_v2

# Name of child workflow templates used. Default: manage_l3 (list value)
#child_workflow_template_names = manage_l3

# Parameter for l2_l3 workflow constructor. (dict value)
#workflow_params = allocate_ha_ips:True,allocate_ha_vrrp:True,data_ip_address:192.168.200.99,data_ip_mask:255.255.255.0,data_port:1,gateway:192.168.200.1,ha_ip_pool_name:default,ha_network_name:HA-Network,ha_port:2,twoleg_enabled:_REPLACE_

# Name of the workflow action. Default: apply. (string value)
#workflow_action_name = apply

# Name of the workflow action for statistics. Default: stats. (string value)
#stats_action_name = stats


[radwarev2_debug]

#
# From neutron.lbaas.service
#

# Provision ADC service? (boolean value)
#provision_service = true

# Configure ADC with L3 parameters? (boolean value)
#configure_l3 = true

# Configure ADC with L4 parameters? (boolean value)
#configure_l4 = true
VPN-as-a-Service (VPNaaS)
neutron_vpnaas.conf
[DEFAULT]


[service_providers]

#
# From neutron.vpnaas
#

# Defines providers for advanced services using the format:
# <service_type>:<name>:<driver>[:default] (multi valued)
#service_provider =
vpn_agent.ini
[DEFAULT]


[ipsec]

#
# From neutron.vpnaas.agent
#

# Location to store ipsec server config files (string value)
#config_base_dir = $state_path/ipsec

# Interval for checking ipsec status (integer value)
#ipsec_status_check_interval = 60

# Enable detailed logging for the ipsec pluto process. If the flag is set to True, the
# detailed logging will be written into config_base_dir/<pid>/log. Note: This
# setting applies to OpenSwan and LibreSwan only. StrongSwan logs to syslog.
# (boolean value)
#enable_detailed_logging = false


[pluto]

#
# From neutron.vpnaas.agent
#

# Initial interval in seconds for checking if pluto daemon is shutdown (integer
# value)
# Deprecated group/name - [libreswan]/shutdown_check_timeout
#shutdown_check_timeout = 1

# The maximum number of retries for checking for pluto daemon shutdown (integer
# value)
# Deprecated group/name - [libreswan]/shutdown_check_retries
#shutdown_check_retries = 5

# A factor to increase the retry interval for each retry (floating point value)
# Deprecated group/name - [libreswan]/shutdown_check_back_off
#shutdown_check_back_off = 1.5

# Enable this flag to avoid unnecessary restarts (boolean value)
# Deprecated group/name - [libreswan]/restart_check_config
#restart_check_config = false


[strongswan]

#
# From neutron.vpnaas.agent
#

# Template file for ipsec configuration. (string value)
#ipsec_config_template = /home/openstack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.conf.template

# Template file for strongswan configuration. (string value)
#strongswan_config_template = /home/openstack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/strongswan.conf.template

# Template file for ipsec secret configuration. (string value)
#ipsec_secret_template = /home/openstack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.secret.template

# The area where default StrongSwan configuration files are located. (string
# value)
#default_config_area = /etc/strongswan.d


[vpnagent]

#
# From neutron.vpnaas.agent
#

# The vpn device drivers Neutron will use (multi valued)
#vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.ipsec.OpenSwanDriver, neutron_vpnaas.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver, neutron_vpnaas.services.vpn.device_drivers.vyatta_ipsec.VyattaIPSecDriver, neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.fedora_strongswan_ipsec.FedoraStrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver

New, updated, and deprecated options in Newton for Networking

New options
Option = default value (Type) Help string
[DEFAULT] cache_url = (StrOpt) URL to connect to the cache back end. This option is deprecated in the Newton release and will be removed. Please add a [cache] group for oslo.cache in your neutron.conf and add “enable” and “backend” options in this section.
[AGENT] debug_iptables_rules = False (BoolOpt) Duplicate every iptables difference calculation to ensure the format being generated matches the format of iptables-save. This option should not be turned on for production systems because it imposes a performance penalty.
[FDB] shared_physical_device_mappings = (ListOpt) Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent’s node-specific shared physical network device between SR-IOV and OVS or SR-IOV and linux bridge
[cache] backend = dogpile.cache.null (StrOpt) Dogpile.cache backend module. It is recommended that Memcache or Redis (dogpile.cache.redis) be used in production deployments. For eventlet-based or highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For low thread servers, dogpile.cache.memcached is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend.
[cache] backend_argument = [] (MultiStrOpt) Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: “<argname>:<value>”.
[cache] config_prefix = cache.oslo (StrOpt) Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name.
[cache] debug_cache_backend = False (BoolOpt) Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false.
[cache] enabled = False (BoolOpt) Global toggle for caching.
[cache] expiration_time = 600 (IntOpt) Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn’t have an explicit cache expiration time defined for it.
[cache] memcache_dead_retry = 300 (IntOpt) Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
[cache] memcache_pool_connection_get_timeout = 10 (IntOpt) Number of seconds that an operation will wait to get a memcache client connection.
[cache] memcache_pool_maxsize = 10 (IntOpt) Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only).
[cache] memcache_pool_unused_timeout = 60 (IntOpt) Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only).
[cache] memcache_servers = localhost:11211 (ListOpt) Memcache servers in the format of “host:port”. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
[cache] memcache_socket_timeout = 3 (IntOpt) Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
[cache] proxies = (ListOpt) Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior.
[ml2] overlay_ip_version = 4 (IntOpt) IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6.
[profiler] connection_string = messaging:// (StrOpt) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging://: use oslo_messaging driver for sending notifications.
[profiler] enabled = False (BoolOpt) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: * True: Enables the feature * False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty.
[profiler] hmac_keys = SECRET_KEY (StrOpt) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources.
[profiler] trace_sqlalchemy = False (BoolOpt) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced). Possible values: * True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed for how much time was spent on it. * False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way.
New default values
Option Previous default value New default value
[DEFAULT] allow_pagination False True
[DEFAULT] allow_sorting False True
[DEFAULT] dnsmasq_dns_servers None  
[DEFAULT] external_network_bridge br-ex  
[DEFAULT] ipam_driver None internal
[OVS] of_interface ovs-ofctl native
[OVS] ovsdb_interface vsctl native
[ml2] path_mtu 1500 0
[ml2_sriov] supported_pci_vendor_devs 15b3:1004, 8086:10ca None
[ml2_type_geneve] max_header_size 50 30
Deprecated options
Deprecated option New Option
[DEFAULT] min_l3_agents_per_router None
[DEFAULT] use_syslog None
[ml2_sriov] supported_pci_vendor_devs None

This chapter explains the Networking service configuration options. For installation prerequisites, steps, and use cases, see the Installation Tutorials and Guides for your distribution (docs.openstack.org) and the OpenStack Administrator Guide.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Object Storage service

Introduction to Object Storage

Object Storage (swift) is a robust, highly scalable, and fault-tolerant storage platform for unstructured data such as objects. Objects are stored as bits, accessed through a RESTful, HTTP-based interface. You cannot access data at the block or file level. Object Storage is commonly used to archive and back up data, with use cases in virtual machine image, photo, video, and music storage.

Object Storage provides a high degree of availability, throughput, and performance with its scale-out architecture. Each object is replicated across multiple servers, residing within the same data center or across data centers, which mitigates the risk of network and hardware failure. In the event of hardware failure, Object Storage will automatically copy objects to a new location to ensure that your chosen number of copies is always available.

Object Storage also employs erasure coding. Erasure coding is a set of algorithms that allows the reconstruction of missing data from a set of original data. In theory, erasure coding uses less storage capacity with similar durability characteristics as replicas. From an application perspective, erasure coding support is transparent. Object Storage implements erasure coding as a Storage Policy.
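
The reconstruction idea behind erasure coding can be sketched with a toy XOR parity scheme (Swift's real erasure coding is provided by external libraries and is far more capable; this example, with hypothetical function names, only illustrates the principle of rebuilding missing data):

```python
def make_fragments(data):
    """Split even-length data into two halves plus an XOR parity fragment."""
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def reconstruct(a, b, parity):
    """Recover the original data even if one data fragment is missing (None)."""
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, parity))
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, parity))
    return a + b

a, b, p = make_fragments(b"swiftobj")
print(reconstruct(None, b, p))  # b'swiftobj' -- rebuilt without fragment a
```

With three stored fragments, any single loss is recoverable, yet total storage is only 1.5x the original size rather than the 3x of three full replicas.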

Object Storage is an eventually consistent distributed storage platform; it sacrifices consistency for maximum availability and partition tolerance. Object Storage enables you to create a reliable platform by using commodity hardware and inexpensive storage.

For more information, review the key concepts in the developer documentation at docs.openstack.org/developer/swift/.

Object Storage general service configuration

Object Storage service uses multiple configuration files for multiple services and background daemons, and paste.deploy to manage server configurations. For more information about paste.deploy, see: http://pythonpaste.org/deploy/.

Default configuration options are set in the [DEFAULT] section, and any option specified there can be overridden in any of the other sections by using the syntax set option_name = value.
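
For example, an app section can override a [DEFAULT] value this way (an illustrative fragment, not a complete server configuration):

```ini
[DEFAULT]
log_level = INFO

[app:object-server]
use = egg:swift#object
# Overrides the [DEFAULT] value for this app section only.
set log_level = DEBUG
```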

Configuration for servers and daemons can be expressed together in the same file for each type of server, or separately. If a required section for the service trying to start is missing, there will be an error. Sections not used by the service are ignored.

Consider the example of an Object Storage node. By convention configuration for the object-server, object-updater, object-replicator, and object-auditor exist in a single file /etc/swift/object-server.conf:

[DEFAULT]

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]
reclaim_age = 259200

[object-updater]

[object-auditor]

Note

Default constraints can be overridden in swift.conf. For example, you can change the maximum object size and other variables.
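
For instance, raising the maximum object size could look like this in /etc/swift/swift.conf (an illustrative fragment; the value shown assumes the standard 5 GiB default):

```ini
[swift-constraints]
# Default is 5368709120 (5 GiB); raise the limit to 10 GiB.
max_file_size = 10737418240
```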

Object Storage services expect a configuration path as the first argument:

$ swift-object-auditor
Usage: swift-object-auditor CONFIG [options]

Error: missing config path argument

If you omit the object-auditor section, this file cannot be used as the configuration path when starting the swift-object-auditor daemon:

$ swift-object-auditor /etc/swift/object-server.conf
Unable to find object-auditor config section in /etc/swift/object-server.conf

If the configuration path is a directory instead of a file, all of the files in the directory with the file extension .conf will be combined to generate the configuration object which is delivered to the Object Storage service. This is referred to generally as directory-based configuration.

Directory-based configuration leverages ConfigParser’s native multi-file support. Files ending in .conf in the given directory are parsed in lexicographical order. File names starting with . are ignored. A mixture of file and directory configuration paths is not supported. If the configuration path is a file, only that file will be parsed.

The Object Storage service management tool swift-init has adopted the convention of looking for /etc/swift/{type}-server.conf.d/ if the file /etc/swift/{type}-server.conf does not exist.

When using directory-based configuration, if the same option under the same section appears more than once in different files, the last value parsed overrides previous occurrences. You can ensure proper override precedence by prefixing the files in the configuration directory with numerical values, as in the following example file layout:

/etc/swift/
    default.base
    object-server.conf.d/
        000_default.conf -> ../default.base
        001_default-override.conf
        010_server.conf
        020_replicator.conf
        030_updater.conf
        040_auditor.conf

You can inspect the resulting combined configuration object using the swift-config command-line tool.
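
The merge behavior can be approximated with Python's standard ConfigParser, whose multi-file support Swift builds on (a simplified sketch; swift-init and swift-config implement the real logic):

```python
import configparser
import glob
import os
import tempfile

def read_conf_dir(conf_dir):
    """Combine all *.conf files in lexicographical order; later files win."""
    parser = configparser.ConfigParser()
    # glob's "*" does not match names starting with ".", mirroring the
    # rule that such file names are ignored.
    parser.read(sorted(glob.glob(os.path.join(conf_dir, "*.conf"))))
    return parser

conf_dir = tempfile.mkdtemp()
with open(os.path.join(conf_dir, "000_default.conf"), "w") as f:
    f.write("[DEFAULT]\nworkers = 2\n")
with open(os.path.join(conf_dir, "010_server.conf"), "w") as f:
    f.write("[DEFAULT]\nworkers = 8\n")

conf = read_conf_dir(conf_dir)
print(conf["DEFAULT"]["workers"])  # the later file overrides: prints 8
```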

All the services of an Object Store deployment share a common configuration in the [swift-hash] section of the /etc/swift/swift.conf file. The swift_hash_path_suffix and swift_hash_path_prefix values must be identical on all the nodes.

Description of configuration options for [swift-hash] in swift.conf
Configuration option = Default value Description
swift_hash_path_prefix = changeme A prefix used by hash_path to offer a bit more security when generating hashes for paths. It simply prepends this value to all paths; if someone knows this prefix, it’s easier for them to guess the hash a path will end up with. New installations are advised to set this parameter to a random secret, which would not be disclosed outside the organization. The same secret needs to be used by all swift servers of the same cluster. Existing installations should set this parameter to an empty string.
swift_hash_path_suffix = changeme A suffix used by hash_path to offer a bit more security when generating hashes for paths. It simply appends this value to all paths; if someone knows this suffix, it’s easier for them to guess the hash a path will end up with. New installations are advised to set this parameter to a random secret, which would not be disclosed outside the organization. The same secret needs to be used by all swift servers of the same cluster. Existing installations should set this parameter to an empty string.
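
Conceptually, the prefix and suffix are mixed into every path hash, which is why they must match cluster-wide. A simplified Python sketch of the idea (not Swift's exact hash_path implementation):

```python
from hashlib import md5

# Cluster-wide secrets from the [swift-hash] section (example values).
HASH_PATH_PREFIX = b"changeme"
HASH_PATH_SUFFIX = b"changeme"

def hash_path(account, container=None, obj=None):
    # The secrets are mixed into the digest, so on-disk partition
    # locations cannot be predicted without knowing them.
    parts = [p for p in (account, container, obj) if p is not None]
    raw = HASH_PATH_PREFIX + b"/" + "/".join(parts).encode() + HASH_PATH_SUFFIX
    return md5(raw).hexdigest()

h = hash_path("AUTH_test", "photos", "cat.jpg")
print(len(h))  # 32-character hex digest
```

A node configured with different secrets would compute a different digest for the same object and look for it in the wrong place, which is why the values must be identical on all nodes.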

Object server configuration

Find an example object server configuration at etc/object-server.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [DEFAULT] in object-server.conf
Configuration option = Default value Description
backlog = 4096 Maximum number of allowed pending TCP connections
bind_ip = 0.0.0.0 IP Address for server to bind to
bind_port = 6000 Port for server to bind to
bind_timeout = 30 Seconds to attempt bind before giving up
client_timeout = 60 Timeout to read one chunk from a client
conn_timeout = 0.5 Connection timeout to external services
container_update_timeout = 1.0 Time to wait while sending a container update on object update.
devices = /srv/node Parent directory of where devices are mounted
disable_fallocate = false Disable “fast fail” fallocate checks if the underlying filesystem does not support it.
disk_chunk_size = 65536 Size of chunks to read/write to disk
eventlet_debug = false If true, turn on debug logging for eventlet
expiring_objects_account_name = expiring_objects Account name for the expiring objects
expiring_objects_container_divisor = 86400 Divisor for the expiring objects container
fallocate_reserve = 0 You can set fallocate_reserve to the number of bytes you’d like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they’re out of space early.
log_address = /dev/log Location where syslog sends the logs to
log_custom_handlers = `` `` Comma-separated list of functions to call to setup custom log handlers.
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_max_line_length = 0 Caps the length of log lines to the value given; no limit if set to 0, the default.
log_name = swift Label used when logging
log_statsd_default_sample_rate = 1.0 Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_host = localhost If not set, the StatsD feature is disabled.
log_statsd_metric_prefix = `` `` Value will be prepended to every metric sent to the StatsD server.
log_statsd_port = 8125 Port value for the StatsD server.
log_statsd_sample_rate_factor = 1.0 Not recommended to set this to a value less than 1.0. If the frequency of logging is too high, tune the log_statsd_default_sample_rate instead.
log_udp_host = `` `` If not set, the UDP receiver for syslog is disabled.
log_udp_port = 514 Port value for UDP receiver, if enabled.
max_clients = 1024 Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request can have on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another call while processing, allowing other workers a chance to process it.
mount_check = true Whether or not to check if the devices are mounted to prevent accidentally writing to the root device
network_chunk_size = 65536 Size of chunks to read/write over the network
node_timeout = 3 Request timeout to external services
servers_per_port = 0 If each disk in each storage policy ring has unique port numbers for its “ip” value, you can use this setting to have each object-server worker only service requests for the single disk matching the port in the ring. The value of this setting determines how many worker processes run for each port (disk) in the ring.
swift_dir = /etc/swift Swift configuration directory
user = swift User to run as
workers = auto Number of worker processes to spawn; auto means one worker per CPU core. By increasing the number of workers to a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests.
Description of configuration options for [app-object-server] in object-server.conf
Configuration option = Default value Description
allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object Comma-separated list of headers that can be set in metadata of an object
auto_create_account_prefix = . Prefix to use when automatically creating accounts
keep_cache_private = false Allow non-public objects to stay in kernel’s buffer cache
keep_cache_size = 5242880 Largest object size to keep in buffer cache
max_upload_time = 86400 Maximum time allowed to upload an object
mb_per_sync = 512 On PUT requests, sync file every n MB
replication_concurrency = 4 Set to restrict the number of concurrent incoming REPLICATION requests; set to 0 for unlimited
replication_failure_ratio = 1.0 If the value of failures / successes of REPLICATION subrequests exceeds this ratio, the overall REPLICATION request will be aborted
replication_failure_threshold = 100 The number of subrequest failures before the replication_failure_ratio is checked
replication_lock_timeout = 15 Number of seconds to wait for an existing replication device lock before giving up.
replication_one_per_device = True Restricts incoming REPLICATION requests to one per device, replication_concurrency above allowing. This can help control I/O to each device, but you may wish to set this to False to allow multiple REPLICATION requests (up to the above replication_concurrency setting) per device.
replication_server = false If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request.
set log_address = /dev/log Location where syslog sends the logs to
set log_facility = LOG_LOCAL0 Syslog log facility
set log_level = INFO Log level
set log_name = object-server Label to use when logging
set log_requests = true Whether or not to log requests
slow = 0 If > 0, Minimum time in seconds for a PUT or DELETE request to complete
splice = no Use splice() for zero-copy object GETs. This requires Linux kernel version 3.0 or greater. When you set “splice = yes” but the kernel does not support it, error messages will appear in the object server logs at startup, but your object servers should continue to function.
threads_per_disk = 0 Size of the per-disk thread pool used for performing disk I/O. The default of 0 means to not use a per-disk thread pool. It is recommended to keep this value small, as large values can result in high read latencies due to large queue depths. A good starting point is 4 threads per disk.
use = egg:swift#object Entry point of paste.deploy in the server
Description of configuration options for [pipeline-main] in object-server.conf
Configuration option = Default value Description
pipeline = healthcheck recon object-server Pipeline to use for processing operations.
Description of configuration options for [object-replicator] in object-server.conf
Configuration option = Default value Description
concurrency = 1 Number of replication workers to spawn
daemonize = on Whether or not to run replication as a daemon
handoff_delete = auto By default, handoff partitions will be removed when they have been successfully replicated to all the canonical nodes. If set to an integer n, the partition will be removed once it is successfully replicated to n nodes. The default setting should not be changed, except for extreme situations.
handoffs_first = False If set to True, partitions that are not supposed to be on the node will be replicated first. The default setting should not be changed, except for extreme situations.
http_timeout = 60 Maximum duration for an HTTP request
interval = 30 Minimum time for a pass to take
lockup_timeout = 1800 Attempts to kill all workers if nothing replicates for lockup_timeout seconds
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = object-replicator Label used when logging
node_timeout = <whatever's in the DEFAULT section or 10> Request timeout to external services
reclaim_age = 604800 Time elapsed in seconds before an object can be reclaimed
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
ring_check_interval = 15 How often (in seconds) to check the ring
rsync_bwlimit = 0 bandwidth limit for rsync in kB/s. 0 means unlimited
rsync_compress = no

Allows rsync to compress data which is transmitted to the destination node during sync. However, this applies only when the destination node is in a different region than the local one.

Note

Objects that are already compressed (for example: .tar.gz, .mp3) might slow down the syncing process.

rsync_error_log_line_length = 0 Limits the length of the rsync error log lines. 0 will log the entire line.
rsync_io_timeout = 30 Passed to rsync for a max duration (seconds) of an I/O op
rsync_module = {replication_ip}::object Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples.
rsync_timeout = 900 Max duration (seconds) of a partition rsync
run_pause = 30 Time in seconds to wait between replication passes
stats_interval = 300 Interval in seconds between logging replication statistics
sync_method = rsync default is rsync, alternative is ssync
Description of configuration options for [object-updater] in object-server.conf
Configuration option = Default value Description
concurrency = 1 Number of replication workers to spawn
interval = 300 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = object-updater Label used when logging
node_timeout = <whatever's in the DEFAULT section or 10> Request timeout to external services
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
slowdown = 0.01 Time in seconds to wait between objects
Description of configuration options for [object-auditor] in object-server.conf
Configuration option = Default value Description
bytes_per_second = 10000000 Maximum bytes audited per second. Should be tuned according to individual system specs. 0 is unlimited.
concurrency = 1 Number of auditor workers to spawn
disk_chunk_size = 65536 Size of chunks to read/write to disk
files_per_second = 20 Maximum files audited per second. Should be tuned according to individual system specs. 0 is unlimited.
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = object-auditor Label used when logging
log_time = 3600 Frequency of status logs in seconds.
object_size_stats = Takes a comma-separated list of ints. When set, the object auditor will increment a counter for every object whose size is less than or equal to the given break points and reports the result after a full scan.
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
zero_byte_files_per_second = 50 Maximum zero byte files audited per second.
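The object_size_stats counters can be pictured as a simple bucketing routine. This is an illustrative reconstruction of the behavior described in the sample configuration file (each object is counted against the first break point its size does not exceed), not Swift's actual code:

```python
def bucket_object_sizes(sizes, breakpoints):
    """Count each object against the first break point >= its size;
    anything larger than every break point goes into an "OVER" bucket."""
    buckets = {bp: 0 for bp in sorted(breakpoints)}
    buckets["OVER"] = 0
    for size in sizes:
        for bp in sorted(breakpoints):
            if size <= bp:
                buckets[bp] += 1
                break
        else:
            buckets["OVER"] += 1
    return buckets

# e.g. object_size_stats = 100, 1000
buckets = bucket_object_sizes([10, 500, 5000], [100, 1000])
print(buckets)
# {100: 1, 1000: 1, 'OVER': 1}
```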
Description of configuration options for [filter-healthcheck] in object-server.conf
Configuration option = Default value Description
disable_path = An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE”
use = egg:swift#healthcheck Entry point of paste.deploy in the server
Description of configuration options for [filter-recon] in object-server.conf
Configuration option = Default value Description
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
recon_lock_path = /var/lock Directory where lock files will be stored
use = egg:swift#recon Entry point of paste.deploy in the server
Description of configuration options for [filter-xprofile] in object-server.conf
Configuration option = Default value Description
dump_interval = 5.0 The profile data will be dumped to local disk based on the above naming rule at this interval (seconds).
dump_timestamp = false Be careful, this option will enable the profiler to dump data into the file with a time stamp which means that there will be lots of files piled up in the directory.
flush_at_shutdown = false Clears the data when the wsgi server shuts down.
log_filename_prefix = /tmp/log/swift/profile/default.profile This prefix is used to combine the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like: /var/log/swift/profile/object.profile
path = /__profile__ This is the path of the URL to access the mini web UI.
profile_module = eventlet.green.profile This option enables you to switch profilers which inherit from the Python standard profiler. Currently, the supported values are 'cProfile', 'eventlet.green.profile', etc.
unwind = false Unwind the iterator of applications
use = egg:swift#xprofile Entry point of paste.deploy in the server
Sample object server configuration file
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6200
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.  NOTE: if servers_per_port is set, this setting is
# ignored.
# workers = auto
#
# Make object-server run this many worker processes per unique port of "local"
# ring devices across all storage policies. The default value of 0 disables this
# feature.
# servers_per_port = 0
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# Time to wait while attempting to connect to another backend node.
# conn_timeout = 0.5
# Time to wait while sending each chunk of data to another backend node.
# node_timeout = 3
# Time to wait while sending a container update on object update.
# container_update_timeout = 1.0
# Time to wait while receiving each chunk of data from a client or another
# backend node.
# client_timeout = 60
#
# network_chunk_size = 65536
# disk_chunk_size = 65536
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object
# You can override the default log routing for this app here:
# set log_name = object-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# max_upload_time = 86400
#
# slow is the total amount of seconds an object PUT/DELETE request takes at
# least. If it is faster, the object server will sleep this amount of time minus
# the already passed transaction time.  This is only useful for simulating slow
# devices on storage nodes during testing and development.
# slow = 0
#
# Objects smaller than this are not evicted from the buffercache once read
# keep_cache_size = 5242880
#
# If true, objects for authenticated GET requests may be kept in buffer cache
# if small enough
# keep_cache_private = false
#
# on PUTs, sync data every n MB
# mb_per_sync = 512
#
# Comma separated list of headers that can be set in metadata on an object.
# This list is in addition to X-Object-Meta-* headers and cannot include
# Content-Type, etag, Content-Length, or deleted
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object
#
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
#
# Set to restrict the number of concurrent incoming SSYNC requests
# Set to 0 for unlimited
# Note that SSYNC requests are only used by the object reconstructor or the
# object replicator when configured to use ssync.
# replication_concurrency = 4
#
# Restricts incoming SSYNC requests to one per device,
# replication_concurrency above allowing. This can help control I/O to each
# device, but you may wish to set this to False to allow multiple SSYNC
# requests (up to the above replication_concurrency setting) per device.
# replication_one_per_device = True
#
# Number of seconds to wait for an existing replication device lock before
# giving up.
# replication_lock_timeout = 15
#
# These next two settings control when the SSYNC subrequest handler will
# abort an incoming SSYNC attempt. An abort will occur if there are at
# least threshold number of failures and the value of failures / successes
# exceeds the ratio. The defaults of 100 and 1.0 means that at least 100
# failures have to occur and there have to be more failures than successes for
# an abort to occur.
# replication_failure_threshold = 100
# replication_failure_ratio = 1.0
#
# Use splice() for zero-copy object GETs. This requires Linux kernel
# version 3.0 or greater. If you set "splice = yes" but the kernel
# does not support it, error messages will appear in the object server
# logs at startup, but your object servers should continue to function.
#
# splice = no
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift
#recon_lock_path = /var/lock

[object-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# concurrency = 1
# stats_interval = 300
#
# default is rsync, alternative is ssync
# sync_method = rsync
#
# max duration of a partition rsync
# rsync_timeout = 900
#
# bandwidth limit for rsync in kB/s. 0 means unlimited
# rsync_bwlimit = 0
#
# passed to rsync for io op timeout
# rsync_io_timeout = 30
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# NOTE: Objects that are already compressed (for example: .tar.gz, .mp3) might
# slow down the syncing process.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::object
#
# node_timeout = <whatever's in the DEFAULT section or 10>
# max duration of an http request; this is for REPLICATE finalization calls and
# so should be longer than node_timeout
# http_timeout = 60
#
# attempts to kill all workers if nothing replicates for lockup_timeout seconds
# lockup_timeout = 1800
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
#
# limits how long rsync error log lines are
# 0 means to log the entire line
# rsync_error_log_line_length = 0
#
# handoffs_first and handoff_delete are options for a special case
# such as disk full in the cluster. These two options SHOULD NOT BE
# CHANGED, except in such extreme situations (e.g. disks are filled up
# or about to fill up. Anyway, DO NOT let your drives fill up).
# handoffs_first is the flag to replicate handoffs prior to canonical
# partitions. It allows you to force syncing and deleting handoffs quickly.
# If set to a True value (e.g. "True" or "1"), partitions
# that are not supposed to be on the node will be replicated first.
# handoffs_first = False
#
# handoff_delete is the number of replicas which are ensured in swift.
# If a number less than the number of replicas is set, the object-replicator
# may delete local handoffs even though not all replicas are ensured in the
# cluster. The object-replicator removes a local handoff partition directory
# after syncing the partition when the number of successful responses is
# greater than or equal to this number. By default (auto), handoff partitions
# will be removed only after they have been successfully replicated to all
# the canonical nodes.
# handoff_delete = auto
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[object-reconstructor]
# You can override the default log routing for this app here (don't use set!):
# Unless otherwise noted, each setting below has the same meaning as described
# in the [object-replicator] section, however these settings apply to the EC
# reconstructor
#
# log_name = object-reconstructor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between reconstruction passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# concurrency = 1
# stats_interval = 300
# node_timeout = 10
# http_timeout = 60
# lockup_timeout = 1800
# reclaim_age = 604800
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
# handoffs_first = False
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[object-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300
# concurrency = 1
# node_timeout = <whatever's in the DEFAULT section or 10>
# slowdown will sleep that amount between objects
# slowdown = 0.01
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[object-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Time in seconds to wait between auditor passes
# interval = 30
#
# You can set the disk chunk size that the auditor uses making it larger if
# you like for more efficient local auditing of larger objects
# disk_chunk_size = 65536
# files_per_second = 20
# concurrency = 1
# bytes_per_second = 10000000
# log_time = 3600
# zero_byte_files_per_second = 50
# recon_cache_path = /var/cache/swift

# Takes a comma separated list of ints. If set, the object auditor will
# increment a counter for every object whose size is <= the given break
# points and report the result after a full scan.
# object_size_stats =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

# The auditor will clean up old rsync tempfiles after they are "old
# enough" to delete.  You can configure the time elapsed in seconds
# before rsync tempfiles will be unlinked; the default value of
# "auto" tries to use object-replicator's rsync_timeout + 900 and falls back
# to 86400 (1 day).
# rsync_tempfile_timeout = auto

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers which should inherit from the
# python standard profiler. Currently the supported values are 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/object.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# The profile data will be dumped to local disk based on the above naming rule
# at this interval (seconds).
# dump_interval = 5.0
#
# Be careful, this option will enable the profiler to dump data into the file
# with a time stamp, which means there will be lots of files piled up in the
# directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

Object expirer configuration

Find an example object expirer configuration at etc/object-expirer.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [DEFAULT] in object-expirer.conf
Configuration option = Default value Description
log_address = /dev/log Location where syslog sends the logs to
log_custom_handlers = Comma-separated list of functions to call to setup custom log handlers.
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_max_line_length = 0 Caps the length of log lines to the value given; no limit if set to 0, the default.
log_name = swift Label used when logging
log_statsd_default_sample_rate = 1.0 Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_host = localhost If not set, the StatsD feature is disabled.
log_statsd_metric_prefix = Value will be prepended to every metric sent to the StatsD server.
log_statsd_port = 8125 Port value for the StatsD server.
log_statsd_sample_rate_factor = 1.0 Not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune log_statsd_default_sample_rate instead.
log_udp_host = If not set, the UDP receiver for syslog is disabled.
log_udp_port = 514 Port value for UDP receiver, if enabled.
swift_dir = /etc/swift Swift configuration directory
user = swift User to run as
Description of configuration options for [app-proxy-server] in object-expirer.conf
Configuration option = Default value Description
use = egg:swift#proxy Entry point of paste.deploy in the server
Description of configuration options for [filter-cache] in object-expirer.conf
Configuration option = Default value Description
use = egg:swift#memcache Entry point of paste.deploy in the server
Description of configuration options for [filter-catch_errors] in object-expirer.conf
Configuration option = Default value Description
use = egg:swift#catch_errors Entry point of paste.deploy in the server
Description of configuration options for [filter-proxy-logging] in object-expirer.conf
Configuration option = Default value Description
access_log_address = /dev/log Location where syslog sends the logs to. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_facility = LOG_LOCAL0 Syslog facility to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_headers = false If set to true, request headers are logged with each request. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_headers_only = If access_log_headers is True and access_log_headers_only is set only these headers are logged. Multiple headers can be defined as comma separated list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
access_log_level = INFO Syslog logging level to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_name = swift Label used when logging. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_default_sample_rate = 1.0 Defines the probability of sending a sample for any given event or timing measurement. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_host = localhost You can use log_statsd_* from [DEFAULT], or override them here. IPv4/IPv6 addresses and hostnames are supported. If a hostname resolves to both an IPv4 and an IPv6 address, the IPv4 address will be used.
access_log_statsd_metric_prefix = Value will be prepended to every metric sent to the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_port = 8125 Port value for the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_sample_rate_factor = 1.0 Not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune the log_statsd_default_sample_rate instead. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_udp_host = If not set, the UDP receiver for syslog is disabled. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_udp_port = 514 Port value for UDP receiver, if enabled. If not set, logging directives from [DEFAULT] without “access_” will be used.
log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS HTTP methods allowed for StatsD logging (comma-separated). Request methods not in this list will have “BAD_METHOD” for the <verb> portion of the metric.
reveal_sensitive_prefix = 16

By default, the X-Auth-Token is logged. To obscure the value, set reveal_sensitive_prefix to the number of characters to log. For example, if set to 12, only the first 12 characters of the token appear in the log. Unauthorized access to the log file won’t allow unauthorized usage of the token, yet the first 12 or so characters are unique enough that you can trace/debug token usage. Set to 0 to suppress the token completely (replaced by ‘...’ in the log).

Note

reveal_sensitive_prefix will not affect the value logged with access_log_headers=True.
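The prefix truncation described above amounts to keeping the first reveal_sensitive_prefix characters of the token and replacing the rest with ‘...’. A minimal sketch with a hypothetical helper, not the proxy-logging middleware’s actual code:

```python
def obscure_token(token: str, reveal_sensitive_prefix: int = 16) -> str:
    """Keep only the first reveal_sensitive_prefix characters of a token;
    0 suppresses the token entirely (logged as '...')."""
    if reveal_sensitive_prefix <= 0:
        return "..."
    return token[:reveal_sensitive_prefix] + "..."

print(obscure_token("AUTH_tk0123456789abcdef0123", 12))
# AUTH_tk01234...
```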

use = egg:swift#proxy_logging Entry point of paste.deploy in the server
Description of configuration options for [object-expirer] in object-expirer.conf
Configuration option = Default value Description
auto_create_account_prefix = . Prefix to use when automatically creating accounts
concurrency = 1 Level of concurrency to use to do the work; this value must be set to at least 1.
expiring_objects_account_name = expiring_objects Account name for expiring objects.
interval = 300 Minimum time for a pass to take
process = 0 Which of the parts of the work this particular process handles. It is zero-based; to use 3 processes, run them with process set to 0, 1, and 2. Can also be specified on the command line, overriding the config value.
processes = 0 How many parts to divide the work into, one part per process. 0 means a single process does all the work. Can also be specified on the command line, overriding the config value.
reclaim_age = 604800 Time elapsed in seconds before an object can be reclaimed
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
report_interval = 300 Interval in seconds between reports.
Description of configuration options for [pipeline-main] in object-expirer.conf
Configuration option = Default value Description
pipeline = catch_errors proxy-logging cache proxy-server Pipeline to use for processing operations.
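The processes/process pair divides the expirer’s work so that each process handles one disjoint part. The sketch below illustrates such a partitioning with a hypothetical hash-based assignment; Swift’s exact scheme may differ:

```python
from hashlib import md5

def assigned_to_me(name: str, processes: int, process: int) -> bool:
    """True if this (zero-based) process should handle the given work item;
    processes == 0 means a single process does all the work."""
    if processes <= 0:
        return True
    return int(md5(name.encode()).hexdigest(), 16) % processes == process

# Every item lands in exactly one of processes 0, 1, 2.
item = "expiring_objects/1700000000-AUTH_test/c/o"
owners = [p for p in range(3) if assigned_to_me(item, 3, p)]
print(owners)
```

Because the assignment is deterministic, running the expirer with processes=3 and process set to 0, 1, and 2 covers every item exactly once.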
Sample object expirer configuration file
[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are realtime, best-effort and idle. I/O niceness
# priority is a number which goes from 0 to 7. The higher the value, the lower
# the I/O priority of the process. Work only with ionice_class.
# ionice_class =
# ionice_priority =

[object-expirer]
# interval = 300
# auto_create_account_prefix = .
# expiring_objects_account_name = expiring_objects
# report_interval = 300
# concurrency is the level of concurrency to use to do the work; this value
# must be set to at least 1
# concurrency = 1
# processes is how many parts to divide the work into, one part per process
#   that will be doing the work
# processes set to 0 means that a single process will be doing all the work
# processes can also be specified on the command line and will override the
#   config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the config
#   value
# process is "zero based"; if you want to use 3 processes, you should run
#  them with process set to 0, 1, and 2
# process = 0
# The expirer will re-attempt expiring if the source object is not available
# up to reclaim_age seconds before it gives up and deletes the entry in the
# queue.
# reclaim_age = 604800
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are realtime, best-effort and idle. I/O niceness
# priority is a number which goes from 0 to 7. The higher the value, the lower
# the I/O priority of the process. Work only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port = 514
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host =
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set only
# these headers are logged. Multiple headers can be defined as comma separated
# list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to 12, only the first 12 characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# 12 or so characters is unique enough that you can trace/debug
# token usage. Set to 0 to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix = 16
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS

Container server configuration

Find an example container server configuration at etc/container-server.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [DEFAULT] in container-server.conf
Configuration option = Default value Description
allowed_sync_hosts = 127.0.0.1 The list of hosts to which container sync is allowed to send data.
backlog = 4096 Maximum number of allowed pending TCP connections
bind_ip = 0.0.0.0 IP Address for server to bind to
bind_port = 6001 Port for server to bind to
bind_timeout = 30 Seconds to attempt bind before giving up
db_preallocation = off If you don’t mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation.
devices = /srv/node Parent directory of where devices are mounted
disable_fallocate = false Disable “fast fail” fallocate checks if the underlying filesystem does not support it.
eventlet_debug = false If true, turn on debug logging for eventlet
fallocate_reserve = 0 You can set fallocate_reserve to the number of bytes you’d like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they’re out of space early.
log_address = /dev/log Location where syslog sends the logs to
log_custom_handlers = Comma-separated list of functions to call to setup custom log handlers.
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_max_line_length = 0 Caps the length of log lines to the value given; no limit if set to 0, the default.
log_name = swift Label used when logging
log_statsd_default_sample_rate = 1.0 Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_host = localhost If not set, the StatsD feature is disabled.
log_statsd_metric_prefix = Value will be prepended to every metric sent to the StatsD server.
log_statsd_port = 8125 Port value for the StatsD server.
log_statsd_sample_rate_factor = 1.0 It is not recommended to set this to a value less than 1.0; if the logging frequency is too high, tune log_statsd_default_sample_rate instead.
log_udp_host = If not set, the UDP receiver for syslog is disabled.
log_udp_port = 514 Port value for UDP receiver, if enabled.
max_clients = 1024 Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another call while processing, allowing other workers a chance to process it.
mount_check = true Whether or not to check if the devices are mounted, to prevent accidentally writing to the root device
swift_dir = /etc/swift Swift configuration directory
user = swift User to run as
workers = auto Override the number of pre-forked workers that will accept connections; auto runs one worker per CPU core. By increasing the number of workers to a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests.
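The two StatsD sampling options in this table effectively combine multiplicatively: a metric is emitted when a uniform random draw falls below log_statsd_default_sample_rate * log_statsd_sample_rate_factor. A minimal sketch of that decision (the function name is illustrative):

```python
import random

def should_emit_sample(default_sample_rate=1.0, sample_rate_factor=1.0,
                       rng=random.random):
    """Decide whether one StatsD sample is sent. With both options at
    their defaults of 1.0 every event is emitted; lowering either
    value thins the metric stream proportionally."""
    return rng() < default_sample_rate * sample_rate_factor
```

As the table notes, tuning log_statsd_default_sample_rate rather than the factor is the recommended way to reduce metric volume.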
Description of configuration options for [app-container-server] in container-server.conf
Configuration option = Default value Description
allow_versions = false Enable/Disable object versioning feature
auto_create_account_prefix = . Prefix to use when automatically creating accounts
conn_timeout = 0.5 Connection timeout to external services
node_timeout = 3 Request timeout to external services
replication_server = false If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request.
set log_address = /dev/log Location where syslog sends the logs to
set log_facility = LOG_LOCAL0 Syslog log facility
set log_level = INFO Log level
set log_name = container-server Label to use when logging
set log_requests = true Whether or not to log requests
use = egg:swift#container Entry point of paste.deploy in the server
Description of configuration options for [pipeline-main] in container-server.conf
Configuration option = Default value Description
pipeline = healthcheck recon container-server Pipeline to use for processing operations.
Description of configuration options for [container-replicator] in container-server.conf
Configuration option = Default value Description
concurrency = 8 Number of replication workers to spawn
conn_timeout = 0.5 Connection timeout to external services
interval = 30 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = container-replicator Label used when logging
max_diffs = 100 Caps how long the replicator spends trying to sync a database per pass
node_timeout = 10 Request timeout to external services
per_diff = 1000 Limit number of items to get per diff
reclaim_age = 604800 Time elapsed in seconds before an object can be reclaimed
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
rsync_compress = no Allow rsync to compress data which is transmitted to destination node during sync. However, this is applicable only when destination node is in a different region than the local one.
rsync_module = {replication_ip}::container Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples.
run_pause = 30 Time in seconds to wait between replication passes
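The {NAME} placeholders accepted by rsync_module are filled in from the node's entry in the ring; assuming the ring entry is available as a plain dict (an illustrative simplification), the substitution can be sketched with Python's str.format:

```python
def expand_rsync_module(template, node):
    """Fill {ip}, {port}, {replication_ip}, {replication_port},
    {region}, {zone}, {device} and {meta} placeholders in an
    rsync_module template from a ring node entry (a plain dict here,
    for illustration)."""
    return template.format(**node)
```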
Description of configuration options for [container-updater] in container-server.conf
Configuration option = Default value Description
account_suppression_time = 60 Seconds to suppress updating an account that has generated an error (timeout, not yet found, etc.)
concurrency = 4 Number of replication workers to spawn
conn_timeout = 0.5 Connection timeout to external services
interval = 300 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = container-updater Label used when logging
node_timeout = 3 Request timeout to external services
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
slowdown = 0.01 Time in seconds to wait between objects
Description of configuration options for [container-auditor] in container-server.conf
Configuration option = Default value Description
containers_per_second = 200 Maximum containers audited per second. Should be tuned according to individual system specs. 0 is unlimited.
interval = 1800 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = container-auditor Label used when logging
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
Description of configuration options for [container-sync] in container-server.conf
Configuration option = Default value Description
conn_timeout = 5 Connection timeout to external services
container_time = 60 Maximum amount of time to spend syncing each container
internal_client_conf_path = /etc/swift/internal-client.conf Internal client config file path
interval = 300 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = container-sync Label used when logging
request_tries = 3 Server errors from requests will be retried by default
sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888 If you need to use an HTTP proxy, set it here. Defaults to no proxy.
Description of configuration options for [filter-healthcheck] in container-server.conf
Configuration option = Default value Description
disable_path = An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE”
use = egg:swift#healthcheck Entry point of paste.deploy in the server
Description of configuration options for [filter-recon] in container-server.conf
Configuration option = Default value Description
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
use = egg:swift#recon Entry point of paste.deploy in the server
Description of configuration options for [filter-xprofile] in container-server.conf
Configuration option = Default value Description
dump_interval = 5.0 The profile data will be dumped to local disk, following the naming rule above, at this interval (seconds).
dump_timestamp = false Be careful: this option makes the profiler dump data into files with a time stamp, which means that many files will pile up in the directory.
flush_at_shutdown = false Clears the data when the WSGI server shuts down.
log_filename_prefix = /tmp/log/swift/profile/default.profile This prefix is used to combine the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like: /var/log/swift/profile/object.profile
path = /__profile__ This is the path of the URL to access the mini web UI.
profile_module = eventlet.green.profile This option enables you to switch profilers which inherit from the Python standard profiler. Currently, the supported values are ‘cProfile’, ‘eventlet.green.profile’, etc.
unwind = false Unwind the iterator of applications
use = egg:swift#xprofile Entry point of paste.deploy in the server
Sample container server configuration file
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6201
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# This is a comma separated list of hosts allowed in the X-Container-Sync-To
# field for containers. This is the old-style of using container sync. It is
# strongly recommended to use the new style of a separate
# container-sync-realms.conf -- see container-sync-realms.conf-sample
# allowed_sync_hosts = 127.0.0.1
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container
# You can override the default log routing for this app here:
# set log_name = container-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# node_timeout = 3
# conn_timeout = 0.5
# allow_versions = false
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift

[container-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::container
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[container-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300
# concurrency = 4
# node_timeout = 3
# conn_timeout = 0.5
#
# slowdown will sleep that amount between containers
# slowdown = 0.01
#
# Seconds to suppress updating an account that has generated an error
# account_suppression_time = 60
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[container-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each container at most once per interval
# interval = 1800
#
# containers_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[container-sync]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-sync
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
# You can also set this to a comma separated list of HTTP Proxies and they will
# be randomly used (simple load balancing).
# sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888
#
# Will sync each container at most once per interval
# interval = 300
#
# Maximum amount of time to spend syncing each container per pass
# container_time = 60
#
# Maximum amount of time in seconds for the connection attempt
# conn_timeout = 5
# Server errors from requests will be retried by default
# request_tries = 3
#
# Internal client config file path
# internal_client_conf_path = /etc/swift/internal-client.conf
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently the supported values are 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with an unique value like: /var/log/swift/profile/container.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
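As noted in the configuration file format overview, this sample is plain INI. Assuming such a file on disk, it can be read back with Python's standard configparser; the helper below is illustrative, not a Swift utility:

```python
import configparser

def load_bind_port(path, default=6201):
    """Read bind_port from the [DEFAULT] section of a container server
    configuration file, falling back to the given default when the
    option is commented out or absent."""
    parser = configparser.ConfigParser()
    parser.read(path)
    return parser['DEFAULT'].getint('bind_port', fallback=default)
```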

Container sync realms configuration

Find an example container sync realms configuration at etc/container-sync-realms.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [DEFAULT] in container-sync-realms.conf
Configuration option = Default value Description
mtime_check_interval = 300 The number of seconds between checking the modified time of this config file for changes and therefore reloading it.
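A daemon honoring mtime_check_interval only needs to compare the file's current modification time against the one recorded at the previous check; a minimal sketch (the helper name is illustrative):

```python
import os

def needs_reload(path, last_mtime):
    """Compare the config file's current modification time with the
    one recorded at the previous check; return (changed, mtime)."""
    mtime = os.path.getmtime(path)
    return mtime != last_mtime, mtime
```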
Description of configuration options for [realm1] in container-sync-realms.conf
Configuration option = Default value Description
cluster_clustername1 = https://host1/v1/ Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive.
cluster_clustername2 = https://host2/v1/ Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive.
key = realm1key The key is the overall cluster-to-cluster key used in combination with the external users’ key that they set on their containers’ X-Container-Sync-Key metadata header values. These keys will be used to sign each request the container sync daemon makes and used to validate each incoming container sync request.
key2 = realm1key2 The key2 is optional and is an additional key incoming requests will be checked against. This is so you can rotate keys if you wish; you move the existing key to key2 and make a new key value.
Description of configuration options for [realm2] in container-sync-realms.conf
Configuration option = Default value Description
cluster_clustername3 = https://host3/v1/ Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive.
cluster_clustername4 = https://host4/v1/ Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive.
key = realm2key The key is the overall cluster-to-cluster key used in combination with the external users’ key that they set on their containers’ X-Container-Sync-Key metadata header values. These keys will be used to sign each request the container sync daemon makes and used to validate each incoming container sync request.
key2 = realm2key2 The key2 is optional and is an additional key incoming requests will be checked against. This is so you can rotate keys if you wish; you move the existing key to key2 and make a new key value.
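Resolving a sync target of the form “realm_name/cluster_name/container_name” against the cluster_ options amounts to two case-insensitive lookups. A sketch, assuming the realm sections have already been loaded into a nested dict (an illustrative structure, not Swift's internal representation):

```python
def resolve_sync_endpoint(sync_to, realms):
    """Resolve a sync target written as realm/cluster/container (realm
    and cluster names are case-insensitive) to (endpoint, container),
    given a {realm: {cluster: endpoint}} mapping built from the
    cluster_ options of each realm section."""
    realm, cluster, container = sync_to.split('/', 2)
    return realms[realm.lower()][cluster.lower()], container
```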
Sample container sync realms configuration file
# [DEFAULT]
# The number of seconds between checking the modified time of this config file
# for changes and therefore reloading it.
# mtime_check_interval = 300


# [realm1]
# key = realm1key
# key2 = realm1key2
# cluster_clustername1 = https://host1/v1/
# cluster_clustername2 = https://host2/v1/
#
# [realm2]
# key = realm2key
# key2 = realm2key2
# cluster_clustername3 = https://host3/v1/
# cluster_clustername4 = https://host4/v1/


# Each section name is the name of a sync realm. A sync realm is a set of
# clusters that have agreed to allow container syncing with each other. Realm
# names will be considered case insensitive.
#
# The key is the overall cluster-to-cluster key used in combination with the
# external users' key that they set on their containers' X-Container-Sync-Key
# metadata header values. These keys will be used to sign each request the
# container sync daemon makes and used to validate each incoming container sync
# request.
#
# The key2 is optional and is an additional key incoming requests will be
# checked against. This is so you can rotate keys if you wish; you move the
# existing key to key2 and make a new key value.
#
# Any values in the realm section whose names begin with cluster_ will indicate
# the name and endpoint of a cluster and will be used by external users in
# their containers' X-Container-Sync-To metadata header values with the format
# "realm_name/cluster_name/container_name". Realm and cluster names are
# considered case insensitive.
#
# The endpoint is what the container sync daemon will use when sending out
# requests to that cluster. Keep in mind this endpoint must be reachable by all
# container servers, since that is where the container sync daemon runs. Note
# that the endpoint ends with /v1/ and that the container sync daemon will then
# add the account/container/obj name after that.
#
# Distribute this container-sync-realms.conf file to all your proxy servers
# and container servers.
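The signing scheme described above combines the realm (cluster-to-cluster) key with the container owner's key, and the optional key2 enables rotation by letting incoming requests validate against either key. The sketch below is purely illustrative HMAC usage, not Swift's exact message layout:

```python
import hashlib
import hmac

def sign_sync_request(realm_key, user_key, method, path, timestamp):
    """Produce a signature from the cluster-to-cluster realm key and
    the container owner's X-Container-Sync-Key value. Illustrative
    only; the container sync daemon defines its own message layout."""
    message = '\n'.join([method, path, str(timestamp), user_key])
    return hmac.new(realm_key.encode(), message.encode(),
                    hashlib.sha256).hexdigest()

def verify_sync_request(sig, user_key, method, path, timestamp,
                        key, key2=None):
    """Accept a request signed with either key or the optional key2,
    which is what makes key rotation possible."""
    for candidate in (key, key2):
        if candidate and hmac.compare_digest(
                sign_sync_request(candidate, user_key, method,
                                  path, timestamp), sig):
            return True
    return False
```

During rotation, the old key moves to key2 and a new key value is issued, so requests signed with either continue to validate.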

Container reconciler configuration

Find an example container reconciler configuration at etc/container-reconciler.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [DEFAULT] in container-reconciler.conf
Configuration option = Default value Description
log_address = /dev/log Location where syslog sends the logs to
log_custom_handlers = Comma-separated list of functions to call to setup custom log handlers.
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = swift Label used when logging
log_statsd_default_sample_rate = 1.0 Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_host = localhost If not set, the StatsD feature is disabled.
log_statsd_metric_prefix = Value will be prepended to every metric sent to the StatsD server.
log_statsd_port = 8125 Port value for the StatsD server.
log_statsd_sample_rate_factor = 1.0 It is not recommended to set this to a value less than 1.0; if the logging frequency is too high, tune log_statsd_default_sample_rate instead.
log_udp_host = If not set, the UDP receiver for syslog is disabled.
log_udp_port = 514 Port value for UDP receiver, if enabled.
swift_dir = /etc/swift Swift configuration directory
user = swift User to run as
Description of configuration options for [app-proxy-server] in container-reconciler.conf
Configuration option = Default value Description
use = egg:swift#proxy Entry point of paste.deploy in the server
Description of configuration options for [container-reconciler] in container-reconciler.conf
Configuration option = Default value Description
interval = 30 Minimum time for a pass to take
reclaim_age = 604800 Time elapsed in seconds before an object can be reclaimed
request_tries = 3 Server errors from requests will be retried by default
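The reconciler retries a misplaced-object queue entry until it has been outstanding for longer than reclaim_age seconds, then gives up and deletes the entry. That cutoff can be sketched as (the helper name is illustrative):

```python
def should_give_up(enqueued_at, now, reclaim_age=604800):
    """True once a misplaced-object queue entry has been outstanding
    for longer than reclaim_age seconds (one week by default), at
    which point the reconciler deletes the entry instead of
    retrying."""
    return now - enqueued_at > reclaim_age
```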
Description of configuration options for [filter-cache] in container-reconciler.conf
Configuration option = Default value Description
use = egg:swift#memcache Entry point of paste.deploy in the server
Description of configuration options for [filter-catch_errors] in container-reconciler.conf
Configuration option = Default value Description
use = egg:swift#catch_errors Entry point of paste.deploy in the server
Description of configuration options for [filter-proxy-logging] in container-reconciler.conf
Configuration option = Default value Description
use = egg:swift#proxy_logging Entry point of paste.deploy in the server
Description of configuration options for [pipeline-main] in container-reconciler.conf
Configuration option = Default value Description
pipeline = catch_errors proxy-logging cache proxy-server Pipeline to use for processing operations.
Sample container reconciler configuration file
[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[container-reconciler]
# The reconciler will re-attempt reconciliation if the source object is not
# available up to reclaim_age seconds before it gives up and deletes the entry
# in the queue.
# reclaim_age = 604800
# The cycle time of the daemon
# interval = 30
# Server errors from requests will be retried by default
# request_tries = 3
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options

Account server configuration

Find an example account server configuration at etc/account-server.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [DEFAULT] in account-server.conf
Configuration option = Default value Description
backlog = 4096 Maximum number of allowed pending TCP connections
bind_ip = 0.0.0.0 IP Address for server to bind to
bind_port = 6002 Port for server to bind to
bind_timeout = 30 Seconds to attempt bind before giving up
db_preallocation = off If you don’t mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation.
devices = /srv/node Parent directory of where devices are mounted
disable_fallocate = false Disable “fast fail” fallocate checks if the underlying filesystem does not support it.
eventlet_debug = false If true, turn on debug logging for eventlet
fallocate_reserve = 0 You can set fallocate_reserve to the number of bytes you’d like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they’re out of space early.
log_address = /dev/log Location where syslog sends the logs to
log_custom_handlers = Comma-separated list of functions to call to setup custom log handlers.
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_max_line_length = 0 Caps the length of log lines to the value given; no limit if set to 0, the default.
log_name = swift Label used when logging
log_statsd_default_sample_rate = 1.0 Defines the probability of sending a sample for any given event or timing measurement.
log_statsd_host = localhost If not set, the StatsD feature is disabled.
log_statsd_metric_prefix = Value will be prepended to every metric sent to the StatsD server.
log_statsd_port = 8125 Port value for the StatsD server.
log_statsd_sample_rate_factor = 1.0 Not recommended to set this to a value less than 1.0. If the frequency of logging is too high, tune log_statsd_default_sample_rate instead.
log_udp_host = If not set, the UDP receiver for syslog is disabled.
log_udp_port = 514 Port value for UDP receiver, if enabled.
max_clients = 1024 Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request can have on other requests served by the same worker. If the maximum number of clients is set to one, a given worker will not accept another request while processing, allowing other workers a chance to process it.
mount_check = true Whether or not to check if the devices are mounted to prevent accidentally writing to the root device
swift_dir = /etc/swift Swift configuration directory
user = swift User to run as
workers = auto Override the number of pre-forked worker processes that accept connections. By increasing the number of workers to a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests.
Description of configuration options for [app-account-server] in account-server.conf
Configuration option = Default value Description
auto_create_account_prefix = . Prefix to use when automatically creating accounts
replication_server = false If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request.
set log_address = /dev/log Location where syslog sends the logs to
set log_facility = LOG_LOCAL0 Syslog log facility
set log_level = INFO Log level
set log_name = account-server Label to use when logging
set log_requests = true Whether or not to log requests
use = egg:swift#account Entry point of paste.deploy in the server
Description of configuration options for [pipeline-main] in account-server.conf
Configuration option = Default value Description
pipeline = healthcheck recon account-server Pipeline to use for processing operations.
Description of configuration options for [account-replicator] in account-server.conf
Configuration option = Default value Description
concurrency = 8 Number of replication workers to spawn
conn_timeout = 0.5 Connection timeout to external services
interval = 30 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = account-replicator Label used when logging
max_diffs = 100 Caps how long the replicator spends trying to sync a database per pass
node_timeout = 10 Request timeout to external services
per_diff = 1000 Limit number of items to get per diff
reclaim_age = 604800 Time elapsed in seconds before an object can be reclaimed
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
rsync_compress = no Allow rsync to compress data which is transmitted to destination node during sync. However, this is applicable only when destination node is in a different region than the local one.
rsync_module = {replication_ip}::account Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples.
run_pause = 30 Time in seconds to wait between replication passes
Description of configuration options for [account-auditor] in account-server.conf
Configuration option = Default value Description
accounts_per_second = 200 Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited.
interval = 1800 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = account-auditor Label used when logging
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
Description of configuration options for [account-reaper] in account-server.conf
Configuration option = Default value Description
concurrency = 25 Number of replication workers to spawn
conn_timeout = 0.5 Connection timeout to external services
delay_reaping = 0 Normally, the reaper begins deleting account information for deleted accounts immediately; you can set this to delay its work, however. The value is in seconds; 2592000 = 30 days, for example.
interval = 3600 Minimum time for a pass to take
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_level = INFO Logging level
log_name = account-reaper Label used when logging
node_timeout = 10 Request timeout to external services
reap_warn_after = 2592000 If the account fails to be reaped due to a persistent error, the account reaper will log a message such as: Account <name> has not been reaped since <date>. You can search logs for this message if space is not being reclaimed after you delete account(s). This is in addition to any time requested by delay_reaping.

Description of configuration options for [filter-healthcheck] in account-server.conf
Configuration option = Default value Description
disable_path = An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE”
use = egg:swift#healthcheck Entry point of paste.deploy in the server
Description of configuration options for [filter-recon] in account-server.conf
Configuration option = Default value Description
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
use = egg:swift#recon Entry point of paste.deploy in the server
Description of configuration options for [filter-xprofile] in account-server.conf
Configuration option = Default value Description
dump_interval = 5.0 The profile data will be dumped to local disk based on the above naming rule in this interval (seconds).
dump_timestamp = false Be careful, this option will enable the profiler to dump data into the file with a time stamp which means that there will be lots of files piled up in the directory.
flush_at_shutdown = false Clears the data when the WSGI server shuts down.
log_filename_prefix = /tmp/log/swift/profile/default.profile This prefix is used to combine the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like: /var/log/swift/profile/account.profile
path = /__profile__ This is the path of the URL to access the mini web UI.
profile_module = eventlet.green.profile This option enables you to switch profilers which inherit from the Python standard profiler. Currently, the supported values include ‘cProfile’ and ‘eventlet.green.profile’.
unwind = false Unwind the iterator of applications.
use = egg:swift#xprofile Entry point of paste.deploy in the server
Sample account server configuration file
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6202
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account
# You can override the default log routing for this app here:
# set log_name = account-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server". Default is empty.
# replication_server = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =

[filter:recon]
use = egg:swift#recon
# recon_cache_path = /var/cache/swift

[account-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::account
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =

[account-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each account at most once per interval
# interval = 1800
#
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =

[account-reaper]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-reaper
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# concurrency = 25
# interval = 3600
# node_timeout = 10
# conn_timeout = 0.5
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example.
# delay_reaping = 0
#
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
#     Account <name> has not been reaped since <date>
# You can search logs for this message if space is not being reclaimed
# after you delete account(s).
# Default is 2592000 seconds (30 days). This is in addition to any time
# requested by delay_reaping.
# reap_warn_after = 2592000
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers which should inherit from the
# Python standard profiler. Currently the supported values include 'cProfile'
# and 'eventlet.green.profile'.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/account.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

Proxy server configuration

Find an example proxy server configuration at etc/proxy-server.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [app-proxy-server] in proxy-server.conf
Configuration option = Default value Description
account_autocreate = false If set to ‘true’ authorized accounts that do not yet exist within the Swift cluster will be automatically created.
allow_account_management = false Whether account PUTs and DELETEs are even callable.
auto_create_account_prefix = . Prefix to use when automatically creating accounts.
client_chunk_size = 65536 Chunk size to read from clients.
conn_timeout = 0.5 Connection timeout to external services.
deny_host_headers = Comma separated list of Host headers to which the proxy will deny requests.
error_suppression_interval = 60 Time in seconds that must elapse since the last error for a node to be considered no longer error limited.
error_suppression_limit = 10 Error count to consider a node error limited.
log_handoffs = true

Log handoff requests if handoff logging is enabled and the handoff was not expected.

We only log handoffs when we’ve pushed the handoff count further than we would normally expect, that is, beyond (request_node_count - num_primaries); when the handoff count goes higher than that, it means one of the primaries must have been skipped because of error limiting before we consumed all of our nodes_left.

max_containers_per_account = 0 If set to a positive value, trying to create a container when the account already has at least this maximum containers will result in a 403 Forbidden. Note: This is a soft limit, meaning a user might exceed the cap for recheck_account_existence before the 403s kick in.
max_containers_whitelist = Comma-separated list of account names that ignore the max_containers_per_account cap.
node_timeout = 10 Request timeout to external services.
object_chunk_size = 65536 Chunk size to read from object servers.
object_post_as_copy = true Set object_post_as_copy = false to turn on fast posts where only the metadata changes are stored anew and the original data file is kept in place. This makes for quicker posts; but since the container metadata isn’t updated in this mode, features like container sync won’t be able to sync posts.
post_quorum_timeout = 0.5 How long to wait for requests to finish after a quorum has been established.
put_queue_depth = 10 Depth of the proxy put queue.
read_affinity = r1z1=100, r1z2=200, r2=300

Which backend servers to prefer on reads. Format is r<N> for region N or r<N>z<M> for region N, zone M. The value after the equals is the priority; lower numbers are higher priority.

Example: first read from region 1 zone 1, then region 1 zone 2, then anything in region 2, then everything else: read_affinity = r1z1=100, r1z2=200, r2=300

Default is empty, meaning no preference.

recheck_account_existence = 60 Cache timeout in seconds to send to memcached for account existence.
recheck_container_existence = 60 Cache timeout in seconds to send to memcached for container existence.
recoverable_node_timeout = node_timeout Request timeout to external services for requests that, on failure, can be recovered from, for example an object GET.
request_node_count = 2 * replicas Set to the number of nodes to contact for a normal request. You can use ‘* replicas’ at the end to have it use the number given times the number of replicas for the ring being used for the request.
set log_address = /dev/log Location where syslog sends the logs to.
set log_facility = LOG_LOCAL0 Syslog log facility.
set log_level = INFO Log level.
set log_name = proxy-server Label to use when logging.
sorting_method = shuffle

Storage nodes can be chosen at random (shuffle), by using timing measurements (timing), or by using an explicit match (affinity). Using timing measurements may allow for lower overall latency, while using affinity allows for finer control. In both the timing and affinity cases, equally-sorting nodes are still randomly chosen to spread load.

The valid values for sorting_method are “affinity”, “shuffle”, or “timing”.

swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control These are the headers whose values will only be shown to the list of swift_owners. The exact definition of a swift_owner is up to the auth system in use, but usually indicates administrative responsibilities.
timing_expiry = 300 If the “timing” sorting_method is used, the timings will only be valid for the number of seconds configured by timing_expiry.
use = egg:swift#proxy Entry point of paste.deploy in the server.
write_affinity = r1, r2 This setting lets you trade data distribution for throughput. It makes the proxy server prefer local back-end servers for object PUT requests over non-local ones. Note that only object PUT requests are affected by the write_affinity setting; POST, GET, HEAD, DELETE, OPTIONS, and account/container PUT requests are not affected. The format is r<N> for region N or r<N>z<M> for region N, zone M. If this is set, then when handling an object PUT request, some number (see the write_affinity_node_count setting) of local backend servers will be tried before any nonlocal ones. Example: try to write to regions 1 and 2 before writing to any other nodes: write_affinity = r1, r2
write_affinity_node_count = 2 * replicas This setting is only useful in conjunction with write_affinity; it governs how many local object servers will be tried before falling back to non-local ones. You can use ‘* replicas’ at the end to have it use the number given times the number of replicas for the ring being used for the request: write_affinity_node_count = 2 * replicas
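The r&lt;N&gt; / r&lt;N&gt;z&lt;M&gt; priority format used by read_affinity can be illustrated with a small parser. This is a sketch for illustration only, not Swift's actual implementation; the function name parse_read_affinity is invented here.

```python
# Illustrative sketch (not Swift's real parser): turn a read_affinity
# string such as "r1z1=100, r1z2=200, r2=300" into (region, zone, priority)
# rules, sorted so lower priority numbers are preferred first.
import re

def parse_read_affinity(value):
    """Parse 'r<N>[z<M>]=<priority>' terms; zone is None for region-wide rules."""
    rules = []
    for term in value.split(','):
        term = term.strip()
        if not term:
            continue
        match = re.match(r'^r(\d+)(?:z(\d+))?=(\d+)$', term)
        if match is None:
            raise ValueError('bad read_affinity term: %r' % term)
        region = int(match.group(1))
        zone = int(match.group(2)) if match.group(2) else None
        priority = int(match.group(3))
        rules.append((region, zone, priority))
    # Lower priority number means "prefer this more".
    return sorted(rules, key=lambda rule: rule[2])

print(parse_read_affinity("r1z1=100, r1z2=200, r2=300"))
# [(1, 1, 100), (1, 2, 200), (2, None, 300)]
```

With the example value from the table, region 1 zone 1 sorts first, then region 1 zone 2, then anything in region 2.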
Description of configuration options for [pipeline-main] in proxy-server.conf
Configuration option = Default value Description
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server Pipeline to use for processing operations.
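The log_handoffs threshold described above can be made concrete with a little arithmetic. The values below are illustrative assumptions (3 replicas is a common deployment choice, not a requirement):

```python
# Illustrative arithmetic for the log_handoffs threshold described above.
# Assumes a ring with 3 replicas and the default request_node_count of
# "2 * replicas"; these values are examples only.
replicas = 3                        # primary nodes per object
request_node_count = 2 * replicas   # nodes contacted for a normal request
# Handoffs beyond this count mean an error-limited primary was skipped.
expected_handoffs = request_node_count - replicas
print(expected_handoffs)  # 3
```

So in this setup, the proxy only logs a handoff once more than three handoff nodes have been used for a single request.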
Description of configuration options for [filter-account-quotas] in proxy-server.conf
Configuration option = Default value Description
use = egg:swift#account_quotas Entry point of paste.deploy in the server
Description of configuration options for [filter-authtoken] in proxy-server.conf
Configuration option = Default value Description
auth_plugin = password Authentication module to use.
auth_uri = http://keystonehost:5000 auth_uri should point to a Keystone service from which users may retrieve tokens. This value is used in the WWW-Authenticate header that auth_token sends with any denial response.
auth_url = http://keystonehost:35357 auth_url points to the Keystone Admin service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens. It is not necessary to append any Keystone API version number to this URI.
cache = swift.cache cache is set to swift.cache. This means that the middleware will get the Swift memcache from the request environment.
delay_auth_decision = False delay_auth_decision defaults to False, but leaving it as false will prevent other auth systems, staticweb, tempurl, formpost, and ACLs from working. This value must be explicitly set to True.
include_service_catalog = False include_service_catalog defaults to True if not set. This means that when validating a token, the service catalog is retrieved and stored in the X-Service-Catalog header. Since Swift does not use the X-Service-Catalog header, there is no point in getting the service catalog. We recommend you set include_service_catalog to False.
password = password Password for service user.
paste.filter_factory = keystonemiddleware.auth_token:filter_factory Entry point of paste.filter_factory in the server.
project_domain_id = default Service project domain.
project_name = service Service project name.
user_domain_id = default Service user domain.
username = swift Service user name.
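Assembled from the [filter:authtoken] options in the table above, a complete section might look like the following sketch. The host names and credentials are placeholders that must be replaced for a real deployment:

```ini
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# Public and admin Identity endpoints (placeholder host)
auth_uri = http://keystonehost:5000
auth_url = http://keystonehost:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = password
# Required for staticweb, tempurl, formpost, and ACLs to work
delay_auth_decision = True
# Use the Swift memcache from the request environment
cache = swift.cache
# Swift does not use the service catalog
include_service_catalog = False
```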
Description of configuration options for [filter-cache] in proxy-server.conf
Configuration option = Default value Description
memcache_max_connections = 2 Max number of connections to each memcached server per worker
memcache_serialization_support = 2 Sets how memcache values are serialized and deserialized
memcache_servers = 127.0.0.1:11211 Comma-separated list of memcached servers ip:port
set log_address = /dev/log Location where syslog sends the logs to
set log_facility = LOG_LOCAL0 Syslog log facility
set log_headers = false If True, log headers in each request
set log_level = INFO Log level
set log_name = cache Label to use when logging
use = egg:swift#memcache Entry point of paste.deploy in the server
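Putting the [filter-cache] options above together, a section spreading cache entries across more than one memcached node might look like this (addresses are illustrative):

```ini
[filter:cache]
use = egg:swift#memcache
# Illustrative: two memcached nodes instead of the single local default
memcache_servers = 10.0.0.1:11211,10.0.0.2:11211
memcache_max_connections = 2
```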
Description of configuration options for [filter-catch_errors] in proxy-server.conf
Configuration option = Default value Description
set log_address = /dev/log Location where syslog sends the logs to
set log_facility = LOG_LOCAL0 Syslog log facility
set log_headers = false If True, log headers in each request
set log_level = INFO Log level
set log_name = catch_errors Label to use when logging
use = egg:swift#catch_errors Entry point of paste.deploy in the server
Description of configuration options for [filter-container_sync] in proxy-server.conf
Configuration option = Default value Description
allow_full_urls = true Set this to false if you want to disallow any full URL values to be set for any new X-Container-Sync-To headers. This will keep any new full URLs from coming in, but won’t change any existing values already in the cluster. Updating those will have to be done manually, as knowing what the true realm endpoint should be cannot always be guessed.
current = //REALM/CLUSTER Set this to specify this cluster //realm/cluster as “current” in /info.
use = egg:swift#container_sync Entry point of paste.deploy in the server.
Description of configuration options for [filter-dlo] in proxy-server.conf
Configuration option = Default value Description
max_get_time = 86400 Time limit on GET requests (seconds).
rate_limit_after_segment = 10 Rate limit the download of large object segments after this segment is downloaded.
rate_limit_segments_per_sec = 1 Rate limit large object downloads at this rate.
use = egg:swift#dlo Entry point of paste.deploy in the server.
Description of configuration options for [filter-versioned_writes] in proxy-server.conf
Configuration option = Default value Description
allow_versioned_writes = false

Enables using versioned writes middleware and exposing configuration settings via HTTP GET /info.

Warning

Setting this option bypasses the allow_versions option in the container configuration file, which will be eventually deprecated. For more details, see Object Versioning.

use = egg:swift#versioned_writes Entry point of paste.deploy in the server.
Description of configuration options for [filter-gatekeeper] in proxy-server.conf
Configuration option = Default value Description
set log_address = /dev/log Location where syslog sends the logs to
set log_facility = LOG_LOCAL0 Syslog log facility
set log_headers = false If True, log headers in each request
set log_level = INFO Log level
set log_name = gatekeeper Label to use when logging
use = egg:swift#gatekeeper Entry point of paste.deploy in the server
Description of configuration options for [filter-healthcheck] in proxy-server.conf
Configuration option = Default value Description
disable_path = An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE”.
use = egg:swift#healthcheck Entry point of paste.deploy in the server.
Description of configuration options for [filter-keystoneauth] in proxy-server.conf
Configuration option = Default value Description
allow_names_in_acls = true The backwards compatible behavior can be disabled by setting this option to False.
allow_overrides = true This option allows middleware higher in the WSGI pipeline to override auth processing, useful for middleware such as tempurl and formpost. If you know you are not going to use such middleware and you want a bit of extra security, you can set this to False.
default_domain_id = default Name of the default domain. It is identified by its UUID, which by default has the value “default”.
is_admin = false If this option is set to True, a user whose username is the same as the project name, and who has any role in the project, is given access rights elevated to be the same as if the user had one of the operator_roles. Note that the condition compares names rather than UUIDs. This option is deprecated and is False by default.
operator_roles = admin, swiftoperator Roles that allow a user to manage a tenant, create containers, or give ACLs to others. This parameter may be prefixed with an appropriate prefix.
reseller_admin_role = ResellerAdmin The reseller admin role gives the ability to create and delete accounts.
reseller_prefix = AUTH The naming scope for the auth service.
service_roles = When present, this option requires that the X-Service-Token header supplies a token from a user who has a role listed in service_roles. This parameter may be prefixed with an appropriate prefix.
use = egg:swift#keystoneauth Entry point of paste.deploy in the server.
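As described above, keystoneauth maps a Keystone project to a Swift account by placing reseller_prefix before the project ID; except for a blank prefix, an underscore is appended to the prefix unless already present. A minimal sketch of that naming rule (the helper name is illustrative, not keystoneauth's own code):

```python
def account_path(reseller_prefix, project_id):
    """Build the storage path for a Keystone project.

    An underscore is appended to the prefix unless it is already
    present or the prefix is blank. Illustrative sketch only.
    """
    prefix = reseller_prefix
    if prefix and not prefix.endswith('_'):
        prefix += '_'
    return '/v1/%s%s' % (prefix, project_id)
```

For example, with prefix AUTH and project 12345678, the account path is /v1/AUTH_12345678.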
Description of configuration options for [filter-list-endpoints] in proxy-server.conf
Configuration option = Default value Description
list_endpoints_path = /endpoints/ Path to list endpoints for an object, account or container.
use = egg:swift#list_endpoints Entry point of paste.deploy in the server.
Description of configuration options for [filter-proxy-logging] in proxy-server.conf
Configuration option = Default value Description
access_log_address = /dev/log Location where syslog sends the logs to. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_facility = LOG_LOCAL0 Syslog facility to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_headers = false If True, log headers in each request. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_headers_only = If access_log_headers is True and access_log_headers_only is set only these headers are logged. Multiple headers can be defined as comma separated list like this: access_log_headers_only = Host, X-Object-Meta-Mtime.
access_log_level = INFO Syslog logging level to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_name = swift Label used when logging. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_default_sample_rate = 1.0 Defines the probability of sending a sample for any given event or timing measurement. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_host = localhost You can use log_statsd_* from [DEFAULT], or override them here. StatsD server. IPv4/IPv6 addresses and hostnames are supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4 address will be used.
access_log_statsd_metric_prefix = Value will be prepended to every metric sent to the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_port = 8125 Port value for the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_statsd_sample_rate_factor = 1.0 Not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune log_statsd_default_sample_rate instead. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_udp_host = If not set, the UDP receiver for syslog is disabled. If not set, logging directives from [DEFAULT] without “access_” will be used.
access_log_udp_port = 514 Port value for UDP receiver, if enabled. If not set, logging directives from [DEFAULT] without “access_” will be used.
log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS HTTP methods allowed for StatsD logging (comma-separated). Request methods not in this list will have “BAD_METHOD” for the <verb> portion of the metric.
reveal_sensitive_prefix = 16

The X-Auth-Token is sensitive data. If it is revealed to an unauthorized person, they can make requests against the account until the token expires. Set reveal_sensitive_prefix to the number of characters of the token to log. For example, with reveal_sensitive_prefix = 12, only the first 12 characters of the token are logged. Set it to 0 to remove the token completely.

Note

reveal_sensitive_prefix will not affect the value logged with access_log_headers=True.

use = egg:swift#proxy_logging Entry point of paste.deploy in the server.
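The reveal_sensitive_prefix truncation described above can be sketched as follows; the helper name is illustrative and this is not the exact proxy-logging implementation:

```python
def obscure_token(token, reveal_sensitive_prefix):
    """Keep only the first N characters of a logged X-Auth-Token.

    With reveal_sensitive_prefix = 0 the token is suppressed
    entirely and only '...' appears in the log. Illustrative sketch.
    """
    if reveal_sensitive_prefix >= len(token):
        return token
    return token[:reveal_sensitive_prefix] + '...'
```

The first few characters are usually unique enough to trace token usage in logs without exposing a usable token.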
Description of configuration options for [filter-tempauth] in proxy-server.conf
Configuration option = Default value Description
allow_overrides = true This option allows middleware higher in the WSGI pipeline to override auth processing, useful for middleware such as tempurl and formpost. If you know you are not going to use such middleware and you want a bit of extra security, you can set this to False.
auth_prefix = /auth/ The HTTP request path prefix for the auth service. Swift itself reserves anything beginning with the letter v.
require_group = The require_group parameter names a group that must be presented by either X-Auth-Token or X-Service-Token. Usually this parameter is used only with multiple reseller prefixes (for example, SERVICE_require_group=blah). By default, no group is needed. Do not use .admin.
reseller_prefix = AUTH The naming scope for the auth service.
set log_address = /dev/log Location where syslog sends the logs to.
set log_facility = LOG_LOCAL0 Syslog log facility.
set log_headers = false If True, log headers in each request.
set log_level = INFO Log level.
set log_name = tempauth Label to use when logging.
storage_url_scheme = default Scheme to return with storage URLs: http, https, or default (chooses based on what the server is running as). This can be useful with an SSL load balancer in front of a non-SSL server.
token_life = 86400 The number of seconds a token is valid.
use = egg:swift#tempauth Entry point of paste.deploy in the server.
user_<account>_<user> = <key> [group] [group] [...] [storage_url]

List of all the accounts and users you want.

The following are example entries required for running the tests:

  • user_admin_admin = admin .admin .reseller_admin
  • user_test2_tester2 = testing2 .admin
  • user_test5_tester5 = testing5 service
  • user_test_tester = testing .admin
  • user_test_tester3 = testing3
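Each tempauth entry above follows the pattern user_<account>_<user> = <key> [group] [group] [...]; a small sketch of how such a line decomposes (the parser is illustrative, not tempauth's own code):

```python
def parse_tempauth_user(option, value):
    """Split a tempauth 'user_<account>_<user>' option into parts.

    Returns (account, user, key, groups). Assumes no underscores in
    the account or user names; names containing underscores need the
    base64-encoded user64_ form instead.
    """
    _, account, user = option.split('_', 2)
    parts = value.split()
    return account, user, parts[0], parts[1:]
```

For example, user_test5_tester5 = testing5 service yields account test5, user tester5, key testing5, and the single group service.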
Description of configuration options for [filter-xprofile] in proxy-server.conf
Configuration option = Default value Description
dump_interval = 5.0 The profile data will be dumped to local disk based on the naming rule above in this interval (seconds).
dump_timestamp = false Be careful: this option makes the profiler dump data into files with a time stamp, which means that many files will pile up in the directory.
flush_at_shutdown = false Clears the data when the WSGI server shuts down.
log_filename_prefix = /tmp/log/swift/profile/default.profile This prefix is combined with the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write to this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like: /var/log/swift/profile/account.profile.
path = /__profile__ This is the path of the URL to access the mini web UI.
profile_module = eventlet.green.profile This option enables you to switch to a profiler that inherits from the Python standard profiler. Supported values include ‘cProfile’ and ‘eventlet.green.profile’.
unwind = false Unwind the iterator of applications.
use = egg:swift#xprofile Entry point of paste.deploy in the server.
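The container_ratelimit_<size> options in the sample proxy configuration below interpolate write limits linearly between the configured container sizes (for example, with a limit of 100 at size 0 and 50 at size 10, a container of size 5 gets a rate of 75). A sketch of that interpolation, with an illustrative helper name rather than swift's own ratelimit code:

```python
def container_ratelimit(size, limits):
    """Linearly interpolate a write rate for a container of `size`.

    `limits` maps configured container sizes to rates, e.g. the
    container_ratelimit_0/_10/_50 values from the sample file.
    Sizes beyond the largest configured point keep the last rate
    (an assumption of this sketch).
    """
    points = sorted(limits.items())
    if size <= points[0][0]:
        return points[0][1]
    for (lo, lo_rate), (hi, hi_rate) in zip(points, points[1:]):
        if size <= hi:
            frac = (size - lo) / float(hi - lo)
            return lo_rate + frac * (hi_rate - lo_rate)
    return points[-1][1]
```

With the sample values {0: 100, 10: 50, 50: 20}, a container of size 5 is limited to 75 writes per second and a container of size 30 to 35.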
Sample proxy server configuration file
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 8080
# bind_timeout = 30
# backlog = 4096
# swift_dir = /etc/swift
# user = swift

# Enables exposing configuration settings via HTTP GET /info.
# expose_info = true

# Key to use for admin calls that are HMAC signed.  Default is empty,
# which will disable admin calls to /info.
# admin_key = secret_admin_key
#
# Allows you to withhold sections from showing up in the public calls
# to /info.  You can withhold subsections by separating the dict level with a
# ".".  The following would cause the sections 'container_quotas' and 'tempurl'
# to not be listed, and the key max_failed_deletes would be removed from
# bulk_delete.  Default value is 'swift.valid_api_versions' which allows all
# registered features to be listed via HTTP GET /info except
# swift.valid_api_versions information
# disallowed_sections = swift.valid_api_versions, container_quotas, tempurl

# Use an integer to override the number of pre-forked processes that will
# accept connections.  Should default to the number of effective cpu
# cores in the system.  It's worth noting that individual workers will
# use many eventlet co-routines to service multiple concurrent requests.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# Set the following two lines to enable SSL. This is for testing only.
# cert_file = /etc/swift/proxy.crt
# key_file = /etc/swift/proxy.key
#
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_headers = false
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# This optional suffix (default is empty) is appended to the swift transaction
# id, allowing one to easily figure out which cluster an X-Trans-Id belongs to.
# This is very useful when one is managing more than one swift cluster.
# trans_id_suffix =
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# Use a comma separated list of full url (http://foo.bar:1234,https://foo.bar)
# cors_allow_origin =
# strict_cors_mode = True
#
# client_timeout = 60
# eventlet_debug = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =

[pipeline:main]
# This sample pipeline uses tempauth and is used for SAIO dev work and
# testing. See below for a pipeline using keystone.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

# The following pipeline shows keystone integration. Comment out the one
# above and uncomment this one. Additional steps for integrating keystone are
# covered further below in the filter sections for authtoken and keystoneauth.
#pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystoneauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
# You can override the default log routing for this app here:
# set log_name = proxy-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_address = /dev/log
#
# log_handoffs = true
# recheck_account_existence = 60
# recheck_container_existence = 60
# object_chunk_size = 65536
# client_chunk_size = 65536
#
# How long the proxy server will wait on responses from the a/c/o servers.
# node_timeout = 10
#
# How long the proxy server will wait for an initial response and to read a
# chunk of data from the object servers while serving GET / HEAD requests.
# Timeouts from these requests can be recovered from so setting this to
# something lower than node_timeout would provide quicker error recovery
# while allowing for a longer timeout for non-recoverable requests (PUTs).
# Defaults to node_timeout; should be overridden if node_timeout is set to a
# high number to prevent client timeouts from firing before the proxy server
# has a chance to retry.
# recoverable_node_timeout = node_timeout
#
# conn_timeout = 0.5
#
# How long to wait for requests to finish after a quorum has been established.
# post_quorum_timeout = 0.5
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval = 60
#
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit = 10
#
# If set to 'true' any authorized user may create and delete accounts; if
# 'false' no one, even authorized, can.
# allow_account_management = false
#
# If set to 'true' authorized accounts that do not yet exist within the Swift
# cluster will be automatically created.
# account_autocreate = false
#
# If set to a positive value, trying to create a container when the account
# already has at least this many containers will result in a 403 Forbidden.
# Note: This is a soft limit, meaning a user might exceed the cap for
# recheck_account_existence before the 403s kick in.
# max_containers_per_account = 0
#
# This is a comma separated list of account hashes that ignore the
# max_containers_per_account cap.
# max_containers_whitelist =
#
# Comma separated list of Host headers to which the proxy will deny requests.
# deny_host_headers =
#
# Prefix used when automatically creating accounts.
# auto_create_account_prefix = .
#
# Depth of the proxy put queue.
# put_queue_depth = 10
#
# Storage nodes can be chosen at random (shuffle), by using timing
# measurements (timing), or by using an explicit match (affinity).
# Using timing measurements may allow for lower overall latency, while
# using affinity allows for finer control. In both the timing and
# affinity cases, equally-sorting nodes are still randomly chosen to
# spread load.
# The valid values for sorting_method are "affinity", "shuffle", or "timing".
# sorting_method = shuffle
#
# If the "timing" sorting_method is used, the timings will only be valid for
# the number of seconds configured by timing_expiry.
# timing_expiry = 300
#
# By default on a GET/HEAD swift will connect to a storage node one at a time
# in a single thread. There are smarts in the order they are hit however. If you
# turn on concurrent_gets below, then replica count threads will be used.
# With addition of the concurrency_timeout option this will allow swift to send
# out GET/HEAD requests to the storage nodes concurrently and answer with the
# first to respond. With an EC policy the parameter only affects HEAD requests.
# concurrent_gets = off
#
# This parameter controls how long to wait before firing off the next
# concurrent_get thread. A value of 0 would be fully concurrent, any other
# number will stagger the firing of the threads. This number should be
# between 0 and node_timeout. The default is whatever you set for the
# conn_timeout parameter.
# concurrency_timeout = 0.5
#
# Set to the number of nodes to contact for a normal request. You can use
# '* replicas' at the end to have it use the number given times the number of
# replicas for the ring being used for the request.
# request_node_count = 2 * replicas
#
# Which backend servers to prefer on reads. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. The value after the equals is
# the priority; lower numbers are higher priority.
#
# Example: first read from region 1 zone 1, then region 1 zone 2, then
# anything in region 2, then everything else:
# read_affinity = r1z1=100, r1z2=200, r2=300
# Default is empty, meaning no preference.
# read_affinity =
#
# Which backend servers to prefer on writes. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. If this is set, then when
# handling an object PUT request, some number (see setting
# write_affinity_node_count) of local backend servers will be tried
# before any nonlocal ones.
#
# Example: try to write to regions 1 and 2 before writing to any other
# nodes:
# write_affinity = r1, r2
# Default is empty, meaning no preference.
# write_affinity =
#
# The number of local (as governed by the write_affinity setting)
# nodes to attempt to contact first, before any non-local ones. You
# can use '* replicas' at the end to have it use the number given
# times the number of replicas for the ring being used for the
# request.
# write_affinity_node_count = 2 * replicas
#
# These are the headers whose values will only be shown to swift_owners. The
# exact definition of a swift_owner is up to the auth system in use, but
# usually indicates administrative responsibilities.
# swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =

[filter:tempauth]
use = egg:swift#tempauth
# You can override the default log routing for this filter here:
# set log_name = tempauth
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# The reseller prefix will verify a token begins with this prefix before even
# attempting to validate it. Also, with authorization, only Swift storage
# accounts with this prefix will be authorized by this middleware. Useful if
# multiple auth systems are in use for one Swift cluster.
# The reseller_prefix may contain a comma separated list of items. The first
# item is used for the token as mentioned above. If second and subsequent
# items exist, the middleware will handle authorization for an account with
# that prefix. For example, for prefixes "AUTH, SERVICE", a path of
# /v1/SERVICE_account is handled the same as /v1/AUTH_account. If an empty
# (blank) reseller prefix is required, it must be first in the list. Two
# single quote characters indicate an empty (blank) reseller prefix.
# reseller_prefix = AUTH

#
# The require_group parameter names a group that must be presented by
# either X-Auth-Token or X-Service-Token. Usually this parameter is
# used only with multiple reseller prefixes (e.g., SERVICE_require_group=blah).
# By default, no group is needed. Do not use .admin.
# require_group =

# The auth prefix will cause requests beginning with this prefix to be routed
# to the auth subsystem, for granting tokens, etc.
# auth_prefix = /auth/
# token_life = 86400
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# This specifies what scheme to return with storage urls:
# http, https, or default (chooses based on what the server is running as)
# This can be useful with an SSL load balancer in front of a non-SSL server.
# storage_url_scheme = default
#
# Lastly, you need to list all the accounts/users you want here. The format is:
#   user_<account>_<user> = <key> [group] [group] [...] [storage_url]
# or if you want underscores in <account> or <user>, you can base64 encode them
# (with no equal signs) and use this format:
#   user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]
# There are special groups of:
#   .reseller_admin = can do anything to any account for this auth
#   .admin = can do anything within the account
# If neither of these groups are specified, the user can only access containers
# that have been explicitly allowed for them by a .admin or .reseller_admin.
# The trailing optional storage_url allows you to specify an alternate url to
# hand back to the user upon authentication. If not specified, this defaults to
# $HOST/v1/<reseller_prefix>_<account> where $HOST will do its best to resolve
# to what the requester would need to use to reach this host.
# Here are example entries, required for running the tests:
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service

# To enable Keystone authentication you need to configure the auth token
# middleware first. An example is shown below; please refer to the
# keystone documentation for details about the different settings.
#
# You'll also need to have the keystoneauth middleware enabled and have it in
# your main pipeline, as show in the sample pipeline at the top of this file.
#
# The following parameters are known to work with keystonemiddleware v2.3.0
# (above v2.0.0), but checking the latest information in the documentation
# page[1] is recommended.
# 1. http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html#configuration
#
# [filter:authtoken]
# paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# auth_uri = http://keystonehost:5000
# auth_url = http://keystonehost:35357
# auth_plugin = password
# project_domain_id = default
# user_domain_id = default
# project_name = service
# username = swift
# password = password
#
# delay_auth_decision defaults to False, but leaving it as false will
# prevent other auth systems, staticweb, tempurl, formpost, and ACLs from
# working. This value must be explicitly set to True.
# delay_auth_decision = False
#
# cache = swift.cache
# include_service_catalog = False
#
# [filter:keystoneauth]
# use = egg:swift#keystoneauth
# The reseller_prefix option lists account namespaces that this middleware is
# responsible for. The prefix is placed before the Keystone project id.
# For example, for project 12345678, and prefix AUTH, the account is
# named AUTH_12345678 (i.e., path is /v1/AUTH_12345678/...).
# Several prefixes are allowed by specifying a comma-separated list
# as in: "reseller_prefix = AUTH, SERVICE". The empty string indicates a
# single blank/empty prefix. If an empty prefix is required in a list of
# prefixes, a value of '' (two single quote characters) indicates a
# blank/empty prefix. Except for the blank/empty prefix, an underscore ('_')
# character is appended to the value unless already present.
# reseller_prefix = AUTH
#
# The user must have at least one role named by operator_roles on a
# project in order to create, delete and modify containers and objects
# and to set and read privileged headers such as ACLs.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_operator_roles applies to the /v1/SERVICE_<project>
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
# operator_roles = admin, swiftoperator
#
# The reseller admin role has the ability to create and delete accounts
# reseller_admin_role = ResellerAdmin
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# If the service_roles parameter is present, an X-Service-Token must be
# present in the request that when validated, grants at least one role listed
# in the parameter. The X-Service-Token may be scoped to any project.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_service_roles applies to the /v1/SERVICE_<project>
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
# By default, no service_roles are required.
# service_roles =
#
# For backwards compatibility, keystoneauth will match names in cross-tenant
# access control lists (ACLs) when both the requesting user and the tenant
# are in the default domain i.e the domain to which existing tenants are
# migrated. The default_domain_id value configured here should be the same as
# the value used during migration of tenants to keystone domains.
# default_domain_id = default
#
# For a new installation, or an installation in which keystone projects may
# move between domains, you should disable backwards compatible name matching
# in ACLs by setting allow_names_in_acls to false:
# allow_names_in_acls = true

[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
# This facility may be used to temporarily remove a Swift node from a load
# balancer pool during maintenance or upgrade (remove the file to allow the
# node back into the load balancer pool).
# disable_path =

[filter:cache]
use = egg:swift#memcache
# You can override the default log routing for this filter here:
# set log_name = cache
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# If not set here, the value for memcache_servers will be read from
# memcache.conf (see memcache.conf-sample) or lacking that file, it will
# default to the value below. You can specify multiple servers separated with
# commas, as in: 10.1.2.3:11211,10.1.2.4:11211 (IPv6 addresses must
# follow rfc3986 section-3.2.2, i.e. [::1]:11211)
# memcache_servers = 127.0.0.1:11211
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
# If not set here, the value for memcache_serialization_support will be read
# from /etc/swift/memcache.conf (see memcache.conf-sample).
# To avoid an instant full cache flush, existing installations should
# upgrade with 0, then set to 1 and reload, then after some time (24 hours)
# set to 2 and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support = 2
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections = 2
#
# More options documented in memcache.conf-sample

[filter:ratelimit]
use = egg:swift#ratelimit
# You can override the default log routing for this filter here:
# set log_name = ratelimit
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# clock_accuracy should represent how accurate the proxy servers' system clocks
# are with each other. 1000 means that all the proxies' clocks are accurate to
# each other within 1 millisecond.  No ratelimit should be higher than the
# clock accuracy.
# clock_accuracy = 1000
#
# max_sleep_time_seconds = 60
#
# log_sleep_time_seconds of 0 means disabled
# log_sleep_time_seconds = 0
#
# allows for slow rates (e.g. running up to 5 seconds behind) to catch up.
# rate_buffer_seconds = 5
#
# account_ratelimit of 0 means disabled
# account_ratelimit = 0

# DEPRECATED- these will continue to work but will be replaced
# by the X-Account-Sysmeta-Global-Write-Ratelimit flag.
# Please see ratelimiting docs for details.
# these are comma separated lists of account names
# account_whitelist = a,b
# account_blacklist = c,d

# with container_ratelimit_x = r
# for containers of size x limit write requests per second to r.  The container
# rate will be linearly interpolated from the values given. With the values
# below, a container of size 5 will get a rate of 75.
# container_ratelimit_0 = 100
# container_ratelimit_10 = 50
# container_ratelimit_50 = 20

# Similarly to the above container-level write limits, the following will limit
# container GET (listing) requests.
# container_listing_ratelimit_0 = 100
# container_listing_ratelimit_10 = 50
# container_listing_ratelimit_50 = 20

[filter:domain_remap]
use = egg:swift#domain_remap
# You can override the default log routing for this filter here:
# set log_name = domain_remap
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# storage_domain = example.com
# path_root = v1

# Browsers can convert a host header to lowercase, so check that reseller
# prefix on the account is the correct case. This is done by comparing the
# items in the reseller_prefixes config option to the found prefix. If they
# match except for case, the item from reseller_prefixes will be used
# instead of the found reseller prefix. When none match, the default reseller
# prefix is used. When no default reseller prefix is configured, any request
# with an account prefix not in that list will be ignored by this middleware.
# reseller_prefixes = AUTH
# default_reseller_prefix =

[filter:catch_errors]
use = egg:swift#catch_errors
# You can override the default log routing for this filter here:
# set log_name = catch_errors
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

[filter:cname_lookup]
# Note: this middleware requires python-dnspython
use = egg:swift#cname_lookup
# You can override the default log routing for this filter here:
# set log_name = cname_lookup
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# Specify the storage_domain that match your cloud, multiple domains
# can be specified separated by a comma
# storage_domain = example.com
#
# lookup_depth = 1

# Note: Put staticweb just after your auth filter(s) in the pipeline
[filter:staticweb]
use = egg:swift#staticweb
# You can override the default log routing for this filter here:
# set log_name = staticweb
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

# Note: Put tempurl before dlo, slo and your auth filter(s) in the pipeline
[filter:tempurl]
use = egg:swift#tempurl
# The methods allowed with Temp URLs.
# methods = GET HEAD PUT POST DELETE
#
# The headers to remove from incoming requests. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. incoming_allow_headers is a list of exceptions to these
# removals.
# incoming_remove_headers = x-timestamp
#
# The headers allowed as exceptions to incoming_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# incoming_allow_headers =
#
# The headers to remove from outgoing responses. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. outgoing_allow_headers is a list of exceptions to these
# removals.
# outgoing_remove_headers = x-object-meta-*
#
# The headers allowed as exceptions to outgoing_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# outgoing_allow_headers = x-object-meta-public-*

# Note: Put formpost just before your auth filter(s) in the pipeline
[filter:formpost]
use = egg:swift#formpost

# Note: Just needs to be placed before the proxy-server in the pipeline.
[filter:name_check]
use = egg:swift#name_check
# forbidden_chars = '"`<>
# maximum_length = 255
# forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$

[filter:list-endpoints]
use = egg:swift#list_endpoints
# list_endpoints_path = /endpoints/

[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port = 514
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host =
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set, only
# these headers are logged. Multiple headers can be defined as a comma-separated
# list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to 12, only the first 12 characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# 12 or so characters are unique enough that you can trace/debug
# token usage. Set to 0 to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix = 16
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
#
# Note: The double proxy-logging in the pipeline is not a mistake. The
# left-most proxy-logging is there to log requests that were handled in
# middleware and never made it through to the right-most middleware (and
# proxy server). Double logging is prevented for normal requests. See
# proxy-logging docs.

# Note: Put before both ratelimit and auth in the pipeline.
[filter:bulk]
use = egg:swift#bulk
# max_containers_per_extraction = 10000
# max_failed_extractions = 1000
# max_deletes_per_request = 10000
# max_failed_deletes = 1000
#
# In order to keep a connection active during a potentially long bulk request,
# Swift may return whitespace prepended to the actual response body. This
# whitespace will be yielded no more than every yield_frequency seconds.
# yield_frequency = 10
#
# Note: The following parameter is used during a bulk delete of objects and
# their container. This may frequently fail because it is very likely
# that not all replicated objects have been deleted by the time the middleware
# gets a successful response. You can configure the number of retries; the
# number of seconds to wait between each retry will be 1.5**retry
# delete_container_retry_count = 0
#
# To speed up the bulk delete process, multiple deletes may be executed in
# parallel. Avoid setting this too high, as it gives clients a force multiplier
# which may be used in DoS attacks. The suggested range is between 2 and 10.
# delete_concurrency = 2

# Note: Put after auth and staticweb in the pipeline.
[filter:slo]
use = egg:swift#slo
# max_manifest_segments = 1000
# max_manifest_size = 2097152
#
# Rate limiting applies only to segments smaller than this size (bytes).
# rate_limit_under_size = 1048576
#
# Start rate-limiting SLO segment serving after the Nth small segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. 0 means no rate-limiting.
# rate_limit_segments_per_sec = 1
#
# Time limit on GET requests (seconds)
# max_get_time = 86400
#
# When deleting with ?multipart-manifest=delete, multiple deletes may be
# executed in parallel. Avoid setting this too high, as it gives clients a
# force multiplier which may be used in DoS attacks. The suggested range is
# between 2 and 10.
# delete_concurrency = 2

# Note: Put after auth and staticweb in the pipeline.
# If you don't put it in the pipeline, it will be inserted for you.
[filter:dlo]
use = egg:swift#dlo
# Start rate-limiting DLO segment serving after the Nth segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. 0 means no rate-limiting.
# rate_limit_segments_per_sec = 1
#
# Time limit on GET requests (seconds)
# max_get_time = 86400

# Note: Put after auth in the pipeline.
[filter:container-quotas]
use = egg:swift#container_quotas

# Note: Put after auth in the pipeline.
[filter:account-quotas]
use = egg:swift#account_quotas

[filter:gatekeeper]
use = egg:swift#gatekeeper
# Set this to false if you want to allow clients to set arbitrary X-Timestamps
# on uploaded objects. This may be used to preserve timestamps when migrating
# from a previous storage system, but risks allowing users to upload
# difficult-to-delete data.
# shunt_inbound_x_timestamp = true
#
# You can override the default log routing for this filter here:
# set log_name = gatekeeper
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log

[filter:container_sync]
use = egg:swift#container_sync
# Set this to false if you want to disallow any full url values to be set for
# any new X-Container-Sync-To headers. This will keep any new full urls from
# coming in, but won't change any existing values already in the cluster.
# Updating those will have to be done manually, as knowing what the true realm
# endpoint should be cannot always be guessed.
# allow_full_urls = true
# Set this to specify this cluster's //realm/cluster as "current" in /info
# current = //REALM/CLUSTER

# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after catch_errors, gatekeeper and healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently the supported values are 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file.  Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/proxy.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# The profile data will be dumped to local disk based on the above naming rule
# at this interval.
# dump_interval = 5.0
#
# Be careful: this option makes the profiler dump data into timestamped files,
# which means lots of files will pile up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false

# Note: Put after slo, dlo in the pipeline.
# If you don't put it in the pipeline, it will be inserted automatically.
[filter:versioned_writes]
use = egg:swift#versioned_writes
# Enables using versioned writes middleware and exposing configuration
# settings via HTTP GET /info.
# WARNING: Setting this option bypasses the "allow_versions" option
# in the container configuration file, which will be eventually
# deprecated. See documentation for more details.
# allow_versioned_writes = false

# Note: Put after auth and before dlo and slo middlewares.
# If you don't put it in the pipeline, it will be inserted for you.
[filter:copy]
use = egg:swift#copy
# Set object_post_as_copy = false to turn on fast posts where only the metadata
# changes are stored anew and the original data file is kept in place. This
# makes for quicker posts.
# When object_post_as_copy is set to True, a POST request will be transformed
# into a COPY request where source and destination objects are the same.
# object_post_as_copy = true

# Note: To enable encryption, add the following 2 dependent pieces of crypto
# middleware to the proxy-server pipeline. They should be to the right of all
# other middleware apart from the final proxy-logging middleware, and in the
# order shown in this example:
# <other middleware> keymaster encryption proxy-logging proxy-server
[filter:keymaster]
use = egg:swift#keymaster

# Sets the root secret from which encryption keys are derived. This must be set
# before first use to a value that is a base64 encoding of at least 32 bytes.
# The security of all encrypted data critically depends on this key, therefore
# it should be set to a high-entropy value. For example, a suitable value may
# be obtained by base-64 encoding a 32 byte (or longer) value generated by a
# cryptographically secure random number generator. Changing the root secret is
# likely to result in data loss.
encryption_root_secret = changeme

# Sets the path from which the keymaster config options should be read. This
# allows multiple processes which need to be encryption-aware (for example,
# proxy-server and container-sync) to share the same config file, ensuring
# that the encryption keys used are the same. The format expected is similar
# to other config files, with a single [keymaster] section and a single
# encryption_root_secret option. If this option is set, the root secret
# MUST NOT be set in proxy-server.conf.
# keymaster_config_path =

[filter:encryption]
use = egg:swift#encryption

# By default all PUT or POST'ed object data and/or metadata will be encrypted.
# Encryption of new data and/or metadata may be disabled by setting
# disable_encryption to True. However, all encryption middleware should remain
# in the pipeline in order for existing encrypted data to be read.
# disable_encryption = False

Proxy server memcache configuration

You can find memcache configuration file examples for the proxy server at etc/memcache.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options for [memcache] in memcache.conf
Configuration option = Default value Description
connect_timeout = 0.3 Timeout in seconds (float) for connection.
io_timeout = 2.0 Timeout in seconds (float) for read and write.
memcache_max_connections = 2 Max number of connections to each memcached server per worker.
memcache_serialization_support = 2 Sets how memcache values are serialized and deserialized.
memcache_servers = 127.0.0.1:11211 Comma-separated list of memcached servers, in ip:port format.
pool_timeout = 1.0 Timeout in seconds (float) for pooled connection.
tries = 3 Number of servers to retry on failures getting a pooled connection.
Sample memcache configuration file
[memcache]
# You can use this single conf file instead of having memcache_servers set in
# several other conf files under [filter:cache] for example. You can specify
# multiple servers separated with commas, as in: 10.1.2.3:11211,10.1.2.4:11211
# (IPv6 addresses must follow rfc3986 section-3.2.2, i.e. [::1]:11211)
# memcache_servers = 127.0.0.1:11211
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
# To avoid an instant full cache flush, existing installations should
# upgrade with 0, then set to 1 and reload, then after some time (24 hours)
# set to 2 and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support = 2
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections = 2
#
# Timeout for connection
# connect_timeout = 0.3
# Timeout for pooled connection
# pool_timeout = 1.0
# number of servers to retry on failures getting a pooled connection
# tries = 3
# Timeout for read and writes
# io_timeout = 2.0

Rsyncd configuration

Find an example rsyncd configuration at etc/rsyncd.conf-sample in the source code repository.

The available configuration options are:

Description of configuration options in rsyncd.conf
Configuration option = Default value Description
gid = swift Group ID for rsyncd.
log file = /var/log/rsyncd.log Log file for rsyncd.
pid file = /var/run/rsyncd.pid PID file for rsyncd.
uid = swift User ID for rsyncd.
max connections = Maximum number of connections for rsyncd. This option should be set for each account, container, or object.
path = /srv/node Working directory for rsyncd to use. This option should be set for each account, container, or object.
read only = false Set read only. This option should be set for each account, container, or object.
lock file = Lock file for rsyncd. This option should be set for each account, container, or object.

If rsync_module includes the device, you can tune rsyncd to permit 4 connections per device instead of simply allowing 8 connections for all devices:

rsync_module = {replication_ip}::object_{device}

If devices in your object ring are named sda, sdb, and sdc:

[object_sda]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sda.lock

[object_sdb]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sdb.lock

[object_sdc]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sdc.lock
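
The per-device sections above follow from expanding the rsync_module template for each device in the ring. A minimal sketch of that expansion (the replication IP below is a hypothetical placeholder):

```python
# Expand the rsync_module template for each device in the object ring.
# "10.0.0.1" is a placeholder replication IP, not a real deployment value.
template = "{replication_ip}::object_{device}"

modules = [template.format(replication_ip="10.0.0.1", device=d)
           for d in ("sda", "sdb", "sdc")]
# Each entry names one rsync module, e.g. "10.0.0.1::object_sda",
# matching the [object_sda] style sections shown above.
```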

To emulate the deprecated vm_test_mode = yes option, set:

rsync_module = {replication_ip}::object{replication_port}

Therefore, on your SAIO, you have to set the following rsyncd configuration:

[object6010]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6010.lock

[object6020]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6020.lock

[object6030]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6030.lock

[object6040]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6040.lock

Configure Object Storage features

Object Storage zones

In OpenStack Object Storage, data is placed across different tiers of failure domains. First, data is spread across regions, then zones, then servers, and finally across drives. Data is placed to get the highest failure domain isolation. If you deploy multiple regions, the Object Storage service places the data across the regions. Within a region, each replica of the data should be stored in unique zones, if possible. If there is only one zone, data should be placed on different servers. And if there is only one server, data should be placed on different drives.

Regions are widely separated installations with a high-latency or otherwise constrained network link between them. Zones are arbitrarily assigned, and it is up to the administrator of the Object Storage cluster to choose an isolation level and attempt to maintain the isolation level through appropriate zone assignment. For example, a zone may be defined as a rack with a single power source. Or a zone may be a DC room with a common utility provider. Servers are identified by a unique IP/port. Drives are locally attached storage volumes identified by mount point.

In small clusters (five nodes or fewer), everything is normally in a single zone. Larger Object Storage deployments may assign zone designations differently; for example, an entire cabinet or rack of servers may be designated as a single zone to maintain replica availability if the cabinet becomes unavailable (for example, due to failure of the top of rack switches or a dedicated circuit). In very large deployments, such as service provider level deployments, each zone might have an entirely autonomous switching and power infrastructure, so that even the loss of an electrical circuit or switching aggregator would result in the loss of a single replica at most.

Rackspace zone recommendations

For ease of maintenance on OpenStack Object Storage, Rackspace recommends that you set up at least five nodes. Each node is assigned its own zone (for a total of five zones), which gives you host level redundancy. This enables you to take down a single zone for maintenance and still guarantee object availability in the event that another zone fails during your maintenance.

You could keep each server in its own cabinet to achieve cabinet level isolation, but you may wish to wait until your Object Storage service is better established before developing cabinet-level isolation. OpenStack Object Storage is flexible; if you later decide to change the isolation level, you can take down one zone at a time and move its servers to appropriate new homes.

RAID controller configuration

OpenStack Object Storage does not require RAID. In fact, most RAID configurations cause significant performance degradation. The main reason for using a RAID controller is the battery-backed cache. It is very important for data integrity reasons that when the operating system confirms a write has been committed that the write has actually been committed to a persistent location. Most disks lie about hardware commits by default, instead writing to a faster write cache for performance reasons. In most cases, that write cache exists only in non-persistent memory. In the case of a loss of power, this data may never actually get committed to disk, resulting in discrepancies that the underlying file system must handle.

OpenStack Object Storage works best on the XFS file system, and this document assumes that the hardware being used is configured appropriately to be mounted with the nobarrier option. For more information, see the XFS FAQ.

To get the most out of your hardware, it is essential that every disk used in OpenStack Object Storage is configured as a standalone, individual RAID 0 disk; in the case of 6 disks, you would have six RAID 0s or one JBOD. Some RAID controllers do not support JBOD or do not support battery backed cache with JBOD. To ensure the integrity of your data, you must ensure that the individual drive caches are disabled and the battery backed cache in your RAID card is configured and used. Failure to configure the controller properly in this case puts data at risk in the case of sudden loss of power.

You can also use hybrid drives or similar options for battery-backed cache configurations without a RAID controller.

Throttle resources through rate limits

Rate limiting in OpenStack Object Storage is implemented as a pluggable middleware that you configure on the proxy server. Rate limiting is performed on requests that result in database writes to the account and container SQLite databases. It uses memcached and is dependent on the proxy servers having highly synchronized time. The rate limits are limited by the accuracy of the proxy server clocks.

Configure rate limiting

All configuration is optional. If no account or container limits are provided, no rate limiting occurs. Available configuration options include:

Description of configuration options for [filter-ratelimit] in proxy-server.conf
Configuration option = Default value Description
account_blacklist = c,d Comma separated lists of account names that will not be allowed. Returns a 497 response.
account_ratelimit = 0 If set, will limit PUT and DELETE requests to /account_name/container_name. Number is in requests per second.
account_whitelist = a,b Comma separated lists of account names that will not be rate limited.
clock_accuracy = 1000 Represents how accurate the proxy servers’ system clocks are with each other. 1000 means that all the proxies’ clock are accurate to each other within 1 millisecond. No ratelimit should be higher than the clock accuracy.
container_listing_ratelimit_0 = 100 with container_listing_ratelimit_x = r, for containers of size x, limit container GET (listing) requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_listing_ratelimit_10 = 50 with container_listing_ratelimit_x = r, for containers of size x, limit container GET (listing) requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_listing_ratelimit_50 = 20 with container_listing_ratelimit_x = r, for containers of size x, limit container GET (listing) requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_ratelimit_0 = 100 with container_ratelimit_x = r, for containers of size x, limit write requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_ratelimit_10 = 50 with container_ratelimit_x = r, for containers of size x, limit write requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_ratelimit_50 = 20 with container_ratelimit_x = r, for containers of size x, limit write requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
log_sleep_time_seconds = 0 To allow visibility into rate limiting set this value > 0 and all sleeps greater than the number will be logged.
max_sleep_time_seconds = 60 App will immediately return a 498 response if the necessary sleep time ever exceeds the given max_sleep_time_seconds.
rate_buffer_seconds = 5 Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy.
set log_address = /dev/log Location where syslog sends the logs to.
set log_facility = LOG_LOCAL0 Syslog log facility.
set log_headers = false If True, log headers in each request.
set log_level = INFO Log level.
set log_name = ratelimit Label to use when logging.
use = egg:swift#ratelimit Entry point of paste.deploy in the server.

The container rate limits are linearly interpolated from the values given. A sample container rate limiting configuration could be:

container_ratelimit_100 = 100
container_ratelimit_200 = 50
container_ratelimit_500 = 20

This would result in:

Values for Rate Limiting with Sample Configuration Settings
Container Size Rate Limit
0-99 No limiting
100 100
150 75
500 20
1000 20
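
The interpolation behind the table above can be sketched as a small function. This is an illustration of the rule, not Swift's implementation; the breakpoints are the sample values from this section, and sizes beyond the largest configured point are clamped to its rate:

```python
def container_ratelimit(size, points={100: 100, 200: 50, 500: 20}):
    """Linearly interpolate a write rate limit from configured breakpoints.

    Containers smaller than the smallest configured size are not limited
    (returns None); sizes at or beyond the largest point get its rate.
    """
    sizes = sorted(points)
    if size < sizes[0]:
        return None                      # no limiting
    if size >= sizes[-1]:
        return points[sizes[-1]]         # clamp to the last configured rate
    for lo, hi in zip(sizes, sizes[1:]):
        if lo <= size < hi:
            frac = (size - lo) / (hi - lo)
            return points[lo] + frac * (points[hi] - points[lo])
```

With the sample settings, a container of size 150 falls halfway between the 100 and 200 breakpoints and gets a rate of 75, matching the table.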
Health check

Provides an easy way to monitor whether the Object Storage proxy server is alive. If you access the proxy with the path /healthcheck, it responds with OK in the response body, which monitoring tools can use.

Description of configuration options for [filter-healthcheck] in proxy-server.conf
Configuration option = Default value Description
disable_path = An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE”
use = egg:swift#healthcheck Entry point of paste.deploy in the server
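
The filter's decision logic can be sketched as follows. This is a simplified stand-in for the real WSGI middleware, not its actual code:

```python
import os

def healthcheck(path, disable_path=None):
    """Simplified model of the healthcheck filter's behavior."""
    if path != '/healthcheck':
        return None  # not ours; pass the request down the pipeline
    if disable_path and os.path.exists(disable_path):
        return (503, 'DISABLED BY FILE')
    return (200, 'OK')
```

Touching the file named by disable_path lets an operator fail the check deliberately, for example to drain a proxy out of a load balancer pool.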
Domain remap

Middleware that translates container and account parts of a domain to path parameters that the proxy server understands.

Description of configuration options for [filter-domain_remap] in proxy-server.conf
Configuration option = Default value Description
default_reseller_prefix = If the reseller prefixes do not match, the default reseller prefix is used. When no default reseller prefix is configured, any request with an account prefix not in that list will be ignored by this middleware.
path_root = v1 Root path.
reseller_prefixes = AUTH Browsers can convert a host header to lowercase, so check that reseller prefix on the account is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix.
set log_address = /dev/log Location where syslog sends the logs to.
set log_facility = LOG_LOCAL0 Syslog log facility.
set log_headers = false If True, log headers in each request.
set log_level = INFO Log level.
set log_name = domain_remap Label to use when logging.
storage_domain = example.com Domain that matches your cloud. Multiple domains can be specified using a comma-separated list.
use = egg:swift#domain_remap Entry point of paste.deploy in the server.
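
The translation can be illustrated with a simplified sketch; this mirrors the idea rather than the middleware's actual code, and the defaults match the table above:

```python
def domain_remap(host, path, storage_domain='example.com',
                 path_root='v1', reseller_prefix='AUTH'):
    """Map container.account.<storage_domain> hosts onto proxy paths."""
    suffix = '.' + storage_domain
    if not host.endswith(suffix):
        return path  # not our domain; pass through unchanged
    parts = host[:-len(suffix)].split('.')
    if len(parts) == 2:            # container.account.example.com
        container, account = parts
        prefix = '/%s/%s_%s/%s' % (path_root, reseller_prefix, account, container)
    elif len(parts) == 1:          # account.example.com
        prefix = '/%s/%s_%s' % (path_root, reseller_prefix, parts[0])
    else:
        return path
    return prefix + path
```

For example, a request for /object on host container.account.example.com becomes /v1/AUTH_account/container/object.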
CNAME lookup

Middleware that translates an unknown domain in the host header to something that ends with the configured storage_domain by looking up the given domain’s CNAME record in DNS.

Description of configuration options for [filter-cname_lookup] in proxy-server.conf
Configuration option = Default value Description
lookup_depth = 1 Because CNAMES can be recursive, specifies the number of levels through which to search.
set log_address = /dev/log Location where syslog sends the logs to
set log_facility = LOG_LOCAL0 Syslog log facility
set log_headers = false If True, log headers in each request
set log_level = INFO Log level
set log_name = cname_lookup Label to use when logging
storage_domain = example.com Domain that matches your cloud. Multiple domains can be specified using a comma-separated list.
use = egg:swift#cname_lookup Entry point of paste.deploy in the server
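
The lookup loop can be sketched as follows. The resolver is injected as a callable so the sketch stays independent of python-dnspython (which the real middleware requires):

```python
def follow_cname(host, lookup_cname, storage_domain='example.com',
                 lookup_depth=1):
    """Follow up to lookup_depth CNAME records looking for storage_domain.

    lookup_cname is a callable that returns the CNAME target for a host,
    or None if no CNAME record exists.
    """
    for _ in range(lookup_depth):
        target = lookup_cname(host)
        if target is None:
            return None
        host = target
        if host == storage_domain or host.endswith('.' + storage_domain):
            return host               # found a host the cluster can serve
    return None
```

Because CNAMEs can chain, raising lookup_depth lets the middleware follow more than one level of indirection before giving up.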
Temporary URL

Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in OpenStack Object Storage, but the Object Storage account has no public access. The website can generate a URL that provides GET access for a limited time to the resource. When the web browser user clicks on the link, the browser downloads the object directly from Object Storage, eliminating the need for the website to act as a proxy for the request. If the user shares the link with their friends, or accidentally posts it on a forum, the direct access is limited to the expiration time set when the website created the link.

A temporary URL is the typical URL associated with an object, with two additional query parameters:

temp_url_sig
A cryptographic signature.
temp_url_expires
An expiration date, in Unix time.

An example of a temporary URL:

https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485

To create temporary URLs, first set the X-Account-Meta-Temp-URL-Key header on your Object Storage account to an arbitrary string. This string serves as a secret key. For example, to set a key of b3968d0207b54ece87cccc06515a89d4 by using the swift command-line tool:

$ swift post -m "Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4"

Next, generate an HMAC-SHA1 (RFC 2104) signature to specify:

  • Which HTTP method to allow (typically GET or PUT).
  • The expiry date as a Unix timestamp.
  • The full path to the object.
  • The secret key set as the X-Account-Meta-Temp-URL-Key.

Here is code generating the signature for a GET for 24 hours on /v1/AUTH_account/container/object:

import hmac
from hashlib import sha1
from time import time
method = 'GET'
duration_in_seconds = 60*60*24
expires = int(time() + duration_in_seconds)
path = '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object'
key = 'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
# hmac.new() requires bytes in Python 3, so encode the key and message
sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
s = 'https://{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}'
url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires)

Any alteration of the resource path or query arguments results in a 401 Unauthorized error. Similarly, a PUT where GET was the allowed method returns a 401 error. HEAD is allowed if GET or PUT is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Object Storage.

Note

Changing the X-Account-Meta-Temp-URL-Key invalidates any previously generated temporary URLs within 60 seconds, which is the memcache time for the key. Object Storage supports up to two keys, specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2. Signatures are checked against both keys, if present. This process enables key rotation without invalidating all existing temporary URLs.
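
The dual-key check can be sketched as follows. This illustrates the idea rather than Swift's implementation; the constant-time comparison avoids leaking the signature through timing:

```python
import hmac
from hashlib import sha1

def sig_is_valid(provided_sig, method, expires, path, keys):
    """Accept a temp URL signature if it matches any configured key."""
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    for key in keys:
        expected = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
        if hmac.compare_digest(expected, provided_sig):
            return True
    return False
```

During rotation, URLs signed with the old key keep working as long as it remains configured as the second key.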

Object Storage includes the swift-temp-url script that generates the query parameters automatically:

$ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey
/v1/AUTH_account/container/object?
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&
temp_url_expires=1374497657

Because this command only returns the path, you must prefix the Object Storage host name (for example, https://swift-cluster.example.com).

With GET Temporary URLs, a Content-Disposition header is set on the response so that browsers interpret this as a file attachment to be saved. The file name chosen is based on the object name, but you can override this with a filename query parameter. The following example specifies a filename of My Test File.pdf:

https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485&
filename=My+Test+File.pdf

If you do not want the object to be downloaded, you can cause Content-Disposition: inline to be set on the response by adding the inline parameter to the query string, as follows:

https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485&inline

To enable Temporary URL functionality, edit /etc/swift/proxy-server.conf to add tempurl to the pipeline variable defined in the [pipeline:main] section. The tempurl entry should appear immediately before the authentication filters in the pipeline, such as authtoken, tempauth or keystoneauth. For example:

[pipeline:main]
pipeline = healthcheck cache tempurl authtoken keystoneauth proxy-server
Description of configuration options for [filter-tempurl] in proxy-server.conf
Configuration option = Default value Description
incoming_allow_headers = Headers allowed as exceptions to incoming_remove_headers. Simply a whitespace delimited list of header names and names can optionally end with ‘*’ to indicate a prefix match.
incoming_remove_headers = x-timestamp Headers to remove from incoming requests. Simply a whitespace delimited list of header names and names can optionally end with ‘*’ to indicate a prefix match.
methods = GET HEAD PUT POST DELETE HTTP methods allowed with Temporary URLs.
outgoing_allow_headers = x-object-meta-public-* Headers allowed as exceptions to outgoing_remove_headers. Simply a whitespace delimited list of header names and names can optionally end with ‘*’ to indicate a prefix match.
outgoing_remove_headers = x-object-meta-* Headers to remove from outgoing responses. Simply a whitespace delimited list of header names and names can optionally end with ‘*’ to indicate a prefix match.
use = egg:swift#tempurl Entry point of paste.deploy in the server.
Name check filter

Name Check is a filter that disallows any paths that contain defined forbidden characters or that exceed a defined length.

Description of configuration options for [filter-name_check] in proxy-server.conf
Configuration option = Default value Description
forbidden_chars = '"`<> Characters that are not allowed in a name
forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$ Substrings to forbid, using regular expression syntax
maximum_length = 255 Maximum length of a name
use = egg:swift#name_check Entry point of paste.deploy in the server
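The filter's behavior can be approximated in a few lines (a simplified sketch using the default values above, not the middleware's actual code):

```python
import re

# Defaults from the [filter-name_check] table above.
FORBIDDEN_CHARS = "'\"`<>"
FORBIDDEN_REGEXP = r'/\./|/\.\./|/\.$|/\.\.$'
MAXIMUM_LENGTH = 255

def name_allowed(path):
    # Reject over-long names, forbidden characters, and forbidden substrings.
    if len(path) > MAXIMUM_LENGTH:
        return False
    if any(ch in path for ch in FORBIDDEN_CHARS):
        return False
    if re.search(FORBIDDEN_REGEXP, path):
        return False
    return True
```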
Constraints

To change the OpenStack Object Storage internal limits, update the values in the swift-constraints section in the swift.conf file. Use caution when you update these values because they affect the performance in the entire cluster.

Description of configuration options for [swift-constraints] in swift.conf
Configuration option = Default value Description
account_listing_limit = 10000 The default (and maximum) number of items returned for an account listing request.
container_listing_limit = 10000 The default (and maximum) number of items returned for a container listing request.
extra_header_count = 0 By default the maximum number of allowed headers depends on the number of max allowed metadata settings plus a default value of 32 for regular http headers. If for some reason this is not enough (custom middleware for example) it can be increased with the extra_header_count constraint.
max_account_name_length = 256 The maximum number of bytes in the utf8 encoding of an account name.
max_container_name_length = 256 The maximum number of bytes in the utf8 encoding of a container name.
max_file_size = 5368709122 The largest normal object that can be saved in the cluster. This is also the limit on the size of each segment of a large object when using the large object manifest support. This value is set in bytes. Setting it to lower than 1MiB will cause some tests to fail. It is STRONGLY recommended to leave this value at the default (5 * 2**30 + 2).
max_header_size = 8192 The max number of bytes in the utf8 encoding of each header. Using 8192 as default because eventlet use 8192 as maximum size of header line. You may need to increase this value when using identity v3 API tokens including more than 7 catalog entries. See also include_service_catalog in proxy-server.conf-sample (documented in overview_auth.rst).
max_meta_count = 90 The max number of metadata keys that can be stored on a single account, container, or object.
max_meta_name_length = 128 The max number of bytes in the utf8 encoding of the name portion of a metadata header.
max_meta_overall_size = 4096 The max number of bytes in the utf8 encoding of the metadata (keys + values).
max_meta_value_length = 256 The max number of bytes in the utf8 encoding of a metadata value.
max_object_name_length = 1024 The max number of bytes in the utf8 encoding of an object name.
valid_api_versions = v0,v1,v2 No help text available for this option.
Cluster health

Use the swift-dispersion-report tool to measure overall cluster health. This tool checks if a set of deliberately distributed containers and objects are currently in their proper places within the cluster. For instance, a common deployment has three replicas of each object. The health of that object can be measured by checking if each replica is in its proper place. If only 2 of the 3 are in place, the object’s health can be said to be at 66.66%, where 100% would be perfect. A single object’s health, especially an older object, usually reflects the health of the entire partition the object is in. If you create enough objects on a distinct percentage of the partitions in the cluster, you get a good estimate of the overall cluster health.

In practice, about 1% partition coverage seems to balance well between accuracy and the amount of time it takes to gather results. To provide this health value, you must create an account solely for this usage. Next, you must place the containers and objects throughout the system so that they are on distinct partitions. Use the swift-dispersion-populate tool to create random container and object names until they fall on distinct partitions.

Last, and repeatedly for the life of the cluster, you must run the swift-dispersion-report tool to check the health of each container and object.

These tools must have direct access to the entire cluster and ring files. Installing them on a proxy server suffices.

The swift-dispersion-populate and swift-dispersion-report commands both use the same /etc/swift/dispersion.conf configuration file. Example dispersion.conf file:

[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing

You can use configuration options to specify the dispersion coverage, which defaults to 1%, retries, concurrency, and so on. However, the defaults are usually fine. After the configuration is in place, run the swift-dispersion-populate tool to populate the containers and objects throughout the cluster. Now that those containers and objects are in place, you can run the swift-dispersion-report tool to get a dispersion report or view the overall health of the cluster. Here is an example of a cluster in perfect health:

$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 19s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space

Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space

Now, deliberately double the weight of a device in the object ring (with replication turned off) and re-run the dispersion report to show what impact that has:

$ swift-ring-builder object.builder set_weight d0 200
$ swift-ring-builder object.builder rebalance
...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 8s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space

Queried 2619 objects for dispersion reporting, 7s, 0 retries
There were 1763 partitions missing one copy.
77.56% of object copies found (6094 of 7857)
Sample represents 1.00% of the object partition space

You can see that the health of the objects in the cluster has gone down significantly. Of course, this test environment has just four devices; in a production environment with many devices, the impact of one device change is much less. Next, run the replicators to get everything put back into place and then rerun the dispersion report:

# start object replicators and monitor logs until they're caught up ...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 17s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space

Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space

The dispersion report can also be output in JSON format, which allows it to be more easily consumed by third-party utilities:

$ swift-dispersion-report -j
{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container":
{"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected":
12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}
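The pct_found values in this report are simply copies_found divided by copies_expected. A sketch of computing health from a trimmed, illustrative JSON report:

```python
import json

# A trimmed, illustrative swift-dispersion-report -j payload.
report_json = '''{"object": {"copies_found": 6094, "copies_expected": 7857},
                  "container": {"copies_found": 7863, "copies_expected": 7863}}'''

# Health per ring as a percentage of copies found in their proper places.
health = {ring: 100.0 * s['copies_found'] / s['copies_expected']
          for ring, s in json.loads(report_json).items()}
```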
Description of configuration options for [dispersion] in dispersion.conf
Configuration option = Default value Description
auth_key = testing No help text available for this option.
auth_url = http://localhost:8080/auth/v1.0 Endpoint for auth server, such as keystone
auth_user = test:tester Default user for dispersion in this context
auth_version = 1.0 Indicates which version of auth
concurrency = 25 Number of replication workers to spawn
container_populate = yes No help text available for this option.
container_report = yes No help text available for this option.
dispersion_coverage = 1.0 No help text available for this option.
dump_json = no No help text available for this option.
endpoint_type = publicURL Indicates whether endpoint for auth is public or internal
keystone_api_insecure = no Allow accessing insecure keystone server. The keystone’s certificate will not be verified.
object_populate = yes No help text available for this option.
object_report = yes No help text available for this option.
project_domain_name = project_domain No help text available for this option.
project_name = project No help text available for this option.
retries = 5 No help text available for this option.
swift_dir = /etc/swift Swift configuration directory
user_domain_name = user_domain No help text available for this option.
Static Large Object (SLO) support

This feature is very similar to Dynamic Large Object (DLO) support in that it enables the user to upload many objects concurrently and afterwards download them as a single object. It is different in that it does not rely on eventually consistent container listings to do so. Instead, a user-defined manifest of the object segments is used.

For more information regarding SLO usage and support, please see: Static Large Objects.
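As an illustration of such a manifest (the paths, etags, and sizes here are made up), an SLO manifest is a JSON list of segment descriptors, uploaded with the ?multipart-manifest=put query parameter:

```python
import json

# Illustrative SLO manifest: an ordered JSON list of segment descriptors.
# The manifest itself is uploaded with PUT ...?multipart-manifest=put.
manifest = [
    {"path": "/container/segments/seg01",
     "etag": "0123456789abcdef0123456789abcdef",
     "size_bytes": 1048576},
    {"path": "/container/segments/seg02",
     "etag": "fedcba9876543210fedcba9876543210",
     "size_bytes": 1048576},
]
body = json.dumps(manifest)
```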

Description of configuration options for [filter-slo] in proxy-server.conf
Configuration option = Default value Description
max_get_time = 86400 Time limit on GET requests (seconds)
max_manifest_segments = 1000 Maximum number of segments.
max_manifest_size = 2097152 Maximum size of the manifest, in bytes.
min_segment_size = 1048576 Minimum size of segments.
rate_limit_after_segment = 10 Rate limit the download of large object segments after this segment is downloaded.
rate_limit_segments_per_sec = 0 Rate limit large object downloads at this rate (segments per second).
use = egg:swift#slo Entry point of paste.deploy in the server.
Container quotas

The container_quotas middleware implements simple quotas that can be imposed on Object Storage containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope of containers that are delegated to non-admin users, exposed to form POST uploads, or just as a self-imposed sanity check.

Any object PUT operations that exceed these quotas return a Forbidden (403) status code.

Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second TTL by default), and the inability to reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers are refused).

Set quotas by adding meta values to the container. These values are validated when you set them:

X-Container-Meta-Quota-Bytes
Maximum size of the container, in bytes.
X-Container-Meta-Quota-Count
Maximum object count of the container.
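The effect of these limits can be sketched as follows (a simplified model, not the middleware's actual code; all names are illustrative):

```python
def put_allowed(new_object_bytes, bytes_used, object_count,
                quota_bytes=None, quota_count=None):
    # A PUT that would push the container past either quota is
    # rejected with a 403 Forbidden status by the middleware.
    if quota_bytes is not None and bytes_used + new_object_bytes > quota_bytes:
        return False
    if quota_count is not None and object_count + 1 > quota_count:
        return False
    return True
```

Because of eventual consistency and the cached container_info, the byte and object counts the middleware sees may lag reality, which is why enforcement is approximate.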
Account quotas

The account_quotas middleware blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded, while DELETE requests are still allowed.

The x-account-meta-quota-bytes metadata entry must be set to store and enable the quota. Write requests to this metadata entry are only permitted for resellers. There is no account quota limitation on a reseller account even if x-account-meta-quota-bytes is set.

Any object PUT operations that exceed the quota return a 413 response (request entity too large) with a descriptive body.

The following command uses an admin account that owns the Reseller role to set a quota on the test account:

$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000

Here is the stat listing of an account where quota has been set:

$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
Account: AUTH_test
Containers: 0
Objects: 0
Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a

This command removes the account quota:

$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
  --os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:
Bulk delete

Use bulk-delete to delete multiple files from an account with a single request. The middleware responds to DELETE requests with the header ‘X-Bulk-Delete: true_value’. The body of the DELETE request is a newline-separated list of files to delete. The files listed must be URL encoded and in the form:

/container_name/obj_name

If all files are successfully deleted (or did not exist), the operation returns HTTPOk (200). If any files fail to delete, the operation returns HTTPBadGateway (502). In both cases, the response body is a JSON dictionary that shows the number of files that were successfully deleted or not found. The files that failed are listed.
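A request body in that form can be built with a few lines of Python (the container and object names are illustrative):

```python
from urllib.parse import quote

# Build the newline-separated, URL-encoded body for a bulk delete.
objects = [('container1', 'photo 1.jpg'), ('container1', 'photo 2.jpg')]
body = '\n'.join('/%s/%s' % (quote(c), quote(o)) for c, o in objects)
```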

Description of configuration options for [filter-bulk] in proxy-server.conf
Configuration option = Default value Description
delete_container_retry_count = 0 This parameter is used during a bulk delete of objects and their container. Such a delete frequently fails because it is very likely that not all replicated objects have been deleted by the time the middleware gets a successful response. This option configures the number of retries; the number of seconds to wait between each retry is 1.5**retry.
max_containers_per_extraction = 10000 The maximum number of containers per extraction.
max_deletes_per_request = 10000 The maximum number of deletions per request.
max_failed_deletes = 1000 The maximum number of failed deletions before the request aborts.
max_failed_extractions = 1000 The maximum number of failed extractions before the request aborts.
use = egg:swift#bulk Entry point of paste.deploy in the server.
yield_frequency = 10 In order to keep a connection active during a potentially long bulk request, Swift may return whitespace prepended to the actual response body. This whitespace will be yielded no more than every yield_frequency seconds.
Drive audit

The swift-drive-audit configuration items reference a script that can be run by using cron to watch for bad drives. If errors are detected, it unmounts the bad drive so that OpenStack Object Storage can work around it. It takes the following options:

Description of configuration options for [drive-audit] in drive-audit.conf
Configuration option = Default value Description
device_dir = /srv/node Directory devices are mounted under
error_limit = 1 Number of errors to find before a device is unmounted
log_address = /dev/log Location where syslog sends the logs to
log_facility = LOG_LOCAL0 Syslog log facility
log_file_pattern = /var/log/kern.*[!.][!g][!z] Location of the log file, with a globbing pattern, to check for device blocks with errors.
log_level = INFO Logging level
log_max_line_length = 0 Caps the length of log lines to the value given; no limit if set to 0, the default.
log_name = drive-audit Label used when logging
log_to_console = False No help text available for this option.
minutes = 60 Number of minutes to look back in
recon_cache_path = /var/cache/swift Directory where stats for a few items will be stored
regex_pattern_1 = \berror\b.*\b(dm-[0-9]{1,2}\d?)\b No help text available for this option.
unmount_failed_device = True No help text available for this option.
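The regex_pattern_1 default above can be exercised against a kernel log line (the log line here is made up for illustration):

```python
import re

# Default error pattern from the [drive-audit] table above.
pattern = re.compile(r'\berror\b.*\b(dm-[0-9]{1,2}\d?)\b')

# Illustrative kernel log line of the kind swift-drive-audit scans for.
line = 'kernel: [12345.6] Buffer I/O error on device dm-2, logical block 0'
m = pattern.search(line)
```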
Form post

Middleware that enables you to upload objects to a cluster by using an HTML form POST.

The format of the form is:

<form action="<swift-url>" method="POST"
      enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="hidden" name="x_delete_at" value="<unix-timestamp>"/>
  <input type="hidden" name="x_delete_after" value="<seconds>"/>
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>

In the form:

action="<swift-url>"

The URL to the Object Storage destination, such as https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.

The name of each uploaded file is appended to the specified swift-url. So, you can upload directly to the root of a container with a URL like https://swift-cluster.example.com/v1/AUTH_account/container/.

Optionally, you can include an object prefix to separate different users’ uploads, such as https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.

method="POST"
The form method must be POST.
enctype="multipart/form-data"
The enctype must be set to multipart/form-data.
name="redirect"
The URL to which to redirect the browser after the upload completes. The URL has status and message query parameters added to it that indicate the HTTP status code for the upload and, optionally, additional error information. A 2xx status code indicates success. If an error occurs, the URL might include error information, such as "max_file_size exceeded".
name="max_file_size"
Required. The maximum number of bytes that can be uploaded in a single file upload.
name="max_file_count"
Required. The maximum number of files that can be uploaded with the form.
name="expires"

The expiration date and time for the form in UNIX Epoch time stamp format. After this date and time, the form is no longer valid.

For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT.

name="signature"

The HMAC-SHA1 signature of the form. This sample Python code shows how to compute the signature:

import hmac
from hashlib import sha1
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://myserver.com/some-page'
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
    max_file_size, max_file_count, expires)
# hmac.new() requires bytes for both the key and the message.
signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()

The key is the value of the X-Account-Meta-Temp-URL-Key header on the account.

Use the full path from the /v1/ value and onward.

During testing, you can use the swift-form-signature command-line tool to compute the expires and signature values.

name="x_delete_at"

The date and time in UNIX Epoch time stamp format when the object will be removed.

For example, 1440619048 is equivalent to Wed, 26 Aug 2015 19:57:28 GMT.

This attribute enables you to specify the X-Delete-At header value in the form POST.

name="x_delete_after"
The number of seconds after which the object is removed. Internally, the Object Storage system stores this value in the X-Delete-At metadata item. This attribute enables you to specify the X-Delete-After header value in the form POST.
type="file" name="filexx"
Optional. One or more files to upload. Must appear after the other attributes to be processed correctly. Attributes that come after the file attribute are ignored and are not sent with the subrequest: on the server side, attributes following the file cannot be parsed without reading the whole file into memory, and the server does not have enough memory to service such requests.
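The UNIX Epoch time stamps used by the expires, x_delete_at, and x_delete_after fields can be converted to the GMT form quoted above with a short Python snippet:

```python
from datetime import datetime, timezone

# Convert the UNIX Epoch time stamp used by the form fields to the
# human-readable GMT form quoted in the examples above.
stamp = datetime.fromtimestamp(1440619048, tz=timezone.utc)
formatted = stamp.strftime('%a, %d %b %Y %H:%M:%S GMT')
```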
Description of configuration options for [filter-formpost] in proxy-server.conf
Configuration option = Default value Description
use = egg:swift#formpost Entry point of paste.deploy in the server
Static web sites

When configured, this middleware serves container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests.

Description of configuration options for [filter-staticweb] in proxy-server.conf
Configuration option = Default value Description
use = egg:swift#staticweb Entry point of paste.deploy in the server

Configure Object Storage with the S3 API

The Swift3 middleware emulates the S3 REST API on top of Object Storage.

The following operations are currently supported:

  • GET Service
  • DELETE Bucket
  • GET Bucket (List Objects)
  • PUT Bucket
  • DELETE Object
  • GET Object
  • HEAD Object
  • PUT Object
  • PUT Object (Copy)

To use this middleware, first download the latest version from its repository to your proxy servers.

$ git clone https://git.openstack.org/openstack/swift3

Then, install it using standard python mechanisms, such as:

# python setup.py install

Alternatively, if you have configured the Ubuntu Cloud Archive, you may use:

# apt-get install swift-plugin-s3

To add this middleware to your configuration, add the swift3 middleware in front of the swauth middleware, and before any other middleware that looks at Object Storage requests (like rate limiting).

Ensure that your proxy-server.conf file contains swift3 in the pipeline and the [filter:swift3] section, as shown below:

[pipeline:main]
pipeline = catch_errors healthcheck cache swift3 swauth proxy-server

[filter:swift3]
use = egg:swift3#swift3

Next, configure the tool that you use to connect to the S3 API. For S3curl, for example, add your host IP to the @endpoints array (line 33 in s3curl.pl):

my @endpoints = ( '1.2.3.4');

Now you can send commands to the endpoint, such as:

$ ./s3curl.pl - 'a7811544507ebaf6c9a7a8804f47ea1c' \
  -key 'a7d8e981-e296-d2ba-cb3b-db7dd23159bd' \
  -get - -s -v http://1.2.3.4:8080

To set up your client, ensure that you are using EC2 credentials, which can be downloaded from the API Endpoints tab of the dashboard. The host should also point to the Object Storage node’s hostname. The client must also use the old-style (path-based) calling format, not the hostname-based container format. Here is an example client setup using the Python boto library against a locally installed all-in-one Object Storage installation.

import boto
import boto.s3.connection

connection = boto.s3.connection.S3Connection(
    aws_access_key_id='a7811544507ebaf6c9a7a8804f47ea1c',
    aws_secret_access_key='a7d8e981-e296-d2ba-cb3b-db7dd23159bd',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())

Endpoint listing middleware

The endpoint listing middleware enables third-party services that use data locality information to integrate with OpenStack Object Storage. This middleware reduces network overhead and is designed for third-party services that run inside the firewall. Deploy this middleware on a proxy server because usage of this middleware is not authenticated.

Format requests for endpoints, as follows:

/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}

Use the list_endpoints_path configuration option in the proxy_server.conf file to customize the /endpoints/ path.

Responses are JSON-encoded lists of endpoints, as follows:

http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}

An example response is:

http://10.1.1.1:6000/sda1/2/a/c2/o1
http://10.1.1.1:6000/sda1/2/a/c2
http://10.1.1.1:6000/sda1/2/a
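A sketch of consuming such a response (the endpoint data here is made up):

```python
import json

# Illustrative response body: a JSON-encoded list of endpoint URLs in
# the form http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}.
response_body = ('["http://10.1.1.1:6000/sda1/2/a/c2/o1",'
                 ' "http://10.1.1.2:6000/sdb1/2/a/c2/o1"]')
endpoints = json.loads(response_body)
```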

Object storage log files

Object Storage sends logs to the system logging facility only. By default, all Object Storage services log to /var/log/swift/swift.log, using the local0, local1, and local2 syslog facilities.

New, updated, and deprecated options in Newton for OpenStack Object Storage

There are no new, updated, or deprecated options in Newton for OpenStack Object Storage.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Orchestration service

Orchestration API configuration

Configuration options

The following options allow configuration of the APIs that Orchestration supports. Currently this includes compatibility APIs for CloudFormation and CloudWatch and a native API.
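As a minimal illustration, a heat.conf fragment that sets a few of the options described below to their documented defaults:

```ini
[DEFAULT]
# Timeout in seconds for stack actions (create or update).
stack_action_timeout = 3600

[heat_api]
# Address and port the native Orchestration API listens on.
bind_host = 0.0.0.0
bind_port = 8004
# 0 starts one worker per CPU core on the server.
workers = 0
```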

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
action_retry_limit = 5 (Integer) Number of times to retry to bring a resource to a non-error state. Set to 0 to disable retries.
enable_stack_abandon = False (Boolean) Enable the preview Stack Abandon feature.
enable_stack_adopt = False (Boolean) Enable the preview Stack Adopt feature.
encrypt_parameters_and_properties = False (Boolean) Encrypt template parameters that were marked as hidden and also all the resource properties before storing them in database.
heat_metadata_server_url = None (String) URL of the Heat metadata server. NOTE: Setting this is only needed if you require instances to use a different endpoint than in the keystone catalog
heat_stack_user_role = heat_stack_user (String) Keystone role for heat template-defined users.
heat_waitcondition_server_url = None (String) URL of the Heat waitcondition server.
heat_watch_server_url = (String) URL of the Heat CloudWatch server.
hidden_stack_tags = data-processing-cluster (List) Stacks containing these tag names will be hidden. Multiple tags should be given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too).
max_json_body_size = 1048576 (Integer) Maximum raw byte size of JSON request body. Should be larger than max_template_size.
num_engine_workers = None (Integer) Number of heat-engine processes to fork and run. Will default to either 4 or the number of CPUs on the host, whichever is greater.
observe_on_update = False (Boolean) On update, enables heat to collect existing resource properties from reality and converge to updated template.
stack_action_timeout = 3600 (Integer) Timeout in seconds for stack action (ie. create or update).
stack_domain_admin = None (String) Keystone username, a user with roles sufficient to manage users and projects in the stack_user_domain.
stack_domain_admin_password = None (String) Keystone password for stack_domain_admin user.
stack_scheduler_hints = False (Boolean) When this feature is enabled, scheduler hints identifying the heat stack context of a server or volume resource are passed to the configured schedulers in nova and cinder, for creates done using heat resource types OS::Cinder::Volume, OS::Nova::Server, and AWS::EC2::Instance. heat_root_stack_id will be set to the id of the root stack of the resource, heat_stack_id will be set to the id of the resource’s parent stack, heat_stack_name will be set to the name of the resource’s parent stack, heat_path_in_stack will be set to a list of comma delimited strings of stackresourcename and stackname with list[0] being ‘rootstackname’, heat_resource_name will be set to the resource’s name, and heat_resource_uuid will be set to the resource’s orchestration id.
stack_user_domain_id = None (String) Keystone domain ID which contains heat template-defined users. If this option is set, stack_user_domain_name option will be ignored.
stack_user_domain_name = None (String) Keystone domain name which contains heat template-defined users. If stack_user_domain_id option is set, this option is ignored.
stale_token_duration = 30 (Integer) Gap, in seconds, to determine whether the given token is about to expire.
trusts_delegated_roles = (List) Subset of trustor roles to be delegated to heat. If left unset, all roles of a user will be delegated to heat when creating a stack.
[auth_password]  
allowed_auth_uris = (List) Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified.
multi_cloud = False (Boolean) Allow orchestration of multiple clouds.
[ec2authtoken]  
allowed_auth_uris = (List) Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified.
auth_uri = None (String) Authentication Endpoint URI.
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
insecure = False (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
multi_cloud = False (Boolean) Allow orchestration of multiple clouds.
[eventlet_opts]  
client_socket_timeout = 900 (Integer) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ means wait forever.
wsgi_keep_alive = True (Boolean) If False, closes the client socket connection explicitly.
[heat_api]  
backlog = 4096 (Integer) Number of backlog requests to configure the socket with.
bind_host = 0.0.0.0 (IP) Address to bind the server. Useful when selecting a particular network interface.
bind_port = 8004 (Port number) The port on which the server will listen.
cert_file = None (String) Location of the SSL certificate file to use for SSL mode.
key_file = None (String) Location of the SSL key file to use for enabling SSL mode.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
tcp_keepidle = 600 (Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes.
workers = 0 (Integer) Number of workers for the Heat service. The default value 0 means the service will start a number of workers equal to the number of cores on the server.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
[oslo_versionedobjects]  
fatal_exception_format_errors = False (Boolean) Make exception message format errors fatal
[paste_deploy]  
api_paste_config = api-paste.ini (String) The API paste config file to use.
flavor = None (String) The flavor to use.
Description of Cloudformation-compatible API configuration options
Configuration option = Default value Description
[DEFAULT]  
instance_connection_https_validate_certificates = 1 (String) Instance connection to CFN/CW API validate certs if SSL is used.
instance_connection_is_secure = 0 (String) Instance connection to CFN/CW API via https.
[heat_api_cfn]  
backlog = 4096 (Integer) Number of backlog requests to configure the socket with.
bind_host = 0.0.0.0 (IP) Address to bind the server. Useful when selecting a particular network interface.
bind_port = 8000 (Port number) The port on which the server will listen.
cert_file = None (String) Location of the SSL certificate file to use for SSL mode.
key_file = None (String) Location of the SSL key file to use for enabling SSL mode.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
tcp_keepidle = 600 (Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes.
workers = 1 (Integer) Number of workers for Heat service.
Description of CloudWatch API configuration options
Configuration option = Default value Description
[DEFAULT]  
enable_cloud_watch_lite = False (Boolean) Enable the legacy OS::Heat::CWLiteAlarm resource.
heat_watch_server_url = (String) URL of the Heat CloudWatch server.
[heat_api_cloudwatch]  
backlog = 4096 (Integer) Number of backlog requests to configure the socket with.
bind_host = 0.0.0.0 (IP) Address to bind the server. Useful when selecting a particular network interface.
bind_port = 8003 (Port number) The port on which the server will listen.
cert_file = None (String) Location of the SSL certificate file to use for SSL mode.
key_file = None (String) Location of the SSL key file to use for enabling SSL mode.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
tcp_keepidle = 600 (Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes.
workers = 1 (Integer) Number of workers for Heat service.
Description of metadata API configuration options
Configuration option = Default value Description
[DEFAULT]  
heat_metadata_server_url = None (String) URL of the Heat metadata server. NOTE: Setting this is only needed if you require instances to use a different endpoint than the one in the keystone catalog.
Description of waitcondition API configuration options
Configuration option = Default value Description
[DEFAULT]  
heat_waitcondition_server_url = None (String) URL of the Heat waitcondition server.
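The instance-facing endpoint URLs above are typically set together. The fragment below is a sketch of common values; the `controller` hostname is a placeholder for your API endpoint.

```ini
# heat.conf -- example instance-facing endpoint URLs.
# "controller" is a placeholder hostname.
[DEFAULT]
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
heat_watch_server_url = http://controller:8003
```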

Configure clients

The following options allow configuration of the clients that Orchestration uses to talk to other services.

Description of clients configuration options
Configuration option = Default value Description
[DEFAULT]  
region_name_for_services = None (String) Default region name used to get services endpoints.
[clients]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = publicURL (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = False (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
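The `[clients]` section supplies defaults for every service client, while a `[clients_<service>]` section overrides them for one service. The following sketch shows this layering; the CA bundle path is a placeholder.

```ini
# heat.conf -- illustrative client TLS defaults with a per-service
# override. Paths are placeholders.
[clients]
endpoint_type = publicURL
ca_file = /etc/ssl/certs/ca-bundle.crt

[clients_nova]
# Override for the nova client only: keep the TLS defaults above,
# but enable client debug logging.
http_log_debug = True
```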
Description of client backends configuration options
Configuration option = Default value Description
[DEFAULT]  
cloud_backend = heat.engine.clients.OpenStackClients (String) Fully qualified class name to use as a client backend.
Description of aodh clients configuration options
Configuration option = Default value Description
[clients_aodh]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of barbican clients configuration options
Configuration option = Default value Description
[clients_barbican]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of ceilometer clients configuration options
Configuration option = Default value Description
[clients_ceilometer]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of cinder clients configuration options
Configuration option = Default value Description
[clients_cinder]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
http_log_debug = False (Boolean) Allow client’s debug log output.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of designate clients configuration options
Configuration option = Default value Description
[clients_designate]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of glance clients configuration options
Configuration option = Default value Description
[clients_glance]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of heat clients configuration options
Configuration option = Default value Description
[clients_heat]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
url = (String) Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s.
Description of keystone clients configuration options
Configuration option = Default value Description
[clients_keystone]  
auth_uri = (String) Unversioned keystone url in format like http://0.0.0.0:5000.
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of magnum clients configuration options
Configuration option = Default value Description
[clients_magnum]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of manila clients configuration options
Configuration option = Default value Description
[clients_manila]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of mistral clients configuration options
Configuration option = Default value Description
[clients_mistral]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of monasca clients configuration options
Configuration option = Default value Description
[clients_monasca]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of neutron clients configuration options
Configuration option = Default value Description
[clients_neutron]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of nova clients configuration options
Configuration option = Default value Description
[clients_nova]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
http_log_debug = False (Boolean) Allow client’s debug log output.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of sahara clients configuration options
Configuration option = Default value Description
[clients_sahara]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of senlin clients configuration options
Configuration option = Default value Description
[clients_senlin]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of swift clients configuration options
Configuration option = Default value Description
[clients_swift]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of trove clients configuration options
Configuration option = Default value Description
[clients_trove]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.
Description of zaqar clients configuration options
Configuration option = Default value Description
[clients_zaqar]  
ca_file = None (String) Optional CA cert file to use in SSL connections.
cert_file = None (String) Optional PEM-formatted certificate chain file.
endpoint_type = None (String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
insecure = None (Boolean) If set, then the server’s certificate will not be verified.
key_file = None (String) Optional PEM-formatted file that contains the private key.

Additional configuration options for Orchestration service

These options can also be set in the heat.conf file.

Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
client_retry_limit = 2 (Integer) Number of times to retry when a client encounters an expected intermittent error. Set to 0 to disable retries.
convergence_engine = True (Boolean) Enables the engine with the convergence architecture. All stacks created while this option is enabled use the convergence engine.
default_deployment_signal_transport = CFN_SIGNAL (String) Template default for how the server should signal to heat with the deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT (requires object-store endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar queue to be signaled using the provided keystone credentials.
default_software_config_transport = POLL_SERVER_CFN (String) Template default for how the server should receive the metadata required for software configuration. POLL_SERVER_CFN will allow calls to the cfn API action DescribeStackResource authenticated with the provided keypair (requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the provided keystone credentials (requires keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL will create and populate a Swift TempURL with metadata for polling (requires object-store endpoint which supports TempURL). ZAQAR_MESSAGE will create a dedicated zaqar queue and post the metadata for polling.
deferred_auth_method = trusts (String) Select deferred auth method, stored password or trusts.
environment_dir = /etc/heat/environment.d (String) The directory to search for environment files.
error_wait_time = 240 (Integer) The amount of time in seconds after an error has occurred that tasks may continue to run before being cancelled.
event_purge_batch_size = 10 (Integer) Controls how many events will be pruned whenever a stack’s events exceed max_events_per_stack. Set this lower to keep more events at the expense of more frequent purges.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
host = localhost (String) Name of the engine node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address.
keystone_backend = heat.engine.clients.os.keystone.heat_keystoneclient.KsClientWrapper (String) Fully qualified class name to use as a keystone backend.
max_interface_check_attempts = 10 (Integer) Number of times to check whether an interface has been attached or detached.
periodic_interval = 60 (Integer) Seconds between running periodic tasks.
plugin_dirs = /usr/lib64/heat, /usr/lib/heat, /usr/local/lib/heat, /usr/local/lib64/heat (List) List of directories to search for plug-ins.
reauthentication_auth_method = (String) Allow reauthentication on token expiry, such that long-running tasks may complete. Note this defeats the expiry of any provided user tokens.
template_dir = /etc/heat/templates (String) The directory to search for template files.
[constraint_validation_cache]  
caching = True (Boolean) Toggle to enable/disable caching when Orchestration Engine validates property constraints of stack. During property validation with constraints, Orchestration Engine caches requests to other OpenStack services. Please note that the global toggle for oslo.cache (enabled = True in the [cache] group) must be enabled to use this feature.
expiration_time = 60 (Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of validation constraints.
[resource_finder_cache]  
caching = True (Boolean) Toggle to enable/disable caching when Orchestration Engine looks for other OpenStack service resources using name or id. Please note that the global toggle for oslo.cache (enabled = True in the [cache] group) must be enabled to use this feature.
expiration_time = 3600 (Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of OpenStack service finder functions.
[revision]  
heat_revision = unknown (String) Heat build revision. If you would prefer to manage your build revision separately, you can move this section to a different file and add it as another config option.
[service_extension_cache]  
caching = True (Boolean) Toggle to enable/disable caching when Orchestration Engine retrieves extensions from other OpenStack services. Please note that the global toggle for oslo.cache (enabled = True in the [cache] group) must be enabled to use this feature.
expiration_time = 3600 (Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of service extensions.
[volumes]  
backups_enabled = True (Boolean) Indicate if cinder-backup service is enabled. This is a temporary workaround until cinder-backup service becomes discoverable, see LP#1334856.
[yaql]  
limit_iterators = 200 (Integer) The maximum number of elements a collection expression can take for its evaluation.
memory_quota = 10000 (Integer) The maximum size of memory in bytes that an expression can take for its evaluation.
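As a sketch of how the common engine and caching options above combine in practice, consider the following heat.conf fragment. The hostname, plugin paths, and TTLs are illustrative examples, not tuned recommendations; note that the per-feature cache sections take effect only when the global oslo.cache toggle is on.

```ini
# heat.conf -- illustrative engine and caching settings.
[DEFAULT]
# Opaque engine identifier; need not be a resolvable hostname.
host = heat-engine-01
periodic_interval = 60
plugin_dirs = /usr/lib/heat,/usr/local/lib/heat

[cache]
# Global oslo.cache toggle; required for the per-feature caches below.
enabled = True

[resource_finder_cache]
caching = True
expiration_time = 3600
```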
Description of crypt configuration options
Configuration option = Default value Description
[DEFAULT]  
auth_encryption_key = notgood but just long enough i t (String) Key used to encrypt authentication info in the database. Length of this key must be 32 characters.
Description of load balancer configuration options
Configuration option = Default value Description
[DEFAULT]  
loadbalancer_template = None (String) Custom template for the built-in loadbalancer nested stack.
Description of quota configuration options
Configuration option = Default value Description
[DEFAULT]  
max_events_per_stack = 1000 (Integer) Maximum events that will be available per stack. Older events will be deleted when this is reached. Set to 0 for unlimited events per stack.
max_nested_stack_depth = 5 (Integer) Maximum depth allowed when using nested stacks.
max_resources_per_stack = 1000 (Integer) Maximum resources allowed per top-level stack. -1 stands for unlimited.
max_server_name_length = 53 (Integer) Maximum length of a server name to be used in nova.
max_stacks_per_tenant = 100 (Integer) Maximum number of stacks any one tenant may have active at one time.
max_template_size = 524288 (Integer) Maximum raw byte size of any template.
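The quota options above can be tightened for smaller deployments. The fragment below is a hypothetical example; the specific limits are placeholders chosen for illustration.

```ini
# heat.conf -- hypothetical tightened per-tenant quotas.
[DEFAULT]
max_stacks_per_tenant = 50
max_resources_per_stack = 500
max_events_per_stack = 500
max_nested_stack_depth = 3
```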
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
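Several of the `[matchmaker_redis]` options above are deprecated in favor of `[DEFAULT]/transport_url`. The sketch below shows the replacement form under that assumption; the password and address are placeholders, and the exact URL scheme depends on the messaging driver in use.

```ini
# heat.conf -- replacing the deprecated Redis host/port/password
# options with a transport URL. Credentials and host are placeholders.
[DEFAULT]
transport_url = redis://:REDIS_PASS@127.0.0.1:6379/
```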
Description of testing configuration options
Configuration option = Default value Description
[profiler]  
connection_string = messaging://

(String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging.

Examples of possible values:

  • messaging://: use oslo_messaging driver for sending notifications.
enabled = False

(Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature).

Possible values:

  • True: Enables the feature
  • False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project's part of the trace will be empty.
hmac_keys = SECRET_KEY

(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project.

Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources.

trace_sqlalchemy = False

(Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced).

Possible values:

  • True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed to see how much time was spent on it.
  • False: Disables SQL requests profiling. Time spent is shown only at the higher level of operations. Single SQL queries cannot be analyzed this way.
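Since both the `enabled` flag and `hmac_keys` must be set for profiling to work, a complete `[profiler]` section might look like the following sketch. `SECRET_KEY` is a placeholder that must match across the OpenStack services you want in the same trace.

```ini
# heat.conf -- enabling osprofiler on this node.
# SECRET_KEY is a placeholder shared across services.
[profiler]
enabled = True
hmac_keys = SECRET_KEY
trace_sqlalchemy = True
connection_string = messaging://
```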
Description of trustee configuration options
Configuration option = Default value Description
[trustee]  
auth_section = None (Unknown) Config section from which to load plugin-specific options.
auth_type = None (Unknown) Authentication type to load.

Orchestration log files

The corresponding log file of each Orchestration service is stored in the /var/log/heat/ directory of the host on which each service runs.

Log files used by Orchestration services
Log filename Service that logs to the file
heat-api.log Orchestration service API Service
heat-engine.log Orchestration service Engine Service
heat-manage.log Orchestration service events

New, updated, and deprecated options in Newton for Orchestration

New options
Option = default value (Type) Help string
[DEFAULT] max_server_name_length = 53 (IntOpt) Maximum length of a server name to be used in nova.
[DEFAULT] template_dir = /etc/heat/templates (StrOpt) The directory to search for template files.
[clients_aodh] ca_file = None (StrOpt) Optional CA cert file to use in SSL connections.
[clients_aodh] cert_file = None (StrOpt) Optional PEM-formatted certificate chain file.
[clients_aodh] endpoint_type = None (StrOpt) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
[clients_aodh] insecure = None (BoolOpt) If set, then the server’s certificate will not be verified.
[clients_aodh] key_file = None (StrOpt) Optional PEM-formatted file that contains the private key.
[clients_monasca] ca_file = None (StrOpt) Optional CA cert file to use in SSL connections.
[clients_monasca] cert_file = None (StrOpt) Optional PEM-formatted certificate chain file.
[clients_monasca] endpoint_type = None (StrOpt) Type of endpoint in Identity service catalog to use for communication with the OpenStack service.
[clients_monasca] insecure = None (BoolOpt) If set, then the server’s certificate will not be verified.
[clients_monasca] key_file = None (StrOpt) Optional PEM-formatted file that contains the private key.
[trustee] auth_type = None (Opt) Authentication type to load
[volumes] backups_enabled = True (BoolOpt) Indicate if cinder-backup service is enabled. This is a temporary workaround until cinder-backup service becomes discoverable, see LP#1334856.
[yaql] limit_iterators = 200 (IntOpt) The maximum number of elements a collection expression can take for its evaluation.
[yaql] memory_quota = 10000 (IntOpt) The maximum size of memory in bytes that an expression can take for its evaluation.
New default values
Option Previous default value New default value
[DEFAULT] convergence_engine False True
[DEFAULT] keystone_backend heat.common.heat_keystoneclient.KeystoneClientV3 heat.engine.clients.os.keystone.heat_keystoneclient.KsClientWrapper
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None

The Orchestration service is designed to manage the lifecycle of infrastructure and applications within OpenStack clouds. Its various agents and services are configured in the /etc/heat/heat.conf file.

To install Orchestration, see the Newton Installation Tutorials and Guides for your distribution.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Shared File Systems service

Introduction to the Shared File Systems service

The Shared File Systems service provides shared file systems that Compute instances can consume.

The Shared File Systems service provides:

manila-api
A WSGI app that authenticates and routes requests throughout the Shared File Systems service. It supports the OpenStack APIs.
manila-data
A standalone service that receives requests and processes data operations that can take a long time, such as copying, share migration, or backup.
manila-scheduler
Schedules and routes requests to the appropriate share service. The scheduler uses configurable filters and weighers to route requests. The Filter Scheduler is the default and enables filters on things like Capacity, Availability Zone, Share Types, and Capabilities as well as custom filters.
manila-share
Manages back-end devices that provide shared file systems. A manila-share service can run in one of two modes, with or without handling of share servers. Share servers export file shares via share networks. When share servers are not used, the networking requirements are handled outside of Manila.

The Shared File Systems service contains the following components:

Back-end storage devices
The Shared File Systems service requires some form of back-end shared file system provider that the service is built on. The reference implementation uses the Block Storage service (Cinder) and a service VM to provide shares. Additional drivers are used to access shared file systems from a variety of vendor solutions.
Users and tenants (projects)

The Shared File Systems service can be used by many different cloud computing consumers or customers (tenants on a shared system), using role-based access assignments. Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role unless they are restricted to administrators, but this can be configured by the system administrator in the appropriate policy.json file that maintains the rules. A user’s access to manage particular shares is limited by tenant. Guest access to mount and use shares is secured by IP and/or user access rules. Quotas used to control resource consumption across available hardware resources are per tenant.

For tenants, quota controls are available to limit:

  • The number of shares that can be created.
  • The number of gigabytes that can be provisioned for shares.
  • The number of share snapshots that can be created.
  • The number of gigabytes that can be provisioned for share snapshots.
  • The number of share networks that can be created.

You can revise the default quota values with the Shared File Systems CLI, so the limits placed by quotas are editable by admin users.

Shares, snapshots, and share networks

The basic resources offered by the Shared File Systems service are shares, snapshots and share networks:

Shares
A share is a unit of storage with a protocol, a size, and an access list. Shares are the basic primitive provided by Manila. All shares exist on a backend. Some shares are associated with share networks and share servers. The main protocols supported are NFS and CIFS, but other protocols are supported as well.
Snapshots
A snapshot is a point-in-time copy of a share. Snapshots can be used only to create new shares (containing the snapshotted data). Shares cannot be deleted until all associated snapshots are deleted.
Share networks
A share network is a tenant-defined object that informs Manila about the security and network configuration for a group of shares. Share networks are only relevant for backends that manage share servers. A share network contains a security service and network/subnet.

Shared File Systems API configuration

Configuration options

The following options allow configuration of the APIs that Shared File Systems service supports.

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
admin_network_config_group = None (String) If the share driver requires an admin network to be set up for the share, define the network plugin config options in a separate config group and set its name here. Used only when the 'driver_handles_share_servers' option is set to 'True'.
admin_network_id = None (String) ID of neutron network used to communicate with admin network, to create additional admin export locations on.
admin_subnet_id = None (String) ID of neutron subnet used to communicate with admin network, to create additional admin export locations on. Related to ‘admin_network_id’.
api_paste_config = api-paste.ini (String) File name for the paste.deploy config for manila-api.
api_rate_limit = True (Boolean) Whether to rate limit the API.
db_backend = sqlalchemy (String) The backend to use for database.
max_header_line = 16384 (Integer) Maximum line size of message headers to be accepted. Option max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
osapi_max_limit = 1000 (Integer) The maximum number of items returned in a single response from a collection resource.
osapi_share_base_URL = None (String) Base URL to be presented to users in links to the Share API
osapi_share_ext_list = (List) Specify list of extensions to load when using osapi_share_extension option with manila.api.contrib.select_extensions.
osapi_share_extension = manila.api.contrib.standard_extensions (List) The osapi share extensions to load.
osapi_share_listen = :: (String) IP address for OpenStack Share API to listen on.
osapi_share_listen_port = 8786 (Port number) Port for OpenStack Share API to listen on.
osapi_share_workers = 1 (Integer) Number of workers for OpenStack Share API service.
share_api_class = manila.share.api.API (String) The full class name of the share API class to use.
volume_api_class = manila.volume.cinder.API (String) The full class name of the Volume API class to use.
volume_name_template = manila-share-%s (String) Volume name template.
volume_snapshot_name_template = manila-snapshot-%s (String) Volume snapshot name template.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
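
As an illustration of how these options fit together, a minimal API section of manila.conf might look like the following sketch. The port and header-size values are the documented defaults; the worker count is an assumption for a small multi-core host.

```ini
[DEFAULT]
# Listen on all addresses on the default Share API port.
osapi_share_listen = ::
osapi_share_listen_port = 8786
# Example value: scale API workers to the host's CPU count.
osapi_share_workers = 4
# Documented default; raise it if large Keystone v3 tokens are rejected.
max_header_line = 16384

[oslo_middleware]
# Enable header parsing when manila-api sits behind a proxy.
enable_proxy_headers_parsing = True
```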

Share drivers

Generic approach for share provisioning

The Shared File Systems service can be configured to use Compute VMs and Block Storage service volumes. There are two modules that handle them in the Shared File Systems service:

  • The service_instance module creates VMs in Compute with a predefined image called service image. This module can be used by any driver for provisioning of service VMs to be able to separate share resources among tenants.
  • The generic module operates with Block Storage service volumes and VMs created by the service_instance module, then creates shared filesystems based on volumes attached to VMs.
Network configurations

Each driver can handle networking in its own way, see: https://wiki.openstack.org/wiki/manila/Networking.

One of the two possible configurations can be chosen for share provisioning using the service_instance module:

  • Service VM has one network interface from a network that is connected to a public router. For successful creation of a share, the user network should be connected to a public router, too.
  • Service VM has two network interfaces, the first one is connected to the service network, the second one is connected directly to the user’s network.
Requirements for service image
  • Linux based distro
  • NFS server
  • Samba server >= 3.2.0 that can be configured with data stored in the registry
  • SSH server
  • Two network interfaces configured to DHCP (see network approaches)
  • exportfs and net conf libraries used for share actions
  • The following files will be used, so if their paths differ one needs to create at least symlinks for them:
    • /etc/exports: permanent file with NFS exports.
    • /var/lib/nfs/etab: temporary file with NFS exports used by exportfs.
    • /etc/fstab: permanent file with mounted filesystems.
    • /etc/mtab: temporary file with mounted filesystems used by mount.
Supported shared filesystems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported for NFS and CIFS.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

  • Extend a share.

  • Shrink a share.

Known restrictions
  • One of nova’s configurations only allows 26 shares per server. This limit comes from the maximum number of virtual PCI interfaces that are used for block device attaching. There are 28 virtual PCI interfaces in this configuration; two of them are used for server needs and the other 26 are used for attaching the block devices that back shares.
Driver options

The following table contains the configuration options specific to this driver.

Description of Generic Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
cinder_admin_auth_url = http://localhost:5000/v2.0 (String) DEPRECATED: Identity service URL. This option isn’t used any longer. Please use [cinder] auth_url instead.
cinder_admin_password = None (String) DEPRECATED: Cinder admin password. This option isn’t used any longer. Please use [cinder] password instead.
cinder_admin_tenant_name = service (String) DEPRECATED: Cinder admin tenant name. This option isn’t used any longer. Please use [cinder] tenant_name instead.
cinder_admin_username = cinder (String) DEPRECATED: Cinder admin username. This option isn’t used any longer. Please use [cinder] username instead.
cinder_catalog_info = volume:cinder:publicURL (String) DEPRECATED: Info to match when looking for cinder in the service catalog. Format is separated values of the form: <service_type>:<service_name>:<endpoint_type> This option isn’t used any longer.
cinder_volume_type = None (String) Name or id of cinder volume type which will be used for all volumes created by driver.
connect_share_server_to_tenant_network = False (Boolean) Attach share server directly to share network. Used only with Neutron and if driver_handles_share_servers=True.
container_volume_group = manila_docker_volumes (String) LVM volume group to use for volumes. This volume group must be created by the cloud administrator independently from manila operations.
driver_handles_share_servers = None (Boolean) There are two possible approaches for share drivers in Manila: a share driver either handles share servers or it does not. Drivers can support both or only one of these approaches. Set this option to True if the share driver is able to handle share servers and that mode is desired; otherwise set it to False. It is set to None by default to make this choice intentional.
goodness_function = None (String) String representation for an equation that will be used to determine the goodness of a host.
interface_driver = manila.network.linux.interface.OVSInterfaceDriver (String) Vif driver. Used only with Neutron and if driver_handles_share_servers=True.
manila_service_keypair_name = manila-service (String) Keypair name that will be created and used for service instances. Only used if driver_handles_share_servers=True.
max_time_to_attach = 120 (Integer) Maximum time to wait for attaching cinder volume.
max_time_to_build_instance = 300 (Integer) Maximum time in seconds to wait for creating service instance.
max_time_to_create_volume = 180 (Integer) Maximum time to wait for creating cinder volume.
max_time_to_extend_volume = 180 (Integer) Maximum time to wait for extending cinder volume.
ovs_integration_bridge = br-int (String) Name of Open vSwitch bridge to use.
path_to_private_key = None (String) Path to host’s private key.
path_to_public_key = ~/.ssh/id_rsa.pub (String) Path to hosts public key. Only used if driver_handles_share_servers=True.
protocol_access_mapping = {'ip': ['nfs'], 'user': ['cifs']} (Dict) Protocol access mapping for this backend. Should be a dictionary comprised of {‘access_type1’: [‘share_proto1’, ‘share_proto2’], ‘access_type2’: [‘share_proto2’, ‘share_proto3’]}.
service_image_name = manila-service-image (String) Name of image in Glance, that will be used for service instance creation. Only used if driver_handles_share_servers=True.
service_instance_flavor_id = 100 (Integer) ID of flavor, that will be used for service instance creation. Only used if driver_handles_share_servers=True.
service_instance_name_or_id = None (String) Name or ID of service instance in Nova to use for share exports. Used only when share servers handling is disabled.
service_instance_name_template = manila_service_instance_%s (String) Name of service instance. Only used if driver_handles_share_servers=True.
service_instance_network_helper_type = neutron (String) Allowed values are [‘nova’, ‘neutron’]. Only used if driver_handles_share_servers=True.
service_instance_password = None (String) Password for service instance user.
service_instance_security_group = manila-service (String) Security group name, that will be used for service instance creation. Only used if driver_handles_share_servers=True.
service_instance_smb_config_path = $share_mount_path/smb.conf (String) Path to SMB config in service instance.
service_instance_user = None (String) User in service instance that will be used for authentication.
service_net_name_or_ip = None (String) Can be either name of network that is used by service instance within Nova to get IP address or IP address itself for managing shares there. Used only when share servers handling is disabled.
service_network_cidr = 10.254.0.0/16 (String) CIDR of manila service network. Used only with Neutron and if driver_handles_share_servers=True.
service_network_division_mask = 28 (Integer) This mask is used for dividing service network into subnets, IP capacity of subnet with this mask directly defines possible amount of created service VMs per tenant’s subnet. Used only with Neutron and if driver_handles_share_servers=True.
service_network_name = manila_service_network (String) Name of manila service network. Used only with Neutron. Only used if driver_handles_share_servers=True.
share_helpers = CIFS=manila.share.drivers.helpers.CIFSHelperIPAccess, NFS=manila.share.drivers.helpers.NFSHelper (List) Specify list of share export helpers.
share_mount_path = /shares (String) Parent path in service instance where shares will be mounted.
share_mount_template = mount -vt %(proto)s %(options)s %(export)s %(path)s (String) The template for mounting shares for this backend. Must specify the executable with all necessary parameters for the protocol supported. ‘proto’ template element may not be required if included in the command. ‘export’ and ‘path’ template elements are required. It is advisable to separate different commands per backend.
share_unmount_template = umount -v %(path)s (String) The template for unmounting shares for this backend. Must specify the executable with all necessary parameters for the protocol supported. ‘path’ template element is required. It is advisable to separate different commands per backend.
share_volume_fstype = ext4 (String) Filesystem type of the share volume.
tenant_net_name_or_ip = None (String) Can be either name of network that is used by service instance within Nova to get IP address or IP address itself for exporting shares. Used only when share servers handling is disabled.
volume_name_template = manila-share-%s (String) Volume name template.
volume_snapshot_name_template = manila-snapshot-%s (String) Volume snapshot name template.
[cinder]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to cinder.
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
ca_certificates_file = None (String) Location of CA certificates file to use for cinder client requests.
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
cross_az_attach = True (Boolean) Allow attaching between instances and volumes in different availability zones.
http_retries = 3 (Integer) Number of cinderclient retries on failed HTTP calls.
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
timeout = None (Integer) Timeout value for http requests
[neutron]  
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
timeout = None (Integer) Timeout value for http requests
[nova]  
api_insecure = False (Boolean) Allow to perform insecure SSL requests to nova.
api_microversion = 2.10 (String) Version of Nova API to be used.
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
ca_certificates_file = None (String) Location of CA certificates file to use for nova client requests.
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) Verify HTTPS connections.
keyfile = None (String) PEM encoded client certificate key file
timeout = None (Integer) Timeout value for http requests
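
The interplay between service_network_cidr and service_network_division_mask can be checked with Python’s standard ipaddress module. This sketch uses the documented defaults (10.254.0.0/16 divided into /28 subnets); it only illustrates the arithmetic and is not manila code.

```python
import ipaddress

# Documented defaults: the service network and the division mask.
service_network = ipaddress.ip_network("10.254.0.0/16")
division_mask = 28

# Dividing a /16 into /28 subnets yields 2**(28-16) = 4096 subnets,
# each holding 2**(32-28) = 16 addresses. The IP capacity of a /28
# bounds the number of service VMs per tenant subnet.
subnets = list(service_network.subnets(new_prefix=division_mask))
print(len(subnets))              # 4096
print(subnets[0].num_addresses)  # 16
```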
CephFS Native driver

The CephFS Native driver enables the Shared File Systems service to export shared file systems to guests using the Ceph network protocol. Guests require a Ceph client in order to mount the file system.

Access is controlled via Ceph’s cephx authentication system. When a user requests share access for an ID, Ceph creates a corresponding Ceph auth ID and a secret key, if they do not already exist, and authorizes the ID to access the share. The client can then mount the share using the ID and the secret key.

To learn more about configuring Ceph clients to access the shares created using this driver, please see the Ceph documentation ( http://docs.ceph.com/docs/master/cephfs/). If you choose to use the kernel client rather than the FUSE client, the share size limits set in the Shared File Systems service may not be obeyed.

Supported shared file systems and operations

The driver supports CephFS shares.

The following operations are supported with CephFS back end:

  • Create a share.

  • Delete a share.

  • Allow share access.

    • read-only access level is supported.
    • read-write access level is supported.

    Note the following limitation for CephFS shares:

    • Only cephx access type is supported.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a consistency group (CG).

  • Delete a CG.

  • Create a CG snapshot.

  • Delete a CG snapshot.

Requirements
  • Mitaka or later versions of manila.
  • Jewel or later versions of Ceph.
  • A Ceph cluster with a file system configured ( http://docs.ceph.com/docs/master/cephfs/createfs/)
  • ceph-common package installed on the servers running the manila-share service.
  • Ceph client installed in the guest, preferably the FUSE based client, ceph-fuse.
  • Network connectivity between your Ceph cluster’s public network and the servers running the manila-share service.
  • Network connectivity between your Ceph cluster’s public network and guests.

Important

A manila share backed by CephFS is only as good as the underlying file system. Take care when configuring your Ceph cluster, and consult the latest guidance on the use of CephFS in the Ceph documentation ( http://docs.ceph.com/docs/master/cephfs/).

Authorize the driver to communicate with Ceph

Run the following commands to create a Ceph identity for the Shared File Systems service to use:

read -d '' MON_CAPS << EOF
allow r,
allow command "auth del",
allow command "auth caps",
allow command "auth get",
allow command "auth get-or-create"
EOF

ceph auth get-or-create client.manila -o manila.keyring \
mds 'allow *' \
osd 'allow rw' \
mon "$MON_CAPS"

manila.keyring, along with your ceph.conf file, then needs to be placed on the server running the manila-share service.

Enable snapshots in Ceph if you want to use them in the Shared File Systems service:

ceph mds set allow_new_snaps true --yes-i-really-mean-it

On the server running the manila-share service, you can place the ceph.conf and manila.keyring files in the /etc/ceph directory. Set the same owner for the manila-share process and the manila.keyring file. Add the following section to the ceph.conf file.

[client.manila]
client mount uid = 0
client mount gid = 0
log file = /opt/stack/logs/ceph-client.manila.log
admin socket = /opt/stack/status/stack/ceph-$name.$pid.asok
keyring = /etc/ceph/manila.keyring

It is advisable to modify the Ceph client’s admin socket file and log file locations so that they are co-located with the Shared File Systems services’ pid files and log files respectively.

Configure CephFS back end in manila.conf
  1. Add CephFS to enabled_share_protocols (enforced at the Shared File Systems service’s API layer). In this example we leave NFS and CIFS enabled, although you can remove these if you only use CephFS:

    enabled_share_protocols = NFS,CIFS,CEPHFS
    
  2. Refer to the following table for the list of all the cephfs_native driver-specific configuration options.

    Description of CephFS Driver configuration options
    Configuration option = Default value Description
    [DEFAULT]  
    cephfs_auth_id = manila (String) The name of the ceph auth identity to use.
    cephfs_cluster_name = None (String) The name of the cluster in use, if it is not the default (‘ceph’).
    cephfs_conf_path = (String) Fully qualified path to the ceph.conf file.
    cephfs_enable_snapshots = False (Boolean) Whether to enable snapshots in this driver.

    Create a section to define a CephFS back end:

    [cephfs1]
    driver_handles_share_servers = False
    share_backend_name = CEPHFS1
    share_driver = manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver
    cephfs_conf_path = /etc/ceph/ceph.conf
    cephfs_auth_id = manila
    cephfs_cluster_name = ceph
    cephfs_enable_snapshots = True
    

    Set cephfs_enable_snapshots to True in the section to let the driver perform snapshot-related operations. Also set driver_handles_share_servers to False as the driver does not manage the lifecycle of share servers.

  3. Edit enabled_share_backends to point to the driver’s back-end section using the section name. In this example we are also including another back end (generic1), you would include whatever other back ends you have configured.

    enabled_share_backends = generic1,cephfs1
    
Creating shares

The default share type may have driver_handles_share_servers set to True. Configure a share type suitable for CephFS:

manila type-create cephfstype false

manila type-key cephfstype set share_backend_name='CEPHFS1'

Then create a share:

manila create --share-type cephfstype --name cephshare1 cephfs 1

Note the export location of the share:

manila share-export-location-list cephshare1

The export location of the share contains the Ceph monitor (mon) addresses and ports, and the path to be mounted. It is of the form: {mon ip addr:port}[,{mon ip addr:port}]:{path to be mounted}
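
As a sketch of the documented format, the following hypothetical helper (not part of manila) splits an export location into its monitor addresses and mount path; the addresses and path used here are examples.

```python
# Hypothetical helper: split "{mon ip addr:port}[,{mon ip addr:port}]:{path}"
# into its monitor list and path. The path always begins with "/", so we
# split on the last occurrence of ":/".
def parse_export_location(export_location):
    mon_addresses, _, path = export_location.rpartition(":/")
    return mon_addresses.split(","), "/" + path

mons, path = parse_export_location(
    "192.168.1.7:6789,192.168.1.8:6789:/volumes/_nogroup/share1")
print(mons)  # ['192.168.1.7:6789', '192.168.1.8:6789']
print(path)  # /volumes/_nogroup/share1
```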

Allowing access to shares

Allow Ceph auth ID alice access to the share using cephx access type.

manila access-allow cephshare1 cephx alice

Note the access status and the secret access key of alice.

manila access-list cephshare1
Mounting shares using FUSE client

Using the secret key of the authorized ID alice, create a keyring file alice.keyring.

[client.alice]
        key = AQA8+ANW/4ZWNRAAOtWJMFPEihBA1unFImJczA==

Using the monitor IP addresses from the share’s export location, create a configuration file, ceph.conf:

[client]
        client quota = true
        mon host = 192.168.1.7:6789, 192.168.1.8:6789, 192.168.1.9:6789

Finally, mount the file system, substituting the file names of the keyring and configuration files you just created, and substituting the path to be mounted from the share’s export location:

sudo ceph-fuse ~/mnt \
--id=alice \
--conf=./ceph.conf \
--keyring=./alice.keyring \
--client-mountpoint=/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c
Known restrictions

Consider the driver as a building block for supporting multi-tenant workloads in the future. However, it can be used in private cloud deployments.

  • The guests have direct access to Ceph’s public network.
  • The snapshot support of the driver is disabled by default. cephfs_enable_snapshots configuration option needs to be set to True to allow snapshot operations.
  • Snapshots are read-only. A user can read a snapshot’s contents from the .snap/{manila-snapshot-id}_{unknown-id} folder within the mounted share.
  • To restrict share sizes, CephFS uses quotas that are enforced on the client side. The CephFS clients are relied on to respect quotas.
Security
  • Each share’s data is mapped to a distinct Ceph RADOS namespace. A guest is restricted to access only that particular RADOS namespace.

  • An additional level of resource isolation can be provided by mapping a share’s contents to a separate RADOS pool. This layout would be preferred only for cloud deployments with a limited number of shares needing strong resource separation. You can do this by setting a share type specification, cephfs:data_isolated for the share type used by the cephfs driver.

    manila type-key cephfstype set cephfs:data_isolated=True
    
  • Untrusted manila guests pose security risks to the Ceph storage cluster as they would have direct access to the cluster’s public network.

GlusterFS driver

The GlusterFS driver uses GlusterFS, an open source distributed file system, as the storage back end for serving file shares to Shared File Systems service clients.

Supported shared filesystems and operations

The driver supports NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported
    • Only read-write access is supported.
  • Deny share access.

Requirements
  • Install glusterfs-server package, version >= 3.5.x, on the storage back end.
  • Install NFS-Ganesha, version >=2.1, if using NFS-Ganesha as the NFS server for the GlusterFS back end.
  • Install glusterfs and glusterfs-fuse package, version >=3.5.x, on the Shared File Systems service host.
  • Establish network connection between the Shared File Systems service host and the storage back end.
Shared File Systems service driver configuration setting

The following parameters in the Shared File Systems service’s configuration file manila.conf need to be set:

share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver

If the back-end GlusterFS server runs on the Shared File Systems service host machine:

glusterfs_target = <glustervolserver>:/<glustervolid>

If the back-end GlusterFS server runs remotely:

glusterfs_target = <username>@<glustervolserver>:/<glustervolid>
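
Putting these settings together, a complete back-end section might look like the sketch below. The section name, server address, and volume name are hypothetical; adjust them to your deployment.

```ini
[glusterfs1]
# Hypothetical back end: a remote GlusterFS server managed over SSH.
share_backend_name = GLUSTERFS1
share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver
driver_handles_share_servers = False
glusterfs_target = root@10.0.0.5:/manila-gluster-vol
# Set to "Ganesha" if NFS-Ganesha mediates access to the volumes.
glusterfs_nfs_server_type = Gluster
```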
Known restrictions
  • The driver does not support the network segmented multi-tenancy model. Instead, it works over a flat network, where the tenants share a network.
  • If NFS Ganesha is the NFS server used by the GlusterFS back end, then the shares can be accessed by NFSv3 and v4 protocols. However, if Gluster NFS is used by the GlusterFS back end, then the shares can only be accessed by NFSv3 protocol.
  • All Shared File Systems service shares, which map to subdirectories within a GlusterFS volume, are currently created within a single GlusterFS volume of a GlusterFS storage pool.
  • The driver does not provide read-only access level for shares.
Driver options

The following table contains the configuration options specific to the share driver.

Description of GlusterFS Share Drivers configuration options
Configuration option = Default value Description
[DEFAULT]  
glusterfs_ganesha_server_ip = None (String) Remote Ganesha server node’s IP address.
glusterfs_ganesha_server_password = None (String) Remote Ganesha server node’s login password. This is not required if ‘glusterfs_path_to_private_key’ is configured.
glusterfs_ganesha_server_username = root (String) Remote Ganesha server node’s username.
glusterfs_mount_point_base = $state_path/mnt (String) Base directory containing mount points for Gluster volumes.
glusterfs_nfs_server_type = Gluster (String) Type of NFS server that mediate access to the Gluster volumes (Gluster or Ganesha).
glusterfs_path_to_private_key = None (String) Path of Manila host’s private SSH key file.
glusterfs_server_password = None (String) Remote GlusterFS server node’s login password. This is not required if ‘glusterfs_path_to_private_key’ is configured.
glusterfs_servers = (List) List of GlusterFS servers that can be used to create shares. Each GlusterFS server should be of the form [remoteuser@]<volserver>, and they are assumed to belong to distinct Gluster clusters.
glusterfs_share_layout = None (String) Specifies GlusterFS share layout, that is, the method of associating backing GlusterFS resources to shares.
glusterfs_target = None (String) Specifies the GlusterFS volume to be mounted on the Manila host. It is of the form [remoteuser@]<volserver>:<volid>.
glusterfs_volume_pattern = None (String) Regular expression template used to filter GlusterFS volumes for share creation. The regex template can optionally (ie. with support of the GlusterFS backend) contain the #{size} parameter which matches an integer (sequence of digits) in which case the value shall be interpreted as size of the volume in GB. Examples: “manila-share-volume-\d+$”, “manila-share-volume-#{size}G-\d+$”; with matching volume names, respectively: “manila-share-volume-12”, “manila-share-volume-3G-13”. In the latter example, the number that matches “#{size}”, that is, 3, is an indication that the size of the volume is 3G.
GlusterFS Native driver

The GlusterFS Native driver uses GlusterFS, an open source distributed file system, as the storage back end for serving file shares to Shared File Systems service clients.

A Shared File Systems service share is a GlusterFS volume. This driver uses the flat-network (share-server-less) model. Instances talk directly with the GlusterFS back-end storage pool. The instances use the glusterfs protocol to mount the GlusterFS shares. Access to each share is allowed via TLS certificates. Only an instance that has established TLS trust with the GlusterFS back end can mount, and hence use, the share. Currently only read-write (rw) access is supported.

Network approach

L3 connectivity between the storage back end and the host running the Shared File Systems share service should exist.

Multi-tenancy model

The driver does not support the network segmented multi-tenancy model. Instead, multi-tenancy is supported using tenant-specific TLS certificates.

Supported shared filesystems and operations

The driver supports GlusterFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only access by TLS Certificates (cert access type) is supported.
    • Only read-write access is supported.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

Requirements
  • Install glusterfs-server package, version >= 3.6.x, on the storage back end.
  • Install glusterfs and glusterfs-fuse package, version >= 3.6.x, on the Shared File Systems service host.
  • Establish network connection between the Shared File Systems service host and the storage back end.
Shared File Systems service driver configuration setting

The following parameters in the Shared File Systems service’s configuration file need to be set:

share_driver = manila.share.drivers.glusterfs_native.GlusterfsNativeShareDriver
glusterfs_servers = glustervolserver
glusterfs_volume_pattern = manila-share-volume-\d+$

The parameters are:

glusterfs_servers

List of GlusterFS servers which provide volumes that can be used to create shares. The servers are expected to be of distinct Gluster clusters, so they should not be Gluster peers. Each server should be of the form [<remoteuser>@]<glustervolserver>.

The optional <remoteuser>@ part of the server URI indicates SSH access for cluster management (see related optional parameters below). If it is not given, direct command line management is performed (the Shared File Systems service host is assumed to be part of the GlusterFS cluster the server belongs to).
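
The optional prefix can be split off mechanically; this hypothetical helper (not manila code) shows the expected shapes of glusterfs_servers entries.

```python
# Hypothetical helper: split "[<remoteuser>@]<glustervolserver>" into the
# optional SSH user and the server name.
def parse_gluster_server(server):
    remoteuser, _, volserver = server.rpartition("@")
    return (remoteuser or None, volserver)

print(parse_gluster_server("root@glustervolserver"))  # ('root', 'glustervolserver')
print(parse_gluster_server("glustervolserver"))       # (None, 'glustervolserver')
```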

glusterfs_volume_pattern
Regular expression template used to filter GlusterFS volumes for share creation. The regular expression template can contain the #{size} parameter which matches a number and the value will be interpreted as size of the volume in GB. Examples: manila-share-volume-\d+$, manila-share-volume-#{size}G-\d+$; with matching volume names, respectively: manila-share-volume-12, manila-share-volume-3G-13. In the latter example, the number that matches #{size}, which is 3, is an indication that the size of volume is 3 GB. On share creation, the Shared File Systems service picks volumes at least as large as the requested one.
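
The matching behavior described above can be sketched in a few lines; this mirrors the documented semantics of #{size} but is not manila’s implementation.

```python
import re

# Turn the documented template into a regex: #{size} matches a number
# that is interpreted as the volume size in GB.
template = r"manila-share-volume-#{size}G-\d+$"
pattern = re.compile(template.replace("#{size}", r"(\d+)"))

match = pattern.match("manila-share-volume-3G-13")
print(int(match.group(1)))                      # 3 -> a 3 GB volume
print(pattern.match("manila-share-volume-12"))  # None: name does not conform
```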

When setting up GlusterFS shares, note the following:

  • GlusterFS volumes are not created on demand. A pre-existing set of GlusterFS volumes should be supplied by the GlusterFS cluster(s), conforming to the naming convention encoded by glusterfs_volume_pattern. However, the GlusterFS endpoint is allowed to extend this set any time, so the Shared File Systems service and GlusterFS endpoints are expected to communicate volume supply and demand out-of-band.
  • Certificate setup, also known as trust setup, between instance and storage back end is out of band of the Shared File Systems service.
  • For the Shared File Systems service to use GlusterFS volumes, the name of the trashcan directory in GlusterFS volumes must not be changed from the default.
Driver options

Configuration options specific to this driver are documented here in Description of GlusterFS Share Drivers configuration options.

HDFS native driver

The HDFS native driver is a plug-in for the Shared File Systems service. It uses the Hadoop distributed file system (HDFS), a distributed file system designed to hold very large amounts of data and to provide high-throughput access to that data.

A Shared File Systems service share in this driver is a subdirectory in the hdfs root directory. Instances talk directly to the HDFS storage back end using the hdfs protocol. Access to each share is allowed by the user access type, which is aligned with HDFS ACLs to support access control for multiple users and groups.

Network configuration

The storage back end and Shared File Systems service hosts should be in a flat network, otherwise L3 connectivity between them should exist.

Supported shared filesystems and operations

The driver supports HDFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only user access type is supported.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

Requirements
  • Install HDFS package, version >= 2.4.x, on the storage back end.
  • To enable access control, the HDFS file system must have ACLs enabled.
  • Establish network connection between the Shared File Systems service host and storage back end.
Shared File Systems service driver configuration

To enable the driver, set the share_driver option in file manila.conf and add other options as appropriate.

share_driver = manila.share.drivers.hdfs.hdfs_native.HDFSNativeShareDriver
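
A complete back-end section combining this driver with the options listed under Driver options might look like the following sketch; the section name, IP address, user, and key path are hypothetical.

```ini
[hdfs1]
# Hypothetical HDFS back end.
share_backend_name = HDFS1
share_driver = manila.share.drivers.hdfs.hdfs_native.HDFSNativeShareDriver
driver_handles_share_servers = False
hdfs_namenode_ip = 10.0.0.10
hdfs_namenode_port = 9000
hdfs_ssh_name = hdfs
hdfs_ssh_private_key = /etc/manila/hdfs_key
```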
Known restrictions
  • This driver does not support the network segmented multi-tenancy model. Instead, multi-tenancy is supported by tenant-specific user authentication.
  • Only a single HDFS namenode is supported as of the Kilo release.
Driver options

The following table contains the configuration options specific to the share driver.

Description of HDFS Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hdfs_namenode_ip = None (String) The IP of the HDFS namenode.
hdfs_namenode_port = 9000 (Port number) The port of HDFS namenode service.
hdfs_ssh_name = None (String) HDFS namenode ssh login name.
hdfs_ssh_port = 22 (Port number) HDFS namenode SSH port.
hdfs_ssh_private_key = None (String) Path to HDFS namenode SSH private key for login.
hdfs_ssh_pw = None (String) HDFS namenode SSH login password, This parameter is not necessary, if ‘hdfs_ssh_private_key’ is configured.
LVM share driver

The Shared File Systems service can be configured to use the LVM share driver. The LVM share driver relies solely on LVM running on the same host as the manila-share service. It does not require any services unrelated to the Shared File Systems service in order to work.

Prerequisites

The following packages must be installed on the same host as the manila-share service:

  • NFS server
  • Samba server >= 3.2.0
  • LVM2 >= 2.02.66

The services must be up and running, and the ports they use must not be blocked. A node with the manila-share service should be accessible to share service users.

LVM should be preconfigured. By default, the LVM driver expects to find a volume group named lvm-shares. This volume group is used by the driver for share provisioning and should be managed separately by the node administrator.
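As a sketch, assuming a spare block device /dev/sdb is available for share provisioning (the device name is illustrative), the expected volume group could be created as follows:

```shell
# Mark the device as an LVM physical volume
pvcreate /dev/sdb
# Create the volume group the LVM driver expects by default
vgcreate lvm-shares /dev/sdb
```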

Shared File Systems service driver configuration setting

To use the driver, set up a corresponding back end. The driver must be explicitly specified, as well as the export IP address. A minimal back-end specification that enables the LVM share driver is shown below:

[LVM_sample_backend]
driver_handles_share_servers = False
share_driver = manila.share.drivers.lvm.LVMShareDriver
lvm_share_export_ip = 1.2.3.4

In the example above, lvm_share_export_ip is the address used by clients to access shares. In the simplest case, it is the same as the host's address.

Supported shared file systems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported for NFS.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

  • Extend a share.

Known restrictions
  • The LVM driver should not be used on a host running Neutron agents; simultaneous usage might cause issues with share deletion (shares will not be deleted from volume groups).
Driver options

The following table contains the configuration options specific to this driver.

Description of LVM Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
lvm_share_export_ip = None (String) IP to be added to export string.
lvm_share_export_root = $state_path/mnt (String) Base folder where exported shares are located.
lvm_share_helpers = CIFS=manila.share.drivers.helpers.CIFSHelperUserAccess, NFS=manila.share.drivers.helpers.NFSHelper (List) Specify list of share export helpers.
lvm_share_mirrors = 0 (Integer) If set, create LVMs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space.
lvm_share_volume_group = lvm-shares (String) Name for the VG that will contain exported shares.
ZFS (on Linux) driver

The Manila ZFSonLinux share driver uses the ZFS file system for exporting NFS shares. It was written and tested using the Linux version of ZFS.

Requirements
  • An NFS daemon that can be managed through the exportfs utility.
  • ZFS file system packages, either kernel or FUSE versions.
  • ZFS zpools that are going to be used by Manila should exist and be configured as desired. Manila will not change the zpool configuration.
  • SSH must be installed on ZFS hosts that are remote from the manila-share service host.
  • For ZFS hosts that support replication:
    • Passwordless SSH access between the hosts is required.
    • The service IP addresses of the ZFS hosts must be reachable from each other.
Supported shared filesystems and operations

The driver supports NFS shares.

The following operations are supported:

  • Create a share.
  • Delete a share.
  • Allow share access.
    • Only IP access type is supported.
    • Both access levels are supported: RW and RO.
  • Deny share access.
  • Create a snapshot.
  • Delete a snapshot.
  • Create a share from snapshot.
  • Extend a share.
  • Shrink a share.
  • Share replication (experimental):
    • Create, update, delete, and promote replica operations are supported.
Possibilities
  • Any number of ZFS zpools can be used by the share driver.
  • Default options for the ZFS datasets used for share creation can be configured.
  • Any number of nested datasets can be used.
  • All share replicas are read-only; only the active one is read-write.
  • All share replicas are synchronized periodically, not continuously. The status in_sync means the latest sync was successful. The interval between syncs equals the value of the replica_state_update_interval global configuration option.
  • The driver can use the qualified extra spec zfsonlinux:compression, which can contain any compression value that ZFS supports. However, if compression is disabled through the configuration option (compression=off), the extra spec is not used.
Restrictions

The ZFSonLinux share driver has the following restrictions:

  • Only IP access type is supported for NFS.
  • Only FLAT network is supported.
  • The promote share replica operation switches the roles of the current secondary replica and the active one. It does not make more than one active replica available.
  • The following items are not yet implemented:
    • Manage share operation.
    • Manage snapshot operation.
    • SaMBa based sharing.
    • Thick provisioning capability.
Known problems
  • The promote share replica operation makes the ZFS file system that became secondary read-only at the NFS level only. At the ZFS level, the file system stays mounted read-write as before.
Back-end configuration

The following parameters need to be configured in the manila configuration file for back-ends that use the ZFSonLinux driver:

  • share_driver = manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver
  • driver_handles_share_servers = False
  • replication_domain = custom_str_value_as_domain_name
    • If empty, replication is disabled.
    • If set, the back end can be used as a replication peer by other back ends with the same value.
  • zfs_share_export_ip = <user_facing IP address of ZFS host>
  • zfs_service_ip = <IP address of service network interface of ZFS host>
  • zfs_zpool_list = zpoolname1,zpoolname2/nested_dataset_for_zpool2
    • Can be one or more zpools.
    • Can contain nested datasets.
  • zfs_dataset_creation_options = <list of ZFS dataset options>
    • readonly, quota, sharenfs and sharesmb options will be ignored.
  • zfs_dataset_name_prefix = <prefix>
    • Prefix to be used in each dataset name.
  • zfs_dataset_snapshot_name_prefix = <prefix>
    • Prefix to be used in each dataset snapshot name.
  • zfs_use_ssh = <boolean_value>
    • Set to False if ZFS is located on the same host as the manila-share service.
    • Set to True if the manila-share service should use SSH for ZFS configuration.
  • zfs_ssh_username = <ssh_username>
    • Required for replication operations.
    • Required for SSH'ing to the ZFS host if zfs_use_ssh is set to True.
  • zfs_ssh_user_password = <ssh_user_password>
    • Password for zfs_ssh_username of ZFS host.
    • Used only if zfs_use_ssh is set to True.
  • zfs_ssh_private_key_path = <path_to_private_ssh_key>
    • Used only if zfs_use_ssh is set to True.
  • zfs_share_helpers = NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper
    • Approach for setting up helpers is similar to various other share drivers.
    • At least one helper should be used.
  • zfs_replica_snapshot_prefix = <prefix>
    • Prefix to be used in dataset snapshot names that are created by update replica operation.
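Assembled into a back-end section, a minimal ZFSonLinux configuration might look like the following sketch; the back-end name, IP addresses, and zpool name are illustrative values:

```ini
[zfsonlinux_backend]
share_backend_name = ZFSONLINUX_BACKEND
share_driver = manila.share.drivers.zfsonlinux.driver.ZFSonLinuxShareDriver
driver_handles_share_servers = False
# User-facing and admin-facing addresses of the ZFS host (illustrative)
zfs_share_export_ip = 10.0.0.10
zfs_service_ip = 192.168.1.10
# One zpool; a nested dataset such as zpool_name/dataset is also allowed
zfs_zpool_list = manila_zpool
# ZFS is local to the manila-share host in this sketch, so SSH is not used
zfs_use_ssh = False
```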
Driver options
Description of ZFS Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
zfs_dataset_creation_options = None (List) List of options applied to each dataset on creation, if needed. Example: compression=gzip,dedup=off. Note that the readonly option is always set to on for secondary replicas and off for active replicas, and quota is always set equal to the share size. Optional.
zfs_dataset_name_prefix = manila_share_ (String) Prefix to be used in each dataset name. Optional.
zfs_dataset_snapshot_name_prefix = manila_share_snapshot_ (String) Prefix to be used in each dataset snapshot name. Optional.
zfs_migration_snapshot_prefix = tmp_snapshot_for_share_migration_ (String) Set snapshot prefix for usage in ZFS migration. Required.
zfs_replica_snapshot_prefix = tmp_snapshot_for_replication_ (String) Set snapshot prefix for usage in ZFS replication. Required.
zfs_service_ip = None (String) IP to be added to admin-facing export location. Required.
zfs_share_export_ip = None (String) IP to be added to user-facing export location. Required.
zfs_share_helpers = NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper (List) Specify list of share export helpers for ZFS storage. It should look like following: ‘FOO_protocol=foo.FooClass,BAR_protocol=bar.BarClass’. Required.
zfs_ssh_private_key_path = None (String) Path to SSH private key that should be used for SSH’ing ZFS storage host. Not used for replication operations. Optional.
zfs_ssh_user_password = None (String) Password for user that is used for SSH’ing ZFS storage host. Not used for replication operations. They require passwordless SSH access. Optional.
zfs_ssh_username = None (String) SSH user that will be used in 2 cases: 1) By manila-share service in case it is located on different host than its ZFS storage. 2) By manila-share services with other ZFS backends that perform replication. It is expected that SSH’ing will be key-based, passwordless. This user should be passwordless sudoer. Optional.
zfs_use_ssh = False (Boolean) Whether to use SSH for managing a remote ZFS storage host. Optional.
zfs_zpool_list = None (List) Specify list of zpools that are allowed to be used by backend. Can contain nested datasets. Examples: Without nested dataset: ‘zpool_name’. With nested dataset: ‘zpool_name/nested_dataset_name’. Required.
EMC Isilon driver

The EMC Shared File Systems driver framework (EMCShareDriver) utilizes EMC storage products to provide shared file systems to OpenStack. The EMC driver is a plug-in based driver which is designed to use different plug-ins to manage different EMC storage products.

The Isilon driver is a plug-in for the EMC framework which allows the Shared File Systems service to interface with an Isilon back end to provide a shared filesystem. The EMC driver framework with the Isilon plug-in is referred to as the Isilon Driver in this document.

This Isilon Driver interfaces with an Isilon cluster via the REST Isilon Platform API (PAPI) and the RESTful Access to Namespace API (RAN).

Requirements
  • Isilon cluster running OneFS 7.2 or higher
Supported shared filesystems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported.
    • Only read-write access is supported.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

Back end configuration

The following parameters need to be configured in the Shared File Systems service configuration file for the Isilon driver:

share_driver = manila.share.drivers.emc.driver.EMCShareDriver
emc_share_backend = isilon
emc_nas_server = <IP address of Isilon cluster>
emc_nas_login = <username>
emc_nas_password = <password>
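For context, a complete back-end section using these options might look like the following sketch. The back-end name and credential values are illustrative, and driver_handles_share_servers = False is assumed, consistent with the flat-network model noted under Restrictions:

```ini
[isilon_backend]
share_backend_name = ISILON_BACKEND
share_driver = manila.share.drivers.emc.driver.EMCShareDriver
emc_share_backend = isilon
driver_handles_share_servers = False
# Management address and credentials for the Isilon cluster (illustrative)
emc_nas_server = 192.168.1.20
emc_nas_login = admin
emc_nas_password = password
```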
Restrictions

The Isilon driver has the following restrictions:

  • Only IP access type is supported for NFS and CIFS.
  • Only FLAT network is supported.
  • Quotas are not yet supported.
Driver options

The following table contains the configuration options specific to the share driver.

Description of EMC Share Drivers configuration options
Configuration option = Default value Description
[DEFAULT]  
emc_interface_ports = None (List) Comma separated list specifying the ports that can be used for share server interfaces. Members of the list can be Unix-style glob expressions.
emc_nas_login = None (String) User name for the EMC server.
emc_nas_password = None (String) Password for the EMC server.
emc_nas_pool_names = None (List) EMC pool names.
emc_nas_root_dir = None (String) The root directory where shares will be located.
emc_nas_server = None (String) EMC server hostname or IP address.
emc_nas_server_container = None (String) Container of share servers.
emc_nas_server_pool = None (String) Pool to persist the meta-data of NAS server.
emc_nas_server_port = 8080 (Port number) Port number for the EMC server.
emc_nas_server_secure = True (Boolean) Use secure connection to server.
emc_share_backend = None (String) Share backend.
EMC VNX driver

The EMC Shared File Systems service driver framework (EMCShareDriver) utilizes EMC storage products to provide shared file systems to OpenStack. The EMC driver is a plug-in based driver which is designed to use different plug-ins to manage different EMC storage products.

The VNX plug-in is the plug-in which manages the VNX to provide shared filesystems. The EMC driver framework with the VNX plug-in is referred to as the VNX driver in this document.

This driver performs the operations on VNX by XMLAPI and the file command line. Each back end manages one Data Mover of VNX. Multiple Shared File Systems service back ends need to be configured to manage multiple Data Movers.

Requirements
  • VNX OE for File version 7.1 or higher
  • VNX Unified, File only, or Gateway system with a single storage back end
  • The following licenses should be activated on VNX for File:
    • CIFS
    • NFS
    • SnapSure (for snapshot)
    • ReplicationV2 (for create share from snapshot)
Supported shared filesystems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported for NFS.
    • Only user access type is supported for CIFS.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

While the generic driver creates shared filesystems based on cinder volumes attached to nova VMs, the VNX driver performs similar operations using the Data Movers on the array.

Pre-configurations on VNX
  1. Enable unicode on Data Mover.

    The VNX driver requires that Unicode is enabled on the Data Mover.

    Warning

    After enabling Unicode, you cannot disable it. If there are some filesystems created before Unicode is enabled on the VNX, consult the storage administrator before enabling Unicode.

    To check the Unicode status on Data Mover, use the following VNX File command on the VNX control station:

    server_cifs <mover_name> | head
    # mover_name = <name of the Data Mover>
    

    Check the value of I18N mode field. UNICODE mode is shown as I18N mode = UNICODE.

    To enable the Unicode for Data Mover:

    uc_config -on -mover <mover_name>
    # mover_name = <name of the Data Mover>
    

    Refer to the document Using International Character Sets on VNX for File on EMC support site for more information.

  2. Enable CIFS service on Data Mover.

    Ensure the CIFS service is enabled on the Data Mover which is going to be managed by VNX driver.

    To start the CIFS service, use the following command:

    server_setup <mover_name> -Protocol cifs -option start [=<n>]
    # mover_name = <name of the Data Mover>
    # n = <number of threads for CIFS users>
    

    Note

    If there is 1 GB of memory on the Data Mover, the default is 96 threads; however, if there is over 1 GB of memory, the default number of threads is 256.

    To check the CIFS service status, use this command:

    server_cifs <mover_name> | head
    # mover_name = <name of the Data Mover>
    

    The command output will show the number of CIFS threads started.

  3. NTP settings on Data Mover.

    The VNX driver only supports CIFS share creation with a share network that has an Active Directory security service associated.

    Creating a CIFS share requires that the time on the Data Mover is in sync with the Active Directory domain so that the CIFS server can join the domain. Otherwise, the domain join fails when creating a share with this security service. Note the limitation that the time on all domains used by security services, even across different tenants and different share networks, must be in sync; the time difference must be less than 10 minutes.

    It is recommended to set the NTP server to the same public NTP server on both the Data Mover and domains used in security services to ensure the time is in sync everywhere.

    Check the date and time on Data Mover:

    server_date <mover_name>
    # mover_name = <name of the Data Mover>
    

    Set the NTP server for Data Mover:

    server_date <mover_name> timesvc start ntp <host> [<host> ...]
    # mover_name = <name of the Data Mover>
    # host = <IP address of the time server host>
    

    Note

    The host must be running the NTP protocol. Only 4 host entries are allowed.

  4. Configure User Mapping on the Data Mover.

    Before creating a CIFS share using the VNX driver, you must select a method of mapping Windows SIDs to UIDs and GIDs. EMC recommends using usermapper in a single-protocol (CIFS) environment; it is enabled on VNX by default.

    To check usermapper status, use this command syntax:

    server_usermapper <movername>
    # movername = <name of the Data Mover>
    

    If usermapper is not started, the following command can be used to start the usermapper:

    server_usermapper <movername> -enable
    # movername = <name of the Data Mover>
    

    For a multiple protocol environment, refer to Configuring VNX User Mapping on EMC support site for additional information.

  5. Network Connection.

    Find the network devices (physical ports on the NIC) of the Data Mover that have access to the share network.

    Go to Unisphere to check the device list: Settings > Network > Settings for File (Unified system only) > Device.

Back-end configurations

The following parameters need to be configured in the /etc/manila/manila.conf file for the VNX driver:

emc_share_backend = vnx
emc_nas_server = <IP address>
emc_nas_password = <password>
emc_nas_login = <user>
emc_nas_server_container = <Data Mover name>
emc_nas_pool_names = <Comma separated pool names>
share_driver = manila.share.drivers.emc.driver.EMCShareDriver
emc_interface_ports = <Comma separated ports list>
  • emc_share_backend

    The plug-in name. Set it to vnx for the VNX driver.

  • emc_nas_server

    The control station IP address of the VNX system to be managed.

  • emc_nas_password and emc_nas_login

    The fields that are used to provide credentials to the VNX system. Only local users of VNX File are supported.

  • emc_nas_server_container

    Name of the Data Mover to serve the share service.

  • emc_nas_pool_names

    Comma separated list specifying the name of the pools to be used by this back end. Do not set this option if all storage pools on the system can be used. Wild card character is supported.

    Examples: pool_1, pool_*, *

  • emc_interface_ports

    Comma separated list specifying the ports (devices) of Data Mover that can be used for share server interface. Do not set this option if all ports on the Data Mover can be used. Wild card character is supported.

    Examples: spa_eth1, spa_*, *
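As a sketch, the options above can be assembled into a named back-end section and enabled in the DEFAULT section. The back-end name and values below are illustrative, and driver_handles_share_servers = True is assumed, since the driver creates VDMs to serve as share servers:

```ini
[DEFAULT]
enabled_share_backends = vnx_backend

[vnx_backend]
share_backend_name = VNX_BACKEND
share_driver = manila.share.drivers.emc.driver.EMCShareDriver
emc_share_backend = vnx
driver_handles_share_servers = True
# Control station address and credentials (illustrative)
emc_nas_server = 192.168.1.30
emc_nas_login = nasadmin
emc_nas_password = password
emc_nas_server_container = server_2
emc_nas_pool_names = Pool_1
```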

The manila-share service must be restarted for the configuration changes to take effect.

Restrictions

The VNX driver has the following restrictions:

  • Only IP access type is supported for NFS.
  • Only user access type is supported for CIFS.
  • Only FLAT network and VLAN network are supported.
  • VLAN network is supported with limitations. The neutron subnets in different VLANs that are used to create share networks cannot have overlapping address spaces. Otherwise, VNX may have problems communicating with the hosts in the VLANs. To create shares for different VLANs with the same subnet address, use different Data Movers.
  • The Active Directory security service is the only supported security service type and it is required to create CIFS shares.
  • Only one security service can be configured for each share network.
  • Active Directory domain name of the ‘active_directory’ security service should be unique even for different tenants.
  • The time on Data Mover and the Active Directory domains used in security services should be in sync (time difference should be less than 10 minutes). It is recommended to use same NTP server on both the Data Mover and Active Directory domains.
  • On VNX, snapshots are stored in SavVols. The VNX system allows SavVol space to be created and extended until the total space consumed by all SavVols on the system exceeds the default threshold of 20% of the total space available on the system. If the 20% threshold is reached, an alert is generated on VNX. Continuing to create snapshots causes older snapshots to be inactivated (and their snapshot data abandoned). The threshold percentage can be changed manually by the storage administrator based on storage needs. Administrators are advised to configure notifications on SavVol usage. Refer to the Using VNX SnapSure document on the EMC support site for more information.
  • VNX has limitations on the overall numbers of Virtual Data Movers, filesystems, shares, checkpoints, and so on. A Virtual Data Mover (VDM) is created by the VNX driver on the VNX to serve as the Shared File Systems service share server. Similarly, a filesystem is created, mounted, and exported from the VDM over the CIFS or NFS protocol to serve as the Shared File Systems service share. The VNX checkpoint serves as the Shared File Systems service share snapshot. Refer to the NAS Support Matrix document on the EMC support site for the limitations, and configure the quotas accordingly.
Driver options

Configuration options specific to this driver are documented in Description of EMC Share Drivers configuration options.

EMC Unity driver

The EMC Shared File Systems service driver framework (EMCShareDriver) utilizes EMC storage products to provide shared file systems to OpenStack. The EMC driver is a plug-in based driver which is designed to use different plug-ins to manage different EMC storage products.

The Unity plug-in manages the Unity system to provide shared filesystems. The EMC driver framework with the Unity plug-in is referred to as the Unity driver in this document.

This driver performs the operations on Unity through RESTful APIs. Each back end manages one Storage Processor of Unity. Configure multiple Shared File Systems service back ends to manage multiple Unity systems.

Requirements
  • Unity OE 4.0.1 or higher.
  • StorOps 0.2.17 or higher installed on the Manila node.
  • The following licenses activated on Unity:
    • CIFS/SMB Support
    • Network File System (NFS)
    • Thin Provisioning
Supported shared filesystems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.
  • Delete a share.
  • Allow share access.
  • Deny share access.
  • Create a snapshot.
  • Delete a snapshot.
  • Create a share from a snapshot.
  • Extend a share.
Supported network types
  • Flat
  • VLAN
Pre-configurations
On manila node

The Python library storops is required to run the Unity driver. Install it with the pip command. You may need root privileges to install Python libraries.

pip install storops
On Unity system
  1. Configure system level NTP server.

    Open Unisphere of your Unity system and navigate to:

    Unisphere -> Settings -> Management -> System Time and NTP
    

    Select Enable NTP synchronization and add your NTP server(s).

    The time on the Unity system and the Active Directory domains used in security services should be in sync. We recommend using the same NTP server on both the Unity system and Active Directory domains.

  2. Configure system level DNS server.

    Open Unisphere of your Unity system and navigate to:

    Unisphere -> Settings -> Management -> DNS Server
    

    Select Configure DNS server address manually and add your DNS server(s).

Back end configurations

The following options must be configured in /etc/manila/manila.conf for the Unity driver:

share_driver = manila.share.drivers.emc.driver.EMCShareDriver
emc_share_backend = unity
emc_nas_server = <management IP address of the Unity system>
emc_nas_login = <user with administrator privilege>
emc_nas_password = <password>
emc_nas_server_container = [SPA|SPB]
emc_nas_server_pool = <pool name>
emc_nas_pool_names = <comma separated pool names>
emc_interface_ports = <comma separated ports list>
driver_handles_share_servers = True
  • emc_share_backend

    The plug-in name. Set it to unity for the Unity driver.

  • emc_nas_server

    The management IP for Unity.

  • emc_nas_server_container

    The SP to be used as NAS server container.

  • emc_nas_server_pool

    The name of the pool to persist the meta-data of NAS server.

  • emc_nas_pool_names

    Comma separated list specifying the name of the pools to be used by this back end. Do not set this option if all storage pools on the system can be used. Wild card character is supported.

    Examples: pool_1, pool_*, *

  • emc_interface_ports

    Comma separated list specifying the ethernet ports of Unity system that can be used for share. Do not set this option if all ethernet ports can be used. Wild card character is supported.

    Examples: spa_eth1, spa_*, *

  • driver_handles_share_servers

    The Unity driver requires this option to be set to True.

The manila-share service must be restarted for the configuration changes to take effect.

Restrictions

The Unity driver has the following restrictions:

  • EMC Unity does not support the same IP in different VLANs.
  • Only Active Directory security service is supported and it is required to create CIFS shares.
Driver options

Configuration options specific to this driver are documented in Description of EMC Share Drivers configuration options.

Hitachi NAS (HNAS) driver

The HNAS driver provides NFS Shared File Systems to OpenStack.

Requirements
  • Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080, and 4100.
  • HNAS/SMU software version is 12.2 or higher.
  • HNAS configuration and management utilities to create a storage pool (span) and an EVS.
    • GUI (SMU).
    • SSC CLI.
Supported shared filesystems and operations

The driver supports NFS shares.

The following operations are supported:

  • Create a share.
  • Delete a share.
  • Allow share access.
  • Deny share access.
  • Create a snapshot.
  • Delete a snapshot.
  • Create a share from a snapshot.
  • Extend a share.
  • Manage a share.
  • Unmanage a share.
  • Shrink a share.
Driver options

This table contains the configuration options specific to the share driver.

Description of HDS NAS Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hitachi_hnas_allow_cifs_snapshot_while_mounted = False (Boolean) By default, CIFS snapshots are not allowed to be taken when the share has clients connected, because a consistent point-in-time replica cannot be guaranteed for all files. Enabling this might cause inconsistent snapshots on CIFS shares.
hitachi_hnas_cluster_admin_ip0 = None (String) The IP of the cluster's admin node. Only set in HNAS multinode clusters.
hitachi_hnas_driver_helper = manila.share.drivers.hitachi.hnas.ssh.HNASSSHBackend (String) Python class to be used for driver helper.
hitachi_hnas_evs_id = None (Integer) Specify which EVS this backend is assigned to.
hitachi_hnas_evs_ip = None (String) Specify IP for mounting shares.
hitachi_hnas_file_system_name = None (String) Specify file-system name for creating shares.
hitachi_hnas_ip = None (String) HNAS management interface IP for communication between Manila controller and HNAS.
hitachi_hnas_password = None (String) HNAS user password. Required only if private key is not provided.
hitachi_hnas_ssh_private_key = None (String) RSA/DSA private key value used to connect into HNAS. Required only if password is not provided.
hitachi_hnas_stalled_job_timeout = 30 (Integer) The time (in seconds) to wait for stalled HNAS jobs before aborting.
hitachi_hnas_user = None (String) HNAS username (Base64-encoded string) used to perform tasks such as creating file systems and network interfaces.
[hnas1]  
share_backend_name = None (String) The backend name for a given driver implementation.
share_driver = manila.share.drivers.generic.GenericShareDriver (String) Driver to use for share creation.
Pre-configuration on OpenStack deployment
  1. Install the OpenStack environment with manila. See the OpenStack installation guide.

  2. Configure the OpenStack networking so it can reach the HNAS management interface and the HNAS EVS data interface.

    Note

    In the driver mode used by the HNAS driver (DHSS = False), the driver does not handle network configuration; it is up to the administrator to configure it.

    • Configure the network of the manila-share node to reach the HNAS management interface through the admin network.

      Note

      The manila-share node only requires the HNAS EVS data interface if you plan to use share migration.

    • Configure the network of the Compute and Networking nodes to reach the HNAS EVS data interface through the data network.

    • Example of networking architecture:

      Example networking scenario
    • Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and update the following settings in their respective sections. If you use Linux bridge, update the bridge mappings in the [linuxbridge] section:

    Important

    It is mandatory that the HNAS management interface is reachable from the Shared File Systems service node through the admin network, while the selected EVS data interface is reachable from the OpenStack cloud, such as through Neutron flat networking.

    [ml2]
    type_drivers = flat,vlan,vxlan,gre
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    flat_networks = physnet1,physnet2
    [ml2_type_vlan]
    network_vlan_ranges = physnet1:1000:1500,physnet2:2000:2500
    [ovs]
    bridge_mappings = physnet1:br-ex,physnet2:br-eth1
    

    You may have to repeat the last line above in another file on the Compute node; if it exists, it is located at /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

    • If you use Open vSwitch for the neutron agent, run the following on the network node:

      # ifconfig eth1 0
      # ovs-vsctl add-br br-eth1
      # ovs-vsctl add-port br-eth1 eth1
      # ifconfig eth1 up
      
    • Restart all neutron processes.

  3. Create the data HNAS network in OpenStack:

    • List the available tenants:

      $ openstack project list
      
    • Create a network for the given tenant (demo), providing the tenant ID, a name for the network, the name of the physical network over which the virtual network is implemented, and the type of the physical mechanism by which the virtual network is implemented:

      $ neutron net-create --tenant-id <DEMO_ID> hnas_network \
      --provider:physical_network=physnet2 --provider:network_type=flat
      
    • Optional - List available networks:

      $ neutron net-list
      
    • Create a subnet for the same tenant (demo), providing the gateway IP of the subnet, a name for the subnet, the network ID created before, and the CIDR of the subnet:

      $ neutron subnet-create --tenant-id <DEMO_ID> --gateway <GATEWAY> \
      --name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
      
    • Optional - List available subnets:

      $ neutron subnet-list
      
    • Add the subnet interface to a router, providing the router ID and subnet ID created before:

      $ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
      
Pre-configuration on HNAS
  1. Create a file system on HNAS. See the Hitachi HNAS reference.

    Important

    Make sure that the filesystem is not created as a replication target. Refer to the official HNAS administration guide.

  2. Prepare the HNAS EVS network.

    • Create a route in HNAS to the tenant network:

      $ console-context --evs <EVS_ID_IN_USE> route-net-add --gateway <FLAT_NETWORK_GATEWAY> \
      <TENANT_PRIVATE_NETWORK>
      

      Important

      Make sure multi-tenancy is enabled and routes are configured per EVS.

      $ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
      10.0.0.0/24
      
Back end configuration
  1. Configure HNAS driver.

    • Configure HNAS driver according to your environment. This example shows a minimal HNAS driver configuration:

      [DEFAULT]
      enabled_share_backends = hnas1
      enabled_share_protocols = NFS
      [hnas1]
      share_backend_name = HNAS1
      share_driver = manila.share.drivers.hitachi.hds_hnas.HDSHNASDriver
      driver_handles_share_servers = False
      hds_hnas_ip = 172.24.44.15
      hds_hnas_user = supervisor
      hds_hnas_password = supervisor
      hds_hnas_evs_id = 1
      hds_hnas_evs_ip = 10.0.1.20
      hds_hnas_file_system_name = FS-Manila
      
  2. Optional: HNAS multi-back-end configuration.

    • Update the enabled_share_backends flag with the names of the back ends separated by commas.

    • Add a section for every back end according to the example below:

      [DEFAULT]
      enabled_share_backends = hnas1,hnas2
      enabled_share_protocols = NFS
      [hnas1]
      share_backend_name = HNAS1
      share_driver = manila.share.drivers.hitachi.hds_hnas.HDSHNASDriver
      driver_handles_share_servers = False
      hds_hnas_ip = 172.24.44.15
      hds_hnas_user = supervisor
      hds_hnas_password = supervisor
      hds_hnas_evs_id = 1
      hds_hnas_evs_ip = 10.0.1.20
      hds_hnas_file_system_name = FS-Manila1
      [hnas2]
      share_backend_name = HNAS2
      share_driver = manila.share.drivers.hitachi.hds_hnas.HDSHNASDriver
      driver_handles_share_servers = False
      hds_hnas_ip = 172.24.44.15
      hds_hnas_user = supervisor
      hds_hnas_password = supervisor
      hds_hnas_evs_id = 1
      hds_hnas_evs_ip = 10.0.1.20
      hds_hnas_file_system_name = FS-Manila2
      
  3. Disable DHSS for HNAS share type configuration:

    Note

    Shared File Systems requires that the share type includes the driver_handles_share_servers extra-spec. This ensures that the share will be created on a back end that supports the requested driver_handles_share_servers capability.

    $ manila type-create hitachi False
    
  4. Optional (multiple back ends): Create an extra-spec to specify which HNAS back end a share will be created on:

    • Create additional share types.

      $ manila type-create hitachi2 False
      
    • Add an extra-spec to each share type in order to match a specific back end. This makes it possible to specify which back end the Shared File Systems service will use when creating a share.

      $ manila type-key hitachi set share_backend_name=hnas1
      $ manila type-key hitachi2 set share_backend_name=hnas2
      
  5. Restart all Shared File Systems services (manila-share, manila-scheduler and manila-api).
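
Before restarting the services, the multi-back-end layout above can be sanity-checked: every name listed in enabled_share_backends must have a matching section that defines share_driver. A minimal sketch (the inline sample mirrors the example configuration above):

```python
import configparser

# Sample mirroring the multi-back-end example above.
SAMPLE = """
[DEFAULT]
enabled_share_backends = hnas1,hnas2
enabled_share_protocols = NFS

[hnas1]
share_backend_name = HNAS1
share_driver = manila.share.drivers.hitachi.hds_hnas.HDSHNASDriver
driver_handles_share_servers = False

[hnas2]
share_backend_name = HNAS2
share_driver = manila.share.drivers.hitachi.hds_hnas.HDSHNASDriver
driver_handles_share_servers = False
"""

def check_backends(text):
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    enabled = [b.strip() for b in
               cfg["DEFAULT"]["enabled_share_backends"].split(",")]
    # Every enabled back end must have its own section with a share_driver.
    missing = [b for b in enabled
               if not cfg.has_section(b) or not cfg.has_option(b, "share_driver")]
    return enabled, missing

enabled, missing = check_backends(SAMPLE)
print(enabled, missing)  # -> ['hnas1', 'hnas2'] []
```

The same check applies to any of the multi-back-end examples in this chapter.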

Manage and unmanage shares

Shared File Systems has the ability to manage and unmanage shares. If there is a share in the storage and it is not in OpenStack, you can manage that share and use it as a Shared File Systems share. HNAS drivers use virtual-volumes (V-VOL) to create shares. Only V-VOL shares can be used by the driver. If the NFS export is an ordinary FS export, it is not possible to use it in Shared File Systems. The unmanage operation only unlinks the share from Shared File Systems. All data is preserved.

Additional notes
  • HNAS has some restrictions about the number of EVSs, filesystems, virtual-volumes, and simultaneous SSC connections. Check the manual specification for your system.
  • Shares and snapshots are thin provisioned. Only the space actually used in HNAS is reported to the Shared File Systems service. A snapshot initially takes no space in HNAS; it stores only the difference between the share and the snapshot, so it grows as the share data changes.
  • Administrators should manage the tenant’s quota (manila quota-update) to control the back-end usage.
HPE 3PAR driver

The HPE 3PAR driver provides NFS and CIFS shared file systems to OpenStack using HPE 3PAR’s File Persona capabilities.

Supported shared filesystems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations for NFS shares:

    • IP access rules are required for NFS.
    • Shares created from snapshots are always read-only.
    • Shares not created from snapshots are read-write and subject to ACLs.

    Note the following limitations for CIFS shares:

    • SMB shares require user access rules.
    • User access requires a 3PAR local or AD user, since LDAP is not yet supported.
    • Shares created from snapshots are always read-only.
    • Shares not created from snapshots are read-write (and subject to ACLs).
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

    Note the following limitations for shares:

    • Shares created from snapshots are always read-only.
  • Extend a share.

  • Shrink a share.

Share networks are not supported. Shares are created directly on the 3PAR without the use of a share server or service VM. Network connectivity is set up outside of the Shared File Systems service.

Requirements

On the system running the manila-share service:

  • python-3parclient version 4.2.0 or newer from PyPI.

On the HPE 3PAR array:

  • HPE 3PAR Operating System software version 3.2.1 MU3 or higher.
  • A license that enables the File Persona feature.
  • The array class and hardware configuration must support File.
Pre-configuration on the HPE 3PAR
  • HPE 3PAR File Persona must be initialized and started (startfs).
  • A File Provisioning Group (FPG) must be created for use with the Shared File Systems service.
  • A Virtual File Server (VFS) must be created for the FPG.
  • The VFS must be configured with an appropriate share export IP address.
  • A local user in the Administrators group is needed for CIFS shares.
Back end configuration

The following parameters need to be configured in the Shared File Systems service configuration file for the HPE 3PAR driver:

share_driver = manila.share.drivers.hpe.hpe_3par_driver.HPE3ParShareDriver
driver_handles_share_servers = False
hpe3par_share_ip_address = IP_address

The option hpe3par_share_ip_address must be a valid IP address for the configured FPG’s VFS. This IP address is used in export locations for shares that are created. Networking must be configured to allow connectivity from clients to shares.

Back end configuration for AD user

The following parameters need to be configured through the HPE 3PAR CLI to access file shares using AD.

  1. Set authentication parameters.

    $ setauthparam ldap-server IP_ADDRESS_OF_AD_SERVER
    $ setauthparam binding simple
    $ setauthparam user-attr AD_DOMAIN_NAME\\
    $ setauthparam accounts-dn CN=Users,DC=AD,DC=DOMAIN,DC=NAME
    $ setauthparam account-obj user
    $ setauthparam account-name-attr sAMAccountName
    $ setauthparam memberof-attr memberOf
    $ setauthparam super-map CN=AD_USER_GROUP,DC=AD,DC=DOMAIN,DC=NAME
    
  2. Verify that the new authentication parameters are set as expected.

    $ showauthparam
    
  3. Verify that AD users are set as expected.

    $ checkpassword AD_USER
    

    On successful configuration, the command displays: User AD_USER is authenticated and authorized.

  4. Add ActiveDirectory to the authentication providers list.

    $ setfs auth ActiveDirectory Local
    
  5. Verify that the authentication provider list shows ActiveDirectory.

    $ showfs -auth
    
  6. Set AD user on FS.

    $ setfs ad -passwd PASSWORD AD_USER AD_DOMAIN_NAME
    
  7. Verify FS user details.

    $ showfs -ad
    
Example of using AD user to access CIFS share

Prerequisite:

  • A share type must be configured for the 3PAR back end.
  1. Create a CIFS file share 2 GB in size.

    $ manila create --name FILE_SHARE_NAME --share-type SHARE_TYPE CIFS 2
    
  2. Check that the file share was created as expected.

    $ manila show FILE_SHARE_NAME
    
  3. Provide share access to AD user.

    $ manila access-allow FILE_SHARE_NAME user AD_DOMAIN_NAME\\\\AD_USER \
      --access-level rw
    
  4. Check that the AD user’s permissions are set as expected.

    $ manila access-list FILE_SHARE_NAME
    

    The list displays AD_DOMAIN_NAME\\AD_USER in the access_to column, and active in the state column.
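
The quadruple backslash in the access-allow command above exists because the shell consumes one level of escaping before the argument reaches manila. This can be modeled with Python's shlex, a rough approximation of POSIX shell word splitting (actual shells may differ):

```python
import shlex

# The command line as typed at a POSIX shell prompt (4 backslashes).
cmdline = r"manila access-allow FILE_SHARE_NAME user AD_DOMAIN_NAME\\\\AD_USER --access-level rw"

argv = shlex.split(cmdline)
# After shell processing, manila receives DOMAIN\\user (2 backslashes),
# which is the form later shown by `manila access-list`.
print(argv[4])  # AD_DOMAIN_NAME\\AD_USER
```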

Network approach

Connectivity between the storage array (SSH/CLI and WSAPI) and the Shared File Systems service host is required for share management.

Connectivity between the clients and the VFS is required for mounting and using the shares. This includes:

  • Routing from the client to the external network.
  • Assigning the client an external IP address, for example a floating IP.
  • Configuring the Shared File Systems service host networking properly for IP forwarding.
  • Configuring the VFS networking properly for client subnets.
Share types

When creating a share, a share type can be specified to determine where and how the share will be created. If a share type is not specified, the value from the option default_share_type set in the Shared File Systems service configuration file is used.

The Shared File Systems service requires that the share type includes the driver_handles_share_servers extra-spec. This ensures that the share will be created on a back end that supports the requested driver_handles_share_servers (share networks) capability. For the HPE 3PAR driver, this must be set to False.

Another common Shared File Systems service extra-spec used to determine where a share is created is share_backend_name. When this extra-spec is defined in the share type, the share will be created on a back end with a matching share_backend_name.

The HPE 3PAR driver automatically reports capabilities based on the FPG used for each back end. Share types with extra specs can be created by an administrator to control which share types are allowed to use FPGs with or without specific capabilities. The following extra-specs are used with the capabilities filter and the HPE 3PAR driver:

hpe3par_flash_cache
This is True for back ends that have 3PAR’s Adaptive Flash Cache enabled.
thin_provisioning
This is True for back ends that use thin provisioned volumes. For FPGs that use fully provisioned volumes this is False. Back ends that use thin provisioning also support the Shared File Systems service’s over-subscription feature.
dedupe
This is True for back ends that use deduplication technology.

Each can either be <is> True or <is> False.
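
As a sketch of how the scheduler's capabilities filter applies these <is> True/<is> False extra-specs (the back-end names and capability values below are invented; manila's real filter logic is more general than this):

```python
# Hypothetical capability reports from two back ends.
BACKENDS = {
    "hpe3par_fc":    {"hpe3par_flash_cache": True,  "thin_provisioning": True,  "dedupe": False},
    "hpe3par_thick": {"hpe3par_flash_cache": False, "thin_provisioning": False, "dedupe": False},
}

def matches(capabilities, extra_specs):
    for key, wanted in extra_specs.items():
        # "<is> True" / "<is> False" compare against the reported capability.
        expected = wanted.split()[-1] == "True"
        if capabilities.get(key) is not expected:
            return False
    return True

share_type_specs = {"hpe3par_flash_cache": "<is> True", "thin_provisioning": "<is> True"}
eligible = [name for name, caps in BACKENDS.items() if matches(caps, share_type_specs)]
print(eligible)  # ['hpe3par_fc']
```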

Scoped extra-specs are used to influence vendor-specific implementation details. Scoped extra-specs use a prefix followed by a colon. For HPE 3PAR these extra-specs have a prefix of hpe3par.

The following HPE 3PAR extra-specs are used when creating CIFS (SMB) shares:

hpe3par:smb_access_based_enum

Valid values are true or false.

This handles access-based enumeration and specifies if users can see only the files and directories to which they have been allowed access on the shares. The default is false.

hpe3par:smb_continuous_avail

Valid values are true or false.

This handles continuous availability and specifies if SMB3 continuous availability features should be enabled for this share. If not specified, the default is true.

hpe3par:smb_cache

This specifies client-side caching for offline files. Valid values are:

off
The client must not cache any files from this share. The share is configured to disallow caching.
manual
The client must allow only manual caching for the files open from this share. This is the default.
optimized
The client may cache every file that it opens from this share. Also, the client may satisfy the file requests from its local cache. The share is configured to allow automatic caching of programs and documents.
auto
The client may cache every file that it opens from this share. The share is configured to allow automatic caching of documents.

The following HPE 3PAR extra-specs are used when creating NFS shares:

hpe3par:nfs_options

This is a comma-separated list of NFS export options.

The NFS export options have the following limitations:

  • ro and rw are not allowed because the value will be determined by the driver.
  • no_subtree_check and fsid are not allowed per HPE 3PAR CLI support.
  • secure, insecure, root_squash, and no_root_squash are not allowed because the HPE 3PAR driver controls those settings.

All other NFS options are forwarded to the HPE 3PAR as part of share creation. The HPE 3PAR will do additional validation at share creation time. Refer to the HPE 3PAR CLI help for more details.
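
A pre-flight check for an hpe3par:nfs_options value can be sketched from the limitations above. The disallowed set is taken from the bullets; this is illustrative only, not the driver's actual validation:

```python
# Options the documentation above says are not allowed in hpe3par:nfs_options.
DISALLOWED = {"ro", "rw", "no_subtree_check", "fsid",
              "secure", "insecure", "root_squash", "no_root_squash"}

def validate_nfs_options(spec):
    """Split a comma-separated option list and flag disallowed entries."""
    opts = [o.strip() for o in spec.split(",") if o.strip()]
    # fsid may appear as fsid=<n>; compare the option name only.
    bad = [o for o in opts if o.split("=")[0] in DISALLOWED]
    return opts, bad

opts, bad = validate_nfs_options("sync,no_wdelay,insecure")
print(bad)  # ['insecure'] -- would be rejected
```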

Driver options

The following table contains the configuration options specific to the share driver.

Description of HPE 3PAR Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
hpe3par_api_url = (String) 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1
hpe3par_cifs_admin_access_domain = LOCAL_CLUSTER (String) File system domain for the CIFS admin user.
hpe3par_cifs_admin_access_password = (String) File system admin password for CIFS.
hpe3par_cifs_admin_access_username = (String) File system admin user name for CIFS.
hpe3par_debug = False (Boolean) Enable HTTP debugging to 3PAR
hpe3par_fpg = None (Unknown) The File Provisioning Group (FPG) to use
hpe3par_fstore_per_share = False (Boolean) Use one filestore per share
hpe3par_password = (String) 3PAR password for the user specified in hpe3par_username
hpe3par_require_cifs_ip = False (Boolean) Require IP access rules for CIFS (in addition to user)
hpe3par_san_ip = (String) IP address of SAN controller
hpe3par_san_login = (String) Username for SAN controller
hpe3par_san_password = (String) Password for SAN controller
hpe3par_san_ssh_port = 22 (Port number) SSH port to use with SAN
hpe3par_share_mount_path = /mnt/ (String) The path where shares will be mounted when deleting nested file trees.
hpe3par_username = (String) 3PAR username with the ‘edit’ role
Set up the HPE 3PAR environment
  1. Install the python-3parclient package on the system running the manila-share service.

    $ pip install 'python-3parclient>=4.2.0,<5.0'
    
  2. Verify that the HPE 3PAR web services API server is enabled and running on the HPE 3PAR storage system.

    1. Log in to the HPE 3PAR storage system as an administrator.

      $ ssh 3paradm@<HPE 3PAR IP Address>
      
    2. View the current state of the Web Services API Server.

      $ showwsapi
      -Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
      Enabled   Active  Enabled      8008      Enabled       8080       1.1
      
    3. If the web services API Server is disabled, start it.

      $ startwsapi
      
  3. If the HTTP or HTTPS state is disabled, enable one of them.

    $ setwsapi -http enable
    

    or, for HTTPS

    $ setwsapi -https enable
    

    Note

    To stop the Web Services API Server, use the stopwsapi command. For other options, run the setwsapi -h command.

Delete nested shares

When a nested share is deleted (nested shares are created when hpe3par_fstore_per_share is set to False), the driver also attempts to delete the file tree.

With NFS shares, no additional configuration is needed.

For CIFS shares, hpe3par_cifs_admin_access_username and hpe3par_cifs_admin_access_password must be provided. If they are omitted, the original functionality is honored and the file tree remains untouched. hpe3par_cifs_admin_access_domain and hpe3par_share_mount_path can also be specified to create further customization. For more information on these configuration values, see Driver options.
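
Pulling these options together, a CIFS-capable back-end section that allows nested file trees to be removed on delete might look like the following sketch; the section name and all values are placeholders for your environment:

```ini
[hpe3par1]
share_backend_name = HPE3PAR1
share_driver = manila.share.drivers.hpe.hpe_3par_driver.HPE3ParShareDriver
driver_handles_share_servers = False
hpe3par_share_ip_address = 10.50.3.7
# Nested shares: one file tree per share inside a shared filestore.
hpe3par_fstore_per_share = False
# Required so the driver can remove CIFS file trees on delete:
hpe3par_cifs_admin_access_username = fsadmin
hpe3par_cifs_admin_access_password = fsadmin_password
hpe3par_cifs_admin_access_domain = LOCAL_CLUSTER
hpe3par_share_mount_path = /mnt/
```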

Huawei driver

The Huawei NAS driver is a plug-in for the Shared File Systems service. It can be used to provide functions such as shares and snapshots for virtual machines (instances) in OpenStack. The driver enables the OceanStor V3 series V300R002 storage system to provide only network filesystems for OpenStack.

Requirements
  • The OceanStor V3 series V300R002 storage system.
  • The following licenses should be activated on V3 for File: CIFS, NFS, HyperSnap License (for snapshot).
Supported shared filesystems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported for NFS.
    • Only user access is supported for CIFS.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Support pools in one back end.

  • Extend a share.

  • Shrink a share.

  • Create a replica.

  • Delete a replica.

  • Promote a replica.

  • Update a replica state.

Pre-configurations on Huawei
  1. Create a driver configuration file. The driver configuration file name must be the same as the manila_huawei_conf_file item in the manila.conf configuration file.

  2. Configure the product. Product indicates the storage system type. For the OceanStor V3 series V300R002 storage systems, the driver configuration file is as follows:

    <?xml version='1.0' encoding='UTF-8'?>
    <Config>
        <Storage>
            <Product>V3</Product>
            <LogicalPortIP>x.x.x.x</LogicalPortIP>
            <RestURL>https://x.x.x.x:8088/deviceManager/rest/</RestURL>
            <UserName>xxxxxxxxx</UserName>
            <UserPassword>xxxxxxxxx</UserPassword>
        </Storage>
        <Filesystem>
            <Thin_StoragePool>xxxxxxxxx</Thin_StoragePool>
            <Thick_StoragePool>xxxxxxxxx</Thick_StoragePool>
            <WaitInterval>3</WaitInterval>
            <Timeout>60</Timeout>
        </Filesystem>
    </Config>
    

    The options are:

    • Product is a type of storage product. Set it to V3.
    • LogicalPortIP is the IP address of the logical port.
    • RestURL is the access address of the REST interface. Multiple RestURLs can be configured in <RestURL>, separated by ";". The driver automatically retries another RestURL if one fails to connect.
    • UserName is the user name of an administrator.
    • UserPassword is the password of an administrator.
    • Thin_StoragePool is the name of a thin storage pool to be used.
    • Thick_StoragePool is the name of a thick storage pool to be used.
    • WaitInterval is the interval between queries of the file system status.
    • Timeout is the timeout period for waiting for command execution on a device to complete.
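
The XML file above can be generated programmatically to keep it well-formed. The sketch below uses Python's ElementTree with placeholder values, and shows that multiple REST addresses share a single <RestURL> element separated by ";":

```python
import xml.etree.ElementTree as ET

def build_conf(rest_urls, user, password, logical_ip, thin_pool, thick_pool):
    """Build the Huawei driver configuration tree shown above."""
    root = ET.Element("Config")
    storage = ET.SubElement(root, "Storage")
    ET.SubElement(storage, "Product").text = "V3"
    ET.SubElement(storage, "LogicalPortIP").text = logical_ip
    # Multiple REST URLs go into one element, separated by ";".
    ET.SubElement(storage, "RestURL").text = ";".join(rest_urls)
    ET.SubElement(storage, "UserName").text = user
    ET.SubElement(storage, "UserPassword").text = password
    fs = ET.SubElement(root, "Filesystem")
    ET.SubElement(fs, "Thin_StoragePool").text = thin_pool
    ET.SubElement(fs, "Thick_StoragePool").text = thick_pool
    ET.SubElement(fs, "WaitInterval").text = "3"
    ET.SubElement(fs, "Timeout").text = "60"
    return root

conf = build_conf(["https://10.0.0.1:8088/deviceManager/rest/",
                   "https://10.0.0.2:8088/deviceManager/rest/"],
                  "admin", "secret", "10.0.0.5", "pool_thin", "pool_thick")
print(conf.find("Storage/RestURL").text.count(";"))  # 1 separator, 2 URLs
```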
Back end configuration

Modify the manila.conf Shared File Systems service configuration file and add share_driver and manila_huawei_conf_file items. Here is an example for configuring a storage system:

share_driver = manila.share.drivers.huawei.huawei_nas.HuaweiNasDriver
manila_huawei_conf_file = /etc/manila/manila_huawei_conf.xml
driver_handles_share_servers = False
Driver options

The following table contains the configuration options specific to the share driver.

Description of Huawei Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
manila_huawei_conf_file = /etc/manila/manila_huawei_conf.xml (String) The configuration file for the Manila Huawei driver.
IBM GPFS driver

The GPFS driver uses IBM General Parallel File System (GPFS), a high-performance, clustered file system, developed by IBM, as the storage back end for serving file shares to the Shared File Systems service clients.

Supported shared filesystems and operations

The driver supports NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported.
    • Only read-write access level is supported.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

Requirements
  • Install GPFS with server license, version >= 2.0, on the storage back end.
  • Install Kernel NFS or Ganesha NFS server on the storage back-end servers.
  • If using Ganesha NFS, currently NFS Ganesha v1.5 and v2.0 are supported.
  • Create a GPFS cluster and create a file system on the cluster that will be used to create the Shared File Systems service shares.
  • Enable quotas for the GPFS file system, use mmchfs -Q yes.
  • Establish network connection between the Shared File Systems service host and the storage back end.
Shared File Systems service driver configuration setting

The following parameters in the Shared File Systems service configuration file need to be set:

share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver
gpfs_share_export_ip = <IP to be added to GPFS export string>

If the back-end GPFS server is not running on the Shared File Systems service host machine, the following options are required to SSH to the remote GPFS back-end server:

gpfs_ssh_login = <GPFS server SSH login name>

Also one of the following settings is required to execute commands over SSH:

gpfs_ssh_private_key = <path to GPFS server SSH private key for login>

or:

gpfs_ssh_password = <GPFS server SSH login password>
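
Combined, a back-end section for a remote GPFS server reached over SSH might look like the sketch below. The section name and values are placeholders, and key-based login is shown (gpfs_ssh_password could be used instead):

```ini
[gpfs1]
share_backend_name = GPFS1
share_driver = manila.share.drivers.ibm.gpfs.GPFSShareDriver
driver_handles_share_servers = False
gpfs_share_export_ip = 192.168.100.10
gpfs_nfs_server_type = KNFS
# Remote GPFS back end, reached over SSH:
is_gpfs_node = False
gpfs_ssh_login = gpfsadmin
gpfs_ssh_private_key = /etc/manila/ssh/gpfs_id_rsa
```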
Known restrictions
  • The driver does not support a segmented-network multi-tenancy model but instead works over a flat network where the tenants share a network.
  • When using a remote GPFS node with Ganesha NFS, gpfs_ssh_private_key for remote login to the GPFS node must be specified, and passwordless authentication must already be set up between the manila-share service and the remote GPFS node.
Driver options

The following table contains the configuration options specific to the share driver.

Description of IBM GPFS Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
gpfs_mount_point_base = $state_path/mnt (String) Base folder where exported shares are located.
gpfs_nfs_server_list = None (List) A list of the fully qualified NFS server names that make up the OpenStack Manila configuration.
gpfs_nfs_server_type = KNFS (String) NFS Server type. Valid choices are “KNFS” (kernel NFS) or “CES” (Ganesha NFS).
gpfs_share_export_ip = None (String) IP to be added to GPFS export string.
gpfs_share_helpers = KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper, CES=manila.share.drivers.ibm.gpfs.CESHelper (List) Specify list of share export helpers.
gpfs_ssh_login = None (String) GPFS server SSH login name.
gpfs_ssh_password = None (String) GPFS server SSH login password. The password is not needed, if ‘gpfs_ssh_private_key’ is configured.
gpfs_ssh_port = 22 (Port number) GPFS server SSH port.
gpfs_ssh_private_key = None (String) Path to GPFS server SSH private key for login.
is_gpfs_node = False (Boolean) Set to True when the Manila services are running on one of the Spectrum Scale nodes, and to False when they are not.
knfs_export_options = rw,sync,no_root_squash,insecure,no_wdelay,no_subtree_check (String) DEPRECATED: Options to use when exporting a share using kernel NFS server. Note that these defaults can be overridden when a share is created by passing metadata with key name export_options. This option isn’t used any longer. Please use share-type extra specs for export options.
NetApp Clustered Data ONTAP driver

The Shared File Systems service can be configured to use NetApp clustered Data ONTAP version 8.

Network approach

L3 connectivity between the storage cluster and Shared File Systems service host should exist, and VLAN segmentation should be configured.

The clustered Data ONTAP driver creates storage virtual machines (SVM, previously known as vServers) as representations of the Shared File Systems service share server interface, configures logical interfaces (LIFs) and stores shares there.

Supported shared filesystems and operations

The driver supports CIFS and NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported for NFS.
    • Only user access type is supported for CIFS.
  • Deny share access.

  • Create a snapshot.

  • Delete a snapshot.

  • Create a share from a snapshot.

  • Extend a share.

  • Shrink a share.

  • Create a consistency group.

  • Delete a consistency group.

  • Create a consistency group snapshot.

  • Delete a consistency group snapshot.

Required licenses
  • NFS
  • CIFS
  • FlexClone
Known restrictions
  • For CIFS shares, an external Active Directory service is required. Its data should be provided via a security service attached to the share network in use.
  • A user access rule for a CIFS share can be created only for a user that exists in Active Directory.
  • To configure clients against the security services, the time on the external security services and the storage must be synchronized. The maximum allowed clock skew is 5 minutes.
Driver options

The following table contains the configuration options specific to the share driver.

Description of NetApp Share Drivers configuration options
Configuration option = Default value Description
[DEFAULT]  
netapp_aggregate_name_search_pattern = (.*) (String) Pattern for searching available aggregates for provisioning.
netapp_enabled_share_protocols = nfs3, nfs4.0 (List) The NFS protocol versions that will be enabled. Supported values include nfs3, nfs4.0, nfs4.1. This option only applies when the option driver_handles_share_servers is set to True.
netapp_lif_name_template = os_%(net_allocation_id)s (String) Logical interface (LIF) name template
netapp_login = None (String) Administrative user account name used to access the storage system.
netapp_password = None (String) Password for the administrative user account specified in the netapp_login option.
netapp_port_name_search_pattern = (.*) (String) Pattern for overriding the selection of network ports on which to create Vserver LIFs.
netapp_root_volume = root (String) Root volume name.
netapp_root_volume_aggregate = None (String) Name of aggregate to create Vserver root volumes on. This option only applies when the option driver_handles_share_servers is set to True.
netapp_server_hostname = None (String) The hostname (or IP address) for the storage system.
netapp_server_port = None (Port number) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS.
netapp_snapmirror_quiesce_timeout = 3600 (Integer) The maximum time in seconds to wait for existing snapmirror transfers to complete before aborting when promoting a replica.
netapp_storage_family = ontap_cluster (String) The storage family type used on the storage system; valid values include ontap_cluster for using clustered Data ONTAP.
netapp_trace_flags = None (String) Comma-separated list of options that control which trace info is written to the debug logs. Values include method and api.
netapp_transport_type = http (String) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https.
netapp_volume_name_template = share_%(share_id)s (String) NetApp volume name template.
netapp_volume_snapshot_reserve_percent = 5 (Integer) The percentage of share space set aside as reserve for snapshot usage; valid values range from 0 to 90.
netapp_vserver_name_template = os_%s (String) Name template to use for new Vserver.
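
As a sketch, the options in the table combine into a back-end section like the one below. The share_driver entry point and all values are assumptions; verify them against your manila release before use:

```ini
[netapp1]
share_backend_name = NETAPP1
# Entry point is an assumption; confirm for your release.
share_driver = manila.share.drivers.netapp.common.NetAppDriver
driver_handles_share_servers = True
netapp_server_hostname = 10.0.0.30
netapp_transport_type = https
netapp_login = admin
netapp_password = NETAPP_PASSWORD
netapp_root_volume_aggregate = aggr1
netapp_enabled_share_protocols = nfs3,nfs4.0
```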
Quobyte Driver

Quobyte can be used as a storage back end for the OpenStack Shared File System service. Shares in the Shared File System service are mapped 1:1 to Quobyte volumes. Access is provided via NFS protocol and IP-based authentication. The Quobyte driver uses the Quobyte API service.

Supported shared filesystems and operations

The driver supports NFS shares.

The following operations are supported:

  • Create a share.

  • Delete a share.

  • Allow share access.

    Note the following limitations:

    • Only IP access type is supported.
  • Deny share access.

Driver options

The following table contains the configuration options specific to the share driver.

Description of Quobyte Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
quobyte_api_ca = None (String) The X.509 CA file to verify the server cert.
quobyte_api_password = quobyte (String) Password for Quobyte API server
quobyte_api_url = None (String) URL of the Quobyte API server (http or https)
quobyte_api_username = admin (String) Username for Quobyte API server.
quobyte_default_volume_group = root (String) Default owning group for new volumes.
quobyte_default_volume_user = root (String) Default owning user for new volumes.
quobyte_delete_shares = False (Boolean) Actually deletes shares (vs. unexport)
quobyte_volume_configuration = BASE (String) Name of volume configuration used for new shares.
Configuration

To configure Quobyte access for the Shared File System service, a back end configuration section has to be added in the manila.conf file. Add the name of the configuration section to enabled_share_backends in the manila.conf file. For example, if the section is named Quobyte:

enabled_share_backends = Quobyte

Create the new back end configuration section, in this case named Quobyte:

[Quobyte]

share_driver = manila.share.drivers.quobyte.quobyte.QuobyteShareDriver
share_backend_name = QUOBYTE
quobyte_api_url = http://api.myserver.com:1234/
quobyte_delete_shares = False
quobyte_volume_configuration = BASE
quobyte_default_volume_user = myuser
quobyte_default_volume_group = mygroup

The section name must match the name used in the enabled_share_backends option described above. The share_driver setting is required as shown; the other options should be set according to your local Quobyte setup.

Other security-related options are:

quobyte_api_ca = /path/to/API/server/verification/certificate
quobyte_api_username = api_user
quobyte_api_password = api_user_pwd

Quobyte support can be found at the Quobyte support webpage.

To use different share drivers for the Shared File Systems service, use the parameters described in these sections.

The Shared File Systems service can handle multiple drivers at once. The configuration for all of them follows a common paradigm:

  1. In the configuration file manila.conf, configure the option enabled_share_backends with the list of names for your back-end configurations.

    For example, if you want to enable two drivers and name them Driver1 and Driver2:

    [DEFAULT]
    ...
    enabled_share_backends = Driver1,Driver2
    
  2. Configure a separate section for each driver using these names. You need to define in each section at least the option share_driver and assign it the value of your driver. In this example it is the generic driver:

    [Driver1]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    ...
    
    [Driver2]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    ...
    

The share drivers are included in the Shared File Systems repository.

Log files used by Shared File Systems

The corresponding log file of each Shared File Systems service is stored in the /var/log/manila/ directory of the host on which each service runs.

Log files used by Shared File Systems services
Log file Service/interface (for CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise) Service/interface (for Ubuntu and Debian)
api.log openstack-manila-api manila-api
manila-manage.log manila-manage manila-manage
scheduler.log openstack-manila-scheduler manila-scheduler
share.log openstack-manila-share manila-share
data.log openstack-manila-data manila-data

Additional options

These options can also be set in the manila.conf file.

Description of Certificate Authority configuration options
Configuration option = Default value Description
[DEFAULT]  
ssl_ca_file = None (String) CA certificate file to use to verify connecting clients.
ssl_cert_file = None (String) Certificate file to use when starting the server securely.
ssl_key_file = None (String) Private key file to use when starting the server securely.
Description of Common configuration options
Configuration option = Default value Description
[DEFAULT]  
check_hash = False (Boolean) Whether the hash of each file should be verified during data copying.
client_socket_timeout = 900 (Integer) Timeout for client connections socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ means wait forever.
compute_api_class = manila.compute.nova.API (String) The full class name of the Compute API class to use.
data_access_wait_access_rules_timeout = 180 (Integer) Time to wait for access rules to be allowed/denied on backends when migrating a share (seconds).
data_manager = manila.data.manager.DataManager (String) Full class name for the data manager.
data_node_access_admin_user = None (String) The admin user name registered in the security service in order to allow access to user authentication-based shares.
data_node_access_cert = None (String) The certificate installed in the data node in order to allow access to certificate authentication-based shares.
data_node_access_ip = None (String) The IP of the node interface connected to the admin network. Used when allowing access in order to mount shares.
data_node_mount_options = {} (Dict) Mount options to be included in the mount command for share protocols. Use dictionary format, example: {‘nfs’: ‘-o nfsvers=3’, ‘cifs’: ‘-o user=foo,pass=bar’}
data_topic = manila-data (String) The topic data nodes listen on.
enable_new_services = True (Boolean) Whether services are added to the available pool when they are created.
fatal_exception_format_errors = False (Boolean) Whether to make exception message format errors fatal.
filter_function = None (String) String representation for an equation that will be used to filter hosts.
host = <your_hostname> (String) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address.
max_over_subscription_ratio = 20.0 (Floating point) Float representation of the over subscription ratio when thin provisioning is involved. Default ratio is 20.0, meaning provisioned capacity can be 20 times the total physical capacity. If the ratio is 10.5, it means provisioned capacity can be 10.5 times the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. A ratio lower than 1.0 is invalid.
memcached_servers = None (List) Memcached servers or None for in process cache.
monkey_patch = False (Boolean) Whether to log monkey patching.
monkey_patch_modules = (List) List of modules or decorators to monkey patch.
mount_tmp_location = /tmp/ (String) Temporary path to create and mount shares during migration.
my_ip = <your_ip> (String) IP address of this host.
num_shell_tries = 3 (Integer) Number of times to attempt to run flakey shell commands.
periodic_fuzzy_delay = 60 (Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_hooks_interval = 300.0 (Floating point) Interval in seconds between execution of periodic hooks. Used when option ‘enable_periodic_hooks’ is set to True. Default is 300.
periodic_interval = 60 (Integer) Seconds between running periodic tasks.
replica_state_update_interval = 300 (Integer) This value, specified in seconds, determines how often the share manager will poll for the health (replica_state) of each replica instance.
replication_domain = None (String) A string specifying the replication domain that the backend belongs to. This option needs to be specified the same in the configuration sections of all backends that support replication between each other. If this option is not specified in the group, it means that replication is not enabled on the backend.
report_interval = 10 (Integer) Seconds between nodes reporting state to datastore.
reserved_share_percentage = 0 (Integer) The percentage of backend capacity reserved.
rootwrap_config = None (String) Path to the rootwrap configuration file to use for running commands as root.
service_down_time = 60 (Integer) Maximum time since last check-in for up service.
smb_template_config_path = $state_path/smb.conf (String) Path to smb config.
sql_idle_timeout = 3600 (Integer) Timeout before idle SQL connections are reaped.
sql_max_retries = 10 (Integer) Maximum database connection retries during startup. (setting -1 implies an infinite retry count).
sql_retry_interval = 10 (Integer) Interval between retries of opening a SQL connection.
sqlite_clean_db = clean.sqlite (String) File name of clean sqlite database.
sqlite_db = manila.sqlite (String) The filename to use with sqlite.
sqlite_synchronous = True (Boolean) If passed, use synchronous mode for sqlite.
state_path = /var/lib/manila (String) Top-level directory for maintaining manila’s state.
storage_availability_zone = nova (String) Availability zone of this node.
tcp_keepalive = True (Boolean) Sets the value of TCP_KEEPALIVE (True/False) for each server socket.
tcp_keepalive_count = None (Integer) Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
tcp_keepalive_interval = None (Integer) Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X.
tcp_keepidle = 600 (Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
until_refresh = 0 (Integer) Count of reservations until usage is refreshed.
use_forwarded_for = False (Boolean) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy.
wsgi_keep_alive = True (Boolean) If False, closes the client socket connection explicitly. The default of True is kept for backward compatibility; setting this option to False is recommended.
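For instance, the dictionary format expected by data_node_mount_options above can be written in manila.conf as in this sketch (the mount options themselves are illustrative):

```ini
[DEFAULT]
# Per-protocol mount options for the data node (illustrative values,
# following the dictionary format documented above).
data_node_mount_options = {'nfs': '-o nfsvers=3', 'cifs': '-o user=foo,pass=bar'}
```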
Description of Compute configuration options
Configuration option = Default value Description
[DEFAULT]  
nova_admin_auth_url = http://localhost:5000/v2.0 (String) DEPRECATED: Identity service URL. This option isn’t used any longer. Please use [nova] url instead.
nova_admin_password = None (String) DEPRECATED: Nova admin password. This option isn’t used any longer. Please use [nova] password instead.
nova_admin_tenant_name = service (String) DEPRECATED: Nova admin tenant name. This option isn’t used any longer. Please use [nova] tenant instead.
nova_admin_username = nova (String) DEPRECATED: Nova admin username. This option isn’t used any longer. Please use [nova] username instead.
nova_catalog_admin_info = compute:nova:adminURL (String) DEPRECATED: Same as nova_catalog_info, but for admin endpoint. This option isn’t used any longer.
nova_catalog_info = compute:nova:publicURL (String) DEPRECATED: Info to match when looking for nova in the service catalog. Format is separated values of the form: <service_type>:<service_name>:<endpoint_type> This option isn’t used any longer.
os_region_name = None (String) Region name of this node.
Description of Ganesha configuration options
Configuration option = Default value Description
[DEFAULT]  
ganesha_config_dir = /etc/ganesha (String) Directory where Ganesha config files are stored.
ganesha_config_path = $ganesha_config_dir/ganesha.conf (String) Path to main Ganesha config file.
ganesha_db_path = $state_path/manila-ganesha.db (String) Location of Ganesha database file. (Ganesha module only.)
ganesha_export_dir = $ganesha_config_dir/export.d (String) Path to directory containing Ganesha export configuration. (Ganesha module only.)
ganesha_export_template_dir = /etc/manila/ganesha-export-templ.d (String) Path to directory containing Ganesha export block templates. (Ganesha module only.)
ganesha_nfs_export_options = maxread = 65536, prefread = 65536 (String) Options to use when exporting a share using ganesha NFS server. Note that these defaults can be overridden when a share is created by passing metadata with key name export_options. Also note the complete set of default ganesha export options is specified in ganesha_utils. (GPFS only.)
ganesha_service_name = ganesha.nfsd (String) Name of the ganesha nfs service.
Description of hnas configuration options
Configuration option = Default value Description
[DEFAULT]  
hds_hnas_driver_helper = manila.share.drivers.hitachi.ssh.HNASSSHBackend (String) Python class to be used for driver helper.
Description of Quota configuration options
Configuration option = Default value Description
[DEFAULT]  
max_age = 0 (Integer) Number of seconds between subsequent usage refreshes.
max_gigabytes = 10000 (Integer) Maximum number of volume gigabytes to allow per host.
quota_driver = manila.quota.DbQuotaDriver (String) Default driver to use for quota checks.
quota_gigabytes = 1000 (Integer) Number of share gigabytes allowed per project.
quota_share_networks = 10 (Integer) Number of share-networks allowed per project.
quota_shares = 50 (Integer) Number of shares allowed per project.
quota_snapshot_gigabytes = 1000 (Integer) Number of snapshot gigabytes allowed per project.
quota_snapshots = 50 (Integer) Number of share snapshots allowed per project.
reservation_expire = 86400 (Integer) Number of seconds until a reservation expires.
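As a sketch, the quota options above can be raised for busier deployments by overriding them in manila.conf (the values below are illustrative, not recommendations):

```ini
[DEFAULT]
# Raise the default per-project limits (illustrative values).
quota_shares = 100
quota_snapshots = 100
quota_gigabytes = 2000
quota_snapshot_gigabytes = 2000
```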
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations.
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of SAN configuration options
Configuration option = Default value Description
[DEFAULT]  
ssh_conn_timeout = 60 (Integer) Backend server SSH connection timeout.
ssh_max_pool_conn = 10 (Integer) Maximum number of connections in the SSH pool.
ssh_min_pool_conn = 1 (Integer) Minimum number of connections in the SSH pool.
Description of Scheduler configuration options
Configuration option = Default value Description
[DEFAULT]  
capacity_weight_multiplier = 1.0 (Floating point) Multiplier used for weighing share capacity. Negative numbers mean to stack vs spread.
pool_weight_multiplier = 1.0 (Floating point) Multiplier used for weighing pools which have existing share servers. Negative numbers mean to spread vs stack.
scheduler_default_filters = AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter, ConsistencyGroupFilter, DriverFilter, ShareReplicationFilter (List) Which filter class names to use for filtering hosts when not specified in the request.
scheduler_default_weighers = CapacityWeigher, GoodnessWeigher (List) Which weigher class names to use for weighing hosts.
scheduler_driver = manila.scheduler.drivers.filter.FilterScheduler (String) Default scheduler driver to use.
scheduler_host_manager = manila.scheduler.host_manager.HostManager (String) The scheduler host manager class to use.
scheduler_json_config_location = (String) Absolute path to scheduler configuration JSON file.
scheduler_manager = manila.scheduler.manager.SchedulerManager (String) Full class name for the scheduler manager.
scheduler_max_attempts = 3 (Integer) Maximum number of attempts to schedule a share.
scheduler_topic = manila-scheduler (String) The topic scheduler nodes listen on.
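For example, a deployment that does not use consistency groups or share replication could trim the filter list; this is a sketch under that assumption, not a general recommendation:

```ini
[DEFAULT]
# Use only a subset of the default scheduler filters (illustrative).
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
```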
Description of Share configuration options
Configuration option = Default value Description
[DEFAULT]  
automatic_share_server_cleanup = True (Boolean) If set to True, Manila will delete all share servers that have been unused for longer than the specified time. If set to False, automatic deletion of share servers is disabled.
backlog = 4096 (Integer) Number of backlog requests to configure the socket with.
default_share_type = None (String) Default share type to use.
delete_share_server_with_last_share = False (Boolean) Whether share servers will be deleted on deletion of the last share.
driver_handles_share_servers = None (Boolean) There are two possible approaches for share drivers in Manila: a driver either handles share servers or it does not, and a driver can support one or both approaches. Set this option to True if the share driver is able to handle share servers and that mode is desired; otherwise set it to False. The default is None so that the choice must be made intentionally.
enable_periodic_hooks = False (Boolean) Whether to enable periodic hooks or not.
enable_post_hooks = False (Boolean) Whether to enable post hooks or not.
enable_pre_hooks = False (Boolean) Whether to enable pre hooks or not.
enabled_share_backends = None (List) A list of share backend names to use. These backend names should be backed by a unique [CONFIG] group with its options.
enabled_share_protocols = NFS, CIFS (List) List of protocols to be allowed for share creation. Available values are NFS, CIFS, GLUSTERFS, HDFS, and CEPHFS.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
hook_drivers = (List) Driver(s) to perform some additional actions before and after share driver actions and on a periodic basis. Default is [].
migration_create_delete_share_timeout = 300 (Integer) Timeout for creating and deleting share instances when performing share migration (seconds).
migration_driver_continue_update_interval = 60 (Integer) This value, specified in seconds, determines how often the share manager will poll the driver to perform the next step of migration in the storage backend, for a migrating share.
migration_ignore_files = lost+found (List) List of files and folders to be ignored when migrating shares. Items should be names (not including any path).
migration_readonly_rules_support = True (Boolean) Specify whether read only access rule mode is supported in this backend.
migration_wait_access_rules_timeout = 180 (Integer) Time to wait for access rules to be allowed/denied on backends when migrating shares using generic approach (seconds).
network_config_group = None (String) Name of the configuration group in the Manila conf file in which to look for network config options. If not set, the share backend's config group will be used. If an option is not found within the provided group, the 'DEFAULT' group will be searched.
root_helper = sudo (String) Deprecated: command to use for running commands as root.
share_manager = manila.share.manager.ShareManager (String) Full class name for the share manager.
share_name_template = share-%s (String) Template string to be used to generate share names.
share_snapshot_name_template = share-snapshot-%s (String) Template string to be used to generate share snapshot names.
share_topic = manila-share (String) The topic share nodes listen on.
share_usage_audit_period = month (String) Time period to generate share usages for. Time period must be hour, day, month or year.
suppress_post_hooks_errors = False (Boolean) Whether to suppress post hook errors (allow driver’s results to pass through) or not.
suppress_pre_hooks_errors = False (Boolean) Whether to suppress pre hook errors (allow driver perform actions) or not.
unmanage_remove_access_rules = False (Boolean) If set to True, Manila will deny access and remove all access rules when a share is unmanaged. If set to False, nothing will be changed.
unused_share_server_cleanup_interval = 10 (Integer) Unused share server reclamation time interval (minutes). Minimum value is 10 minutes, maximum is 60 minutes. The reclamation function runs every 10 minutes and deletes any share server that has been unused for longer than this option defines. This value is the shortest time Manila will wait for a share server to go unutilized before deleting it.
use_scheduler_creating_share_from_snapshot = False (Boolean) If set to False, then share creation from snapshot will be performed on the same host. If set to True, then scheduling step will be used.
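The share_name_template and share_snapshot_name_template options are ordinary %s-style format strings. As a minimal sketch of how such a template expands (the identifier below is a made-up placeholder):

```python
# Sketch of how '%s'-style name templates expand. The template values
# match the documented defaults; the identifier is a made-up placeholder.
share_name_template = "share-%s"
share_snapshot_name_template = "share-snapshot-%s"

share_id = "3f9c6a2b"  # hypothetical share identifier
print(share_name_template % share_id)           # share-3f9c6a2b
print(share_snapshot_name_template % share_id)  # share-snapshot-3f9c6a2b
```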
Description of Tegile Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
tegile_default_project = None (String) Create shares in this project
tegile_nas_login = None (String) User name for the Tegile NAS server.
tegile_nas_password = None (String) Password for the Tegile NAS server.
tegile_nas_server = None (String) Tegile NAS server hostname or IP address.
Description of WinRM configuration options
Configuration option = Default value Description
[DEFAULT]  
winrm_cert_key_pem_path = ~/.ssl/key.pem (String) Path to the x509 certificate key.
winrm_cert_pem_path = ~/.ssl/cert.pem (String) Path to the x509 certificate used for accessing the service instance.
winrm_conn_timeout = 60 (Integer) WinRM connection timeout.
winrm_operation_timeout = 60 (Integer) WinRM operation timeout.
winrm_retry_count = 3 (Integer) WinRM retry count.
winrm_retry_interval = 5 (Integer) WinRM retry interval in seconds.
winrm_use_cert_based_auth = False (Boolean) Use x509 certificates in order to authenticate to the service instance.
Description of ZFSSA Share Driver configuration options
Configuration option = Default value Description
[DEFAULT]  
zfssa_auth_password = None (String) ZFSSA management authorized user password.
zfssa_auth_user = None (String) ZFSSA management authorized username.
zfssa_data_ip = None (String) IP address for data.
zfssa_host = None (String) ZFSSA management IP address.
zfssa_manage_policy = loose (String) Driver policy for share manage. A strict policy checks for a schema named manila_managed, and makes sure its value is true. A loose policy does not check for the schema.
zfssa_nas_checksum = fletcher4 (String) Controls checksum used for data blocks.
zfssa_nas_compression = off (String) Data compression: off, lzjb, gzip-2, gzip, gzip-9.
zfssa_nas_logbias = latency (String) Controls behavior when servicing synchronous writes.
zfssa_nas_mountpoint = (String) Location of project in ZFS/SA.
zfssa_nas_quota_snap = true (String) Controls whether a share quota includes snapshots.
zfssa_nas_rstchown = true (String) Controls whether file ownership can be changed.
zfssa_nas_vscan = false (String) Controls whether the share is scanned for viruses.
zfssa_pool = None (String) ZFSSA storage pool name.
zfssa_project = None (String) ZFSSA project name.
zfssa_rest_timeout = None (String) REST connection timeout (in seconds).

Shared File Systems service sample configuration files

All the files in this section can be found in /etc/manila.

manila.conf

The manila.conf file is installed in /etc/manila by default. When you manually install the Shared File Systems service, the options in the manila.conf file are set to default values.

The manila.conf file contains most of the options needed to configure the Shared File Systems service.

[DEFAULT]

#
# From manila
#

# The maximum number of items returned in a single response from a
# collection resource. (integer value)
#osapi_max_limit = 1000

# Base URL to be presented to users in links to the Share API (string
# value)
#osapi_share_base_URL = <None>

# Treat X-Forwarded-For as the canonical remote address. Only enable
# this if you have a sanitizing proxy. (boolean value)
#use_forwarded_for = false

# File name for the paste.deploy config for manila-api. (string value)
#api_paste_config = api-paste.ini

# Top-level directory for maintaining manila's state. (string value)
#state_path = /var/lib/manila

# Region name of this node. (string value)
#os_region_name = <None>

# IP address of this host. (string value)
#my_ip = <your_ip>

# The topic scheduler nodes listen on. (string value)
#scheduler_topic = manila-scheduler

# The topic share nodes listen on. (string value)
#share_topic = manila-share

# The topic data nodes listen on. (string value)
#data_topic = manila-data

# Whether to rate limit the API. (boolean value)
#api_rate_limit = true

# Specify list of extensions to load when using osapi_share_extension
# option with manila.api.contrib.select_extensions. (list value)
#osapi_share_ext_list =

# The osapi share extensions to load. (list value)
#osapi_share_extension = manila.api.contrib.standard_extensions

# The filename to use with sqlite. (string value)
#sqlite_db = manila.sqlite

# If passed, use synchronous mode for sqlite. (boolean value)
#sqlite_synchronous = true

# Timeout before idle SQL connections are reaped. (integer value)
#sql_idle_timeout = 3600

# Maximum database connection retries during startup. (setting -1
# implies an infinite retry count). (integer value)
#sql_max_retries = 10

# Interval between retries of opening a SQL connection. (integer
# value)
#sql_retry_interval = 10

# Full class name for the scheduler manager. (string value)
#scheduler_manager = manila.scheduler.manager.SchedulerManager

# Full class name for the share manager. (string value)
#share_manager = manila.share.manager.ShareManager

# Full class name for the data manager. (string value)
#data_manager = manila.data.manager.DataManager

# Name of this node.  This can be an opaque identifier.  It is not
# necessarily a hostname, FQDN, or IP address. (string value)
#host = <your_hostname>

# Availability zone of this node. (string value)
#storage_availability_zone = nova

# Default share type to use. (string value)
#default_share_type = <None>

# Memcached servers or None for in process cache. (list value)
#memcached_servers = <None>

# Time period to generate share usages for.  Time period must be hour,
# day, month or year. (string value)
#share_usage_audit_period = month

# Deprecated: command to use for running commands as root. (string
# value)
#root_helper = sudo

# Path to the rootwrap configuration file to use for running commands
# as root. (string value)
#rootwrap_config = <None>

# Whether to log monkey patching. (boolean value)
#monkey_patch = false

# List of modules or decorators to monkey patch. (list value)
#monkey_patch_modules =

# Maximum time since last check-in for up service. (integer value)
#service_down_time = 60

# The full class name of the share API class to use. (string value)
#share_api_class = manila.share.api.API

# The strategy to use for auth. Supports noauth, keystone, and
# deprecated. (string value)
#auth_strategy = keystone

# A list of share backend names to use. These backend names should be
# backed by a unique [CONFIG] group with its options. (list value)
#enabled_share_backends = <None>

# Specify list of protocols to be allowed for share creation.
# Available values are '('NFS', 'CIFS', 'GLUSTERFS', 'HDFS',
# 'CEPHFS')' (list value)
#enabled_share_protocols = NFS,CIFS

# The full class name of the Compute API class to use. (string value)
#compute_api_class = manila.compute.nova.API

# The backend to use for database. (string value)
#db_backend = sqlalchemy

# Services to be added to the available pool on create. (boolean
# value)
#enable_new_services = true

# Template string to be used to generate share names. (string value)
#share_name_template = share-%s

# Template string to be used to generate share snapshot names. (string
# value)
#share_snapshot_name_template = share-snapshot-%s

# Driver to use for database access. (string value)
#db_driver = manila.db

# Whether to make exception message format errors fatal. (boolean
# value)
#fatal_exception_format_errors = false

# Name of Open vSwitch bridge to use. (string value)
#ovs_integration_bridge = br-int

# The full class name of the Networking API class to use. (string
# value)
# Deprecated group/name - [DEFAULT]/network_api_class
#network_api_class = manila.network.neutron.neutron_network_plugin.NeutronNetworkPlugin

# vNIC type used for binding. (string value)
# Allowed values: baremetal, normal, direct, direct-physical, macvtap
#neutron_vnic_type = baremetal

# Host ID to be used when creating neutron port. If not set host is
# set to manila-share host by default. (string value)
#neutron_host_id = openstack-VirtualBox

# Default Neutron network that will be used for share server creation.
# This opt is used only with class 'NeutronSingleNetworkPlugin'.
# (string value)
# Deprecated group/name - [DEFAULT]/neutron_net_id
#neutron_net_id = <None>

# Default Neutron subnet that will be used for share server creation.
# Should be assigned to network defined in opt 'neutron_net_id'. This
# opt is used only with class 'NeutronSingleNetworkPlugin'. (string
# value)
# Deprecated group/name - [DEFAULT]/neutron_subnet_id
#neutron_subnet_id = <None>

# Default Nova network that will be used for share servers. This opt
# is used only with class 'NovaSingleNetworkPlugin'. (string value)
# Deprecated group/name - [DEFAULT]/nova_single_network_plugin_net_id
#nova_single_network_plugin_net_id = <None>

# Gateway IPv4 address that should be used. Required. (string value)
# Deprecated group/name - [DEFAULT]/standalone_network_plugin_gateway
#standalone_network_plugin_gateway = <None>

# Network mask that will be used. Can be either decimal like '24' or
# binary like '255.255.255.0'. Required. (string value)
# Deprecated group/name - [DEFAULT]/standalone_network_plugin_mask
#standalone_network_plugin_mask = <None>

# Network type, such as 'flat', 'vlan', 'vxlan' or 'gre'. Empty value
# is alias for 'flat'. It will be assigned to share-network and share
# drivers will be able to use this for network interfaces within
# provisioned share servers. Optional. (string value)
# Allowed values: flat, vlan, vxlan, gre
# Deprecated group/name - [DEFAULT]/standalone_network_plugin_network_type
#standalone_network_plugin_network_type = <None>

# Set it if network has segmentation (VLAN, VXLAN, etc...). It will be
# assigned to share-network and share drivers will be able to use this
# for network interfaces within provisioned share servers. Optional.
# Example: 1001 (integer value)
# Deprecated group/name - [DEFAULT]/standalone_network_plugin_segmentation_id
#standalone_network_plugin_segmentation_id = <None>

# Can be IP address, range of IP addresses or list of addresses or
# ranges. Contains addresses from IP network that are allowed to be
# used. If empty, then will be assumed that all host addresses from
# network can be used. Optional. Examples: 10.0.0.10 or
# 10.0.0.10-10.0.0.20 or
# 10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40,10.0.0.50 (list value)
# Deprecated group/name - [DEFAULT]/standalone_network_plugin_allowed_ip_ranges
#standalone_network_plugin_allowed_ip_ranges = <None>

# IP version of network. Optional.Allowed values are '4' and '6'.
# Default value is '4'. (integer value)
# Deprecated group/name - [DEFAULT]/standalone_network_plugin_ip_version
#standalone_network_plugin_ip_version = 4

# Maximum Transmission Unit (MTU) value of the network. Default value
# is 1500. (integer value)
# Deprecated group/name - [DEFAULT]/standalone_network_plugin_mtu
#standalone_network_plugin_mtu = 1500

# Number of shares allowed per project. (integer value)
#quota_shares = 50

# Number of share snapshots allowed per project. (integer value)
#quota_snapshots = 50

# Number of share gigabytes allowed per project. (integer value)
#quota_gigabytes = 1000

# Number of snapshot gigabytes allowed per project. (integer value)
#quota_snapshot_gigabytes = 1000

# Number of share-networks allowed per project. (integer value)
#quota_share_networks = 10

# Number of seconds until a reservation expires. (integer value)
#reservation_expire = 86400

# Count of reservations until usage is refreshed. (integer value)
#until_refresh = 0

# Number of seconds between subsequent usage refreshes. (integer
# value)
#max_age = 0

# Default driver to use for quota checks. (string value)
#quota_driver = manila.quota.DbQuotaDriver

# The scheduler host manager class to use. (string value)
#scheduler_host_manager = manila.scheduler.host_manager.HostManager

# Maximum number of attempts to schedule a share. (integer value)
#scheduler_max_attempts = 3

# Which filter class names to use for filtering hosts when not
# specified in the request. (list value)
#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,ConsistencyGroupFilter,DriverFilter,ShareReplicationFilter

# Which weigher class names to use for weighing hosts. (list value)
#scheduler_default_weighers = CapacityWeigher,GoodnessWeigher

# Default scheduler driver to use. (string value)
#scheduler_driver = manila.scheduler.drivers.filter.FilterScheduler

# Absolute path to scheduler configuration JSON file. (string value)
#scheduler_json_config_location =

# Maximum number of volume gigabytes to allow per host. (integer
# value)
#max_gigabytes = 10000

# Multiplier used for weighing share capacity. Negative numbers mean
# to stack vs spread. (floating point value)
#capacity_weight_multiplier = 1.0

# Multiplier used for weighing pools which have existing share
# servers. Negative numbers mean to spread vs stack. (floating point
# value)
#pool_weight_multiplier = 1.0

# Seconds between nodes reporting state to datastore. (integer value)
#report_interval = 10

# Seconds between running periodic tasks. (integer value)
#periodic_interval = 60

# Range of seconds to randomly delay when starting the periodic task
# scheduler to reduce stampeding. (Disable by setting to 0) (integer
# value)
#periodic_fuzzy_delay = 60

# IP address for OpenStack Share API to listen on. (string value)
#osapi_share_listen = ::

# Port for OpenStack Share API to listen on. (port value)
# Minimum value: 0
# Maximum value: 65535
#osapi_share_listen_port = 8786

# Number of workers for OpenStack Share API service. (integer value)
#osapi_share_workers = 1

# If set to False, then share creation from snapshot will be performed
# on the same host. If set to True, then scheduling step will be used.
# (boolean value)
#use_scheduler_creating_share_from_snapshot = false

# Directory where Ganesha config files are stored. (string value)
#ganesha_config_dir = /etc/ganesha

# Path to main Ganesha config file. (string value)
#ganesha_config_path = $ganesha_config_dir/ganesha.conf

# Options to use when exporting a share using ganesha NFS server. Note
# that these defaults can be overridden when a share is created by
# passing metadata with key name export_options.  Also note the
# complete set of default ganesha export options is specified in
# ganesha_utils. (GPFS only.) (string value)
#ganesha_nfs_export_options = maxread = 65536, prefread = 65536

# Name of the ganesha nfs service. (string value)
#ganesha_service_name = ganesha.nfsd

# Location of Ganesha database file. (Ganesha module only.) (string
# value)
#ganesha_db_path = $state_path/manila-ganesha.db

# Path to directory containing Ganesha export configuration. (Ganesha
# module only.) (string value)
#ganesha_export_dir = $ganesha_config_dir/export.d

# Path to directory containing Ganesha export block templates.
# (Ganesha module only.) (string value)
#ganesha_export_template_dir = /etc/manila/ganesha-export-templ.d

# Number of times to attempt to run flakey shell commands. (integer
# value)
#num_shell_tries = 3

# The percentage of backend capacity reserved. (integer value)
#reserved_share_percentage = 0

# The backend name for a given driver implementation. (string value)
#share_backend_name = <None>

# Name of the configuration group in the Manila conf file to look for
# network config options.If not set, the share backend's config group
# will be used.If an option is not found within provided group,
# then'DEFAULT' group will be used for search of option. (string
# value)
#network_config_group = <None>

# Share drivers in Manila can either handle share servers or not.
# Drivers may support both of these modes or only one of them. Set
# this option to True if the share driver is able to handle share
# servers and that mode is desired; otherwise set it to False. It is
# set to None by default to make this choice intentional. (boolean
# value)
#driver_handles_share_servers = <None>
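# For example, a backend section for a driver that manages its own
# share servers might look like the following (the section and backend
# names are illustrative, not defaults):
#
# [generic_backend]
# share_backend_name = GENERIC
# driver_handles_share_servers = true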

# Float representation of the over subscription ratio when thin
# provisioning is involved. Default ratio is 20.0, meaning provisioned
# capacity can be 20 times the total physical capacity. If the ratio
# is 10.5, it means provisioned capacity can be 10.5 times the total
# physical capacity. A ratio of 1.0 means provisioned capacity cannot
# exceed the total physical capacity. A ratio lower than 1.0 is
# invalid. (floating point value)
#max_over_subscription_ratio = 20.0

# List of files and folders to be ignored when migrating shares. Items
# should be names (not including any path). (list value)
#migration_ignore_files = lost+found

# The template for mounting shares for this backend. Must specify the
# executable with all necessary parameters for the protocol supported.
# 'proto' template element may not be required if included in the
# command. 'export' and 'path' template elements are required. It is
# advisable to separate different commands per backend. (string value)
#share_mount_template = mount -vt %(proto)s %(options)s %(export)s %(path)s

# The template for unmounting shares for this backend. Must specify
# the executable with all necessary parameters for the protocol
# supported. 'path' template element is required. It is advisable to
# separate different commands per backend. (string value)
#share_unmount_template = umount -v %(path)s

# Protocol access mapping for this backend. Should be a dictionary
# comprised of {'access_type1': ['share_proto1', 'share_proto2'],
# 'access_type2': ['share_proto2', 'share_proto3']}. (dict value)
#protocol_access_mapping = ip:['nfs'],user:['cifs']

# Specify whether read-only access rule mode is supported in this
# backend. (boolean value)
# Deprecated group/name - [DEFAULT]/migration_readonly_support
#migration_readonly_rules_support = true

# If the share driver requires an admin network to be set up for
# shares, define the network plugin config options in a separate
# config group and set its name here. Used only when the
# 'driver_handles_share_servers' option is set to 'True'. (string
# value)
#admin_network_config_group = <None>

# A string specifying the replication domain that the backend belongs
# to. This option must be set to the same value in the configuration
# sections of all backends that support replication between each
# other. If this option is not specified in the group, replication is
# not enabled on the backend. (string value)
#replication_domain = <None>

# String representation for an equation that will be used to filter
# hosts. (string value)
#filter_function = <None>

# String representation for an equation that will be used to determine
# the goodness of a host. (string value)
#goodness_function = <None>
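# For example (illustrative equations; the variables available depend
# on the driver and its reported capabilities), a backend could be
# restricted to small shares and weighted by requested size:
#
# filter_function = "share.size < 100"
# goodness_function = "(share.size >= 10) * 100"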

# Backend server SSH connection timeout. (integer value)
#ssh_conn_timeout = 60

# Minimum number of connections in the SSH pool. (integer value)
#ssh_min_pool_conn = 1

# Maximum number of connections in the SSH pool. (integer value)
#ssh_max_pool_conn = 10

# The full class name of the Private Data Driver class to use. (string
# value)
#drivers_private_storage_class = manila.share.drivers_private_data.SqlStorageDriver

# Fully qualified path to the ceph.conf file. (string value)
#cephfs_conf_path =

# The name of the cluster in use, if it is not the default ('ceph').
# (string value)
#cephfs_cluster_name = <None>

# The name of the ceph auth identity to use. (string value)
#cephfs_auth_id = manila

# Whether to enable snapshots in this driver. (boolean value)
#cephfs_enable_snapshots = false

# Linux bridge used by container hypervisor to plug host-side veth to.
# It will be unplugged from here by the driver. (string value)
#container_linux_bridge_name = docker0

# OVS bridge to use to plug a container to. (string value)
#container_ovs_bridge_name = br-int

# Determines whether to allow guest access to CIFS share or not.
# (boolean value)
#container_cifs_guest_ok = true

# Image to be used for a container-based share server. (string value)
#container_image_name = manila-docker-container

# Container helper which provides container-related operations to the
# driver. (string value)
#container_helper = manila.share.drivers.container.container_helper.DockerExecHelper

# Helper which facilitates interaction with share server. (string
# value)
#container_protocol_helper = manila.share.drivers.container.protocol_helper.DockerCIFSHelper

# Helper which facilitates interaction with storage solution used to
# actually store data. By default LVM is used to provide storage for a
# share. (string value)
#container_storage_helper = manila.share.drivers.container.storage_helper.LVMHelper

# LVM volume group to use for volumes. This volume group must be
# created by the cloud administrator independently from manila
# operations. (string value)
#container_volume_group = manila_docker_volumes

# User name for the EMC server. (string value)
#emc_nas_login = <None>

# Password for the EMC server. (string value)
#emc_nas_password = <None>

# EMC server hostname or IP address. (string value)
#emc_nas_server = <None>

# Port number for the EMC server. (port value)
# Minimum value: 0
# Maximum value: 65535
#emc_nas_server_port = 8080

# Use secure connection to server. (boolean value)
#emc_nas_server_secure = true

# Share backend. (string value)
# Allowed values: isilon, vnx, unity
#emc_share_backend = <None>

# Container of share servers. (string value)
#emc_nas_server_container = <None>

# EMC pool names. (list value)
# Deprecated group/name - [DEFAULT]/emc_nas_pool_name
#emc_nas_pool_names = <None>

# The root directory where shares will be located. (string value)
#emc_nas_root_dir = <None>

# Pool to persist the meta-data of NAS server. (string value)
#emc_nas_server_pool = <None>

# Comma-separated list specifying the ports that can be used for share
# server interfaces. Members of the list can be Unix-style glob
# expressions. (list value)
#emc_interface_ports = <None>

# Path to smb config. (string value)
#smb_template_config_path = $state_path/smb.conf

# Volume name template. (string value)
#volume_name_template = manila-share-%s

# Volume snapshot name template. (string value)
#volume_snapshot_name_template = manila-snapshot-%s

# Parent path in service instance where shares will be mounted.
# (string value)
#share_mount_path = /shares

# Maximum time to wait for creating cinder volume. (integer value)
#max_time_to_create_volume = 180

# Maximum time to wait for extending cinder volume. (integer value)
#max_time_to_extend_volume = 180

# Maximum time to wait for attaching cinder volume. (integer value)
#max_time_to_attach = 120

# Path to SMB config in service instance. (string value)
#service_instance_smb_config_path = $share_mount_path/smb.conf

# Specify list of share export helpers. (list value)
#share_helpers = CIFS=manila.share.drivers.helpers.CIFSHelperIPAccess,NFS=manila.share.drivers.helpers.NFSHelper

# Filesystem type of the share volume. (string value)
# Allowed values: ext4, ext3
#share_volume_fstype = ext4

# Name or ID of the cinder volume type which will be used for all
# volumes created by the driver. (string value)
#cinder_volume_type = <None>

# Remote GlusterFS server node's login password. This is not required
# if 'glusterfs_path_to_private_key' is configured. (string value)
# Deprecated group/name - [DEFAULT]/glusterfs_native_server_password
#glusterfs_server_password = <None>

# Path of Manila host's private SSH key file. (string value)
# Deprecated group/name - [DEFAULT]/glusterfs_native_path_to_private_key
#glusterfs_path_to_private_key = <None>

# Type of NFS server that mediates access to the Gluster volumes
# (Gluster or Ganesha). (string value)
#glusterfs_nfs_server_type = Gluster

# Remote Ganesha server node's IP address. (string value)
#glusterfs_ganesha_server_ip = <None>

# Remote Ganesha server node's username. (string value)
#glusterfs_ganesha_server_username = root

# Remote Ganesha server node's login password. This is not required if
# 'glusterfs_path_to_private_key' is configured. (string value)
#glusterfs_ganesha_server_password = <None>

# Specifies GlusterFS share layout, that is, the method of associating
# backing GlusterFS resources to shares. (string value)
#glusterfs_share_layout = <None>

# Specifies the GlusterFS volume to be mounted on the Manila host. It
# is of the form [remoteuser@]<volserver>:<volid>. (string value)
#glusterfs_target = <None>

# Base directory containing mount points for Gluster volumes. (string
# value)
#glusterfs_mount_point_base = $state_path/mnt

# List of GlusterFS servers that can be used to create shares. Each
# GlusterFS server should be of the form [remoteuser@]<volserver>, and
# they are assumed to belong to distinct Gluster clusters. (list
# value)
# Deprecated group/name - [DEFAULT]/glusterfs_targets
#glusterfs_servers =

# Regular expression template used to filter GlusterFS volumes for
# share creation. The regex template can optionally (i.e. with support
# of the GlusterFS backend) contain the #{size} parameter, which
# matches an integer (sequence of digits); in that case the value is
# interpreted as the size of the volume in GB. Examples: "manila-
# share-volume-\d+$", "manila-share-volume-#{size}G-\d+$"; these match
# volume names such as "manila-share-volume-12" and
# "manila-share-volume-3G-13", respectively. In the latter example,
# the number that matches "#{size}", that is, 3, indicates that the
# size of the volume is 3 GB. (string value)
#glusterfs_volume_pattern = <None>
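# For example, uncommented, the second pattern from the help text
# above would select volumes named like "manila-share-volume-3G-13"
# and read their size from the name:
#
# glusterfs_volume_pattern = manila-share-volume-#{size}G-\d+$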

# The IP of the HDFS namenode. (string value)
#hdfs_namenode_ip = <None>

# The port of HDFS namenode service. (port value)
# Minimum value: 0
# Maximum value: 65535
#hdfs_namenode_port = 9000

# HDFS namenode SSH port. (port value)
# Minimum value: 0
# Maximum value: 65535
#hdfs_ssh_port = 22

# HDFS namenode ssh login name. (string value)
#hdfs_ssh_name = <None>

# HDFS namenode SSH login password. This parameter is not necessary
# if 'hdfs_ssh_private_key' is configured. (string value)
#hdfs_ssh_pw = <None>

# Path to HDFS namenode SSH private key for login. (string value)
#hdfs_ssh_private_key = <None>

# HNAS management interface IP for communication between Manila
# controller and HNAS. (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_ip
#hitachi_hnas_ip = <None>

# HNAS username (Base64 string) used to perform tasks such as creating
# file systems and network interfaces. (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_user
#hitachi_hnas_user = <None>

# HNAS user password. Required only if private key is not provided.
# (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_password
#hitachi_hnas_password = <None>

# Specify which EVS this backend is assigned to. (integer value)
# Deprecated group/name - [DEFAULT]/hds_hnas_evs_id
#hitachi_hnas_evs_id = <None>

# Specify IP for mounting shares. (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_evs_ip
#hitachi_hnas_evs_ip = <None>

# Specify file-system name for creating shares. (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_file_system_name
#hitachi_hnas_file_system_name = <None>

# RSA/DSA private key value used to connect to HNAS. Required only
# if password is not provided. (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_ssh_private_key
#hitachi_hnas_ssh_private_key = <None>

# The IP of the cluster's admin node. Only set in HNAS multinode
# clusters. (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_cluster_admin_ip0
#hitachi_hnas_cluster_admin_ip0 = <None>

# The time (in seconds) to wait for stalled HNAS jobs before aborting.
# (integer value)
# Deprecated group/name - [DEFAULT]/hds_hnas_stalled_job_timeout
#hitachi_hnas_stalled_job_timeout = 30

# Python class to be used for driver helper. (string value)
# Deprecated group/name - [DEFAULT]/hds_hnas_driver_helper
#hitachi_hnas_driver_helper = manila.share.drivers.hitachi.hnas.ssh.HNASSSHBackend

# By default, CIFS snapshots are not allowed to be taken when the
# share has clients connected, because a consistent point-in-time
# replica cannot be guaranteed for all files. Enabling this might
# cause inconsistent snapshots on CIFS shares. (boolean value)
# Deprecated group/name - [DEFAULT]/hds_hnas_allow_cifs_snapshot_while_mounted
#hitachi_hnas_allow_cifs_snapshot_while_mounted = false

# HSP management host for communication between Manila controller and
# HSP. (string value)
#hitachi_hsp_host = <None>

# HSP username to perform tasks such as create filesystems and shares.
# (string value)
#hitachi_hsp_username = <None>

# HSP password for the username provided. (string value)
#hitachi_hsp_password = <None>

# 3PAR WSAPI server URL, e.g. https://<3par ip>:8080/api/v1 (string
# value)
# Deprecated group/name - [DEFAULT]/hp3par_api_url
#hpe3par_api_url =

# 3PAR username with the 'edit' role (string value)
# Deprecated group/name - [DEFAULT]/hp3par_username
#hpe3par_username =

# 3PAR password for the user specified in hpe3par_username (string
# value)
# Deprecated group/name - [DEFAULT]/hp3par_password
#hpe3par_password =

# IP address of SAN controller (string value)
# Deprecated group/name - [DEFAULT]/hp3par_san_ip
#hpe3par_san_ip =

# Username for SAN controller (string value)
# Deprecated group/name - [DEFAULT]/hp3par_san_login
#hpe3par_san_login =

# Password for SAN controller (string value)
# Deprecated group/name - [DEFAULT]/hp3par_san_password
#hpe3par_san_password =

# SSH port to use with SAN (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/hp3par_san_ssh_port
#hpe3par_san_ssh_port = 22

# The File Provisioning Group (FPG) to use. (FPG value)
# Deprecated group/name - [DEFAULT]/hp3par_fpg
#hpe3par_fpg = <None>

# Use one filestore per share (boolean value)
# Deprecated group/name - [DEFAULT]/hp3par_fstore_per_share
#hpe3par_fstore_per_share = false

# Require IP access rules for CIFS (in addition to user) (boolean
# value)
#hpe3par_require_cifs_ip = false

# Enable HTTP debugging to 3PAR (boolean value)
# Deprecated group/name - [DEFAULT]/hp3par_debug
#hpe3par_debug = false

# File system admin user name for CIFS. (string value)
# Deprecated group/name - [DEFAULT]/hp3par_cifs_admin_access_username
#hpe3par_cifs_admin_access_username =

# File system admin password for CIFS. (string value)
# Deprecated group/name - [DEFAULT]/hp3par_cifs_admin_access_password
#hpe3par_cifs_admin_access_password =

# File system domain for the CIFS admin user. (string value)
# Deprecated group/name - [DEFAULT]/hp3par_cifs_admin_access_domain
#hpe3par_cifs_admin_access_domain = LOCAL_CLUSTER

# The path where shares will be mounted when deleting nested file
# trees. (string value)
# Deprecated group/name - [DEFAULT]/hpe3par_share_mount_path
#hpe3par_share_mount_path = /mnt/

# The configuration file for the Manila Huawei driver. (string value)
#manila_huawei_conf_file = /etc/manila/manila_huawei_conf.xml

# IP to be added to GPFS export string. (string value)
#gpfs_share_export_ip = <None>

# Base folder where exported shares are located. (string value)
#gpfs_mount_point_base = $state_path/mnt

# NFS Server type. Valid choices are "KNFS" (kernel NFS) or "CES"
# (Ganesha NFS). (string value)
#gpfs_nfs_server_type = KNFS

# A list of the fully qualified NFS server names that make up the
# OpenStack Manila configuration. (list value)
#gpfs_nfs_server_list = <None>

# True: when Manila services are running on one of the Spectrum Scale
# nodes. False: when Manila services are not running on any of the
# Spectrum Scale nodes. (boolean value)

# GPFS server SSH port. (port value)
# Minimum value: 0
# Maximum value: 65535
#gpfs_ssh_port = 22

# GPFS server SSH login name. (string value)
#gpfs_ssh_login = <None>

# GPFS server SSH login password. The password is not needed if
# 'gpfs_ssh_private_key' is configured. (string value)
#gpfs_ssh_password = <None>

# Path to GPFS server SSH private key for login. (string value)
#gpfs_ssh_private_key = <None>

# Specify list of share export helpers. (list value)
#gpfs_share_helpers = KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper,CES=manila.share.drivers.ibm.gpfs.CESHelper

# DEPRECATED: Options to use when exporting a share using kernel NFS
# server. Note that these defaults can be overridden when a share is
# created by passing metadata with key name export_options. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option isn't used any longer. Please use share-type
# extra specs for export options.
#knfs_export_options = rw,sync,no_root_squash,insecure,no_wdelay,no_subtree_check

# Base folder where exported shares are located. (string value)
#lvm_share_export_root = $state_path/mnt

# IP to be added to export string. (string value)
#lvm_share_export_ip = <None>

# If set, create LVMs with multiple mirrors. Note that this requires
# lvm_mirrors + 2 PVs with available space. (integer value)
#lvm_share_mirrors = 0

# Name for the VG that will contain exported shares. (string value)
#lvm_share_volume_group = lvm-shares

# Specify list of share export helpers. (list value)
#lvm_share_helpers = CIFS=manila.share.drivers.helpers.CIFSHelperUserAccess,NFS=manila.share.drivers.helpers.NFSHelper

# The storage family type used on the storage system; valid values
# include ontap_cluster for using clustered Data ONTAP. (string value)
#netapp_storage_family = ontap_cluster

# The hostname (or IP address) for the storage system. (string value)
# Deprecated group/name - [DEFAULT]/netapp_nas_server_hostname
#netapp_server_hostname = <None>

# The TCP port to use for communication with the storage system or
# proxy server. If not specified, Data ONTAP drivers will use 80 for
# HTTP and 443 for HTTPS. (port value)
# Minimum value: 0
# Maximum value: 65535
#netapp_server_port = <None>

# The transport protocol used when communicating with the storage
# system or proxy server. Valid values are http or https. (string
# value)
# Deprecated group/name - [DEFAULT]/netapp_nas_transport_type
#netapp_transport_type = http

# Administrative user account name used to access the storage system.
# (string value)
# Deprecated group/name - [DEFAULT]/netapp_nas_login
#netapp_login = <None>

# Password for the administrative user account specified in the
# netapp_login option. (string value)
# Deprecated group/name - [DEFAULT]/netapp_nas_password
#netapp_password = <None>

# The NFS protocol versions that will be enabled. Supported values
# include nfs3, nfs4.0, nfs4.1. This option only applies when the
# option driver_handles_share_servers is set to True.  (list value)
#netapp_enabled_share_protocols = nfs3,nfs4.0

# NetApp volume name template. (string value)
# Deprecated group/name - [DEFAULT]/netapp_nas_volume_name_template
#netapp_volume_name_template = share_%(share_id)s

# Name template to use for new Vserver. (string value)
#netapp_vserver_name_template = os_%s

# Pattern for overriding the selection of network ports on which to
# create Vserver LIFs. (string value)
#netapp_port_name_search_pattern = (.*)

# Logical interface (LIF) name template (string value)
#netapp_lif_name_template = os_%(net_allocation_id)s

# Pattern for searching available aggregates for provisioning. (string
# value)
#netapp_aggregate_name_search_pattern = (.*)

# Name of aggregate to create Vserver root volumes on. This option
# only applies when the option driver_handles_share_servers is set to
# True. (string value)
#netapp_root_volume_aggregate = <None>

# Root volume name. (string value)
# Deprecated group/name - [DEFAULT]/netapp_root_volume_name
#netapp_root_volume = root

# The percentage of share space set aside as reserve for snapshot
# usage; valid values range from 0 to 90. (integer value)
# Minimum value: 0
# Maximum value: 90
#netapp_volume_snapshot_reserve_percent = 5

# The maximum time in seconds to wait for existing snapmirror
# transfers to complete before aborting when promoting a replica.
# (integer value)
# Minimum value: 0
#netapp_snapmirror_quiesce_timeout = 3600

# IP address of Nexenta storage appliance. (string value)
#nexenta_host = <None>

# Port to connect to Nexenta REST API server. (integer value)
#nexenta_rest_port = 8457

# Number of retries for unsuccessful API calls. (integer value)
#nexenta_retry_count = 6

# Use http or https for REST connection (default auto). (string value)
# Allowed values: http, https, auto
#nexenta_rest_protocol = auto

# User name to connect to Nexenta SA. (string value)
#nexenta_user = admin

# Password to connect to Nexenta SA. (string value)
#nexenta_password = <None>

# Volume name on NexentaStor. (string value)
#nexenta_volume = volume1

# Pool name on NexentaStor. (string value)
#nexenta_pool = pool1

# Set on if sharing over NFS is enabled. (boolean value)
#nexenta_nfs = true

# Parent folder on NexentaStor. (string value)
#nexenta_nfs_share = nfs_share

# Compression value for new ZFS folders. (string value)
# Allowed values: on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8, gzip-9, lzjb, zle, lz4
#nexenta_dataset_compression = on

# Deduplication value for new ZFS folders. (string value)
# Allowed values: on, off, sha256, verify, sha256,verify
#nexenta_dataset_dedupe = off

# If True, shares will not be space guaranteed and overprovisioning
# will be enabled. (boolean value)
#nexenta_thin_provisioning = true

# Base directory that contains NFS share mount points. (string value)
#nexenta_mount_point_base = $state_path/mnt

# URL of the Quobyte API server (http or https) (string value)
#quobyte_api_url = <None>

# The X.509 CA file to verify the server cert. (string value)
#quobyte_api_ca = <None>

# Actually deletes shares (vs. unexport) (boolean value)
#quobyte_delete_shares = false

# Username for Quobyte API server. (string value)
#quobyte_api_username = admin

# Password for Quobyte API server (string value)
#quobyte_api_password = quobyte

# Name of volume configuration used for new shares. (string value)
#quobyte_volume_configuration = BASE

# Default owning user for new volumes. (string value)
#quobyte_default_volume_user = root

# Default owning group for new volumes. (string value)
#quobyte_default_volume_group = root

# User in service instance that will be used for authentication.
# (string value)
#service_instance_user = <None>

# Password for service instance user. (string value)
#service_instance_password = <None>

# Path to host's private key. (string value)
#path_to_private_key = <None>

# Maximum time in seconds to wait for creating service instance.
# (integer value)
#max_time_to_build_instance = 300

# Name or ID of service instance in Nova to use for share exports.
# Used only when share servers handling is disabled. (string value)
#service_instance_name_or_id = <None>

# Can be either the name of the network that the service instance uses
# within Nova to get an IP address, or the IP address itself, for
# managing shares there. Used only when share servers handling is
# disabled. (string value)
#service_net_name_or_ip = <None>

# Can be either the name of the network that the service instance uses
# within Nova to get an IP address, or the IP address itself, for
# exporting shares. Used only when share servers handling is disabled.
# (string value)
#tenant_net_name_or_ip = <None>

# Name of the image in Glance that will be used for service instance
# creation. Only used if driver_handles_share_servers=True. (string
# value)
#service_image_name = manila-service-image

# Name of service instance. Only used if
# driver_handles_share_servers=True. (string value)
#service_instance_name_template = manila_service_instance_%s

# Keypair name that will be created and used for service instances.
# Only used if driver_handles_share_servers=True. (string value)
#manila_service_keypair_name = manila-service

# Path to the host's public key. Only used if
# driver_handles_share_servers=True. (string value)
#path_to_public_key = ~/.ssh/id_rsa.pub

# Security group name that will be used for service instance
# creation. Only used if driver_handles_share_servers=True. (string
# value)
#service_instance_security_group = manila-service

# ID of the flavor that will be used for service instance creation.
# Only used if driver_handles_share_servers=True. (integer value)
#service_instance_flavor_id = 100

# Name of manila service network. Used only with Neutron. Only used if
# driver_handles_share_servers=True. (string value)
#service_network_name = manila_service_network

# CIDR of manila service network. Used only with Neutron and if
# driver_handles_share_servers=True. (string value)
#service_network_cidr = 10.254.0.0/16

# This mask is used to divide the service network into subnets. The IP
# capacity of a subnet with this mask directly defines the possible
# number of service VMs created per tenant subnet. Used only with
# Neutron and if driver_handles_share_servers=True. (integer value)
#service_network_division_mask = 28
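# For example, the default mask of 28 splits the service network into
# /28 subnets of 16 addresses each; after the network, broadcast, and
# gateway addresses are reserved, roughly 13 service VMs fit per
# tenant subnet:
#
# service_network_division_mask = 28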

# Vif driver. Used only with Neutron and if
# driver_handles_share_servers=True. (string value)
#interface_driver = manila.network.linux.interface.OVSInterfaceDriver

# Attach share server directly to share network. Used only with
# Neutron and if driver_handles_share_servers=True. (boolean value)
#connect_share_server_to_tenant_network = false

# Allowed values are ['nova', 'neutron']. Only used if
# driver_handles_share_servers=True. (string value)
#service_instance_network_helper_type = neutron

# ID of neutron network used to communicate with admin network, to
# create additional admin export locations on. (string value)
#admin_network_id = <None>

# ID of neutron subnet used to communicate with admin network, to
# create additional admin export locations on. Related to
# 'admin_network_id'. (string value)
#admin_subnet_id = <None>

# Tegile NAS server hostname or IP address. (string value)
#tegile_nas_server = <None>

# User name for the Tegile NAS server. (string value)
#tegile_nas_login = <None>

# Password for the Tegile NAS server. (string value)
#tegile_nas_password = <None>

# Create shares in this project (string value)
#tegile_default_project = <None>

# Path to the x509 certificate used for accessing the service
# instance. (string value)
#winrm_cert_pem_path = ~/.ssl/cert.pem

# Path to the x509 certificate key. (string value)
#winrm_cert_key_pem_path = ~/.ssl/key.pem

# Use x509 certificates in order to authenticate to the service
# instance. (boolean value)
#winrm_use_cert_based_auth = false

# WinRM connection timeout. (integer value)
#winrm_conn_timeout = 60

# WinRM operation timeout. (integer value)
#winrm_operation_timeout = 60

# WinRM retry count. (integer value)
#winrm_retry_count = 3

# WinRM retry interval in seconds (integer value)
#winrm_retry_interval = 5

# IP to be added to user-facing export location. Required. (string
# value)
#zfs_share_export_ip = <None>

# IP to be added to admin-facing export location. Required. (string
# value)
#zfs_service_ip = <None>

# Specify list of zpools that are allowed to be used by backend. Can
# contain nested datasets. Examples: Without nested dataset:
# 'zpool_name'. With nested dataset: 'zpool_name/nested_dataset_name'.
# Required. (list value)
#zfs_zpool_list = <None>

# Define here list of options that should be applied for each dataset
# creation if needed. Example: compression=gzip,dedup=off. Note that,
# for secondary replicas option 'readonly' will be set to 'on' and for
# active replicas to 'off' in any way. Also, 'quota' will be equal to
# share size. Optional. (list value)
#zfs_dataset_creation_options = <None>

# Prefix to be used in each dataset name. Optional. (string value)
#zfs_dataset_name_prefix = manila_share_

# Prefix to be used in each dataset snapshot name. Optional. (string
# value)
#zfs_dataset_snapshot_name_prefix = manila_share_snapshot_

# Whether to use SSH for running commands on the remote ZFS storage
# host. Optional. (boolean value)
#zfs_use_ssh = false

# SSH user that will be used in two cases: 1) by the manila-share
# service when it is located on a different host than its ZFS storage;
# 2) by manila-share services with other ZFS backends that perform
# replication. SSH access is expected to be key-based and
# passwordless. This user should be a passwordless sudoer. Optional.
# (string value)
#zfs_ssh_username = <None>

# Password for the user used to SSH into the ZFS storage host. Not
# used for replication operations, which require passwordless SSH
# access. Optional. (string value)
#zfs_ssh_user_password = <None>

# Path to SSH private key that should be used for SSH'ing ZFS storage
# host. Not used for replication operations. Optional. (string value)
#zfs_ssh_private_key_path = <None>

# Specify list of share export helpers for ZFS storage. It should look
# like the following:
# 'FOO_protocol=foo.FooClass,BAR_protocol=bar.BarClass'. Required.
# (list value)
# (list value)
#zfs_share_helpers = NFS=manila.share.drivers.zfsonlinux.utils.NFSviaZFSHelper

# Set snapshot prefix for usage in ZFS replication. Required. (string
# value)
#zfs_replica_snapshot_prefix = tmp_snapshot_for_replication_

# Set snapshot prefix for usage in ZFS migration. Required. (string
# value)
#zfs_migration_snapshot_prefix = tmp_snapshot_for_share_migration_

# ZFSSA management IP address. (string value)
#zfssa_host = <None>

# IP address for data. (string value)
#zfssa_data_ip = <None>

# ZFSSA management authorized username. (string value)
#zfssa_auth_user = <None>

# ZFSSA management authorized user password. (string value)
#zfssa_auth_password = <None>

# ZFSSA storage pool name. (string value)
#zfssa_pool = <None>

# ZFSSA project name. (string value)
#zfssa_project = <None>

# Controls checksum used for data blocks. (string value)
#zfssa_nas_checksum = fletcher4

# Data compression: off, lzjb, gzip-2, gzip, gzip-9. (string value)
#zfssa_nas_compression = off

# Controls behavior when servicing synchronous writes. (string value)
#zfssa_nas_logbias = latency

# Location of project in ZFS/SA. (string value)
#zfssa_nas_mountpoint =

# Controls whether a share quota includes snapshots. (string value)
#zfssa_nas_quota_snap = true

# Controls whether file ownership can be changed. (string value)
#zfssa_nas_rstchown = true

# Controls whether the share is scanned for viruses. (string value)
#zfssa_nas_vscan = false

# REST connection timeout (in seconds). (string value)
#zfssa_rest_timeout = <None>

# Driver policy for share manage. A strict policy checks for a schema
# named manila_managed, and makes sure its value is true. A loose
# policy does not check for the schema. (string value)
# Allowed values: loose, strict
#zfssa_manage_policy = loose

# Whether to enable pre hooks or not. (boolean value)
# Deprecated group/name - [DEFAULT]/enable_pre_hooks
#enable_pre_hooks = false

# Whether to enable post hooks or not. (boolean value)
# Deprecated group/name - [DEFAULT]/enable_post_hooks
#enable_post_hooks = false

# Whether to enable periodic hooks or not. (boolean value)
# Deprecated group/name - [DEFAULT]/enable_periodic_hooks
#enable_periodic_hooks = false

# Whether to suppress pre hook errors (allowing the driver to perform
# actions) or not. (boolean value)
# Deprecated group/name - [DEFAULT]/suppress_pre_hooks_errors
#suppress_pre_hooks_errors = false

# Whether to suppress post hook errors (allowing the driver's results
# to pass through) or not. (boolean value)
# Deprecated group/name - [DEFAULT]/suppress_post_hooks_errors
#suppress_post_hooks_errors = false

# Interval in seconds between execution of periodic hooks. Used when
# option 'enable_periodic_hooks' is set to True. Default is 300.
# (floating point value)
# Deprecated group/name - [DEFAULT]/periodic_hooks_interval
#periodic_hooks_interval = 300.0

# Driver to use for share creation. (string value)
#share_driver = manila.share.drivers.generic.GenericShareDriver

# Driver(s) to perform some additional actions before and after share
# driver actions and on a periodic basis. Default is []. (list value)
# Deprecated group/name - [DEFAULT]/hook_drivers
#hook_drivers =
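
# Example (the driver path below is illustrative, not a shipped
# driver): to run a custom hook driver before and after share driver
# actions, a deployment might set:
#
# enable_pre_hooks = true
# enable_post_hooks = true
# hook_drivers = mypackage.hooks.MyHookDriver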

# Whether share servers will be deleted on deletion of the last share.
# (boolean value)
#delete_share_server_with_last_share = false

# If set to True, then manila will deny access and remove all access
# rules on share unmanage. If set to False, nothing will be changed.
# (boolean value)
#unmanage_remove_access_rules = false

# If set to True, then Manila will delete all share servers which were
# unused for more than the specified time. If set to False, automatic
# deletion of share servers will be disabled. (boolean value)
# Deprecated group/name - [DEFAULT]/automatic_share_server_cleanup
#automatic_share_server_cleanup = true

# Unallocated share servers reclamation time interval (minutes).
# Minimum value is 10 minutes, maximum is 60 minutes. The reclamation
# function runs every 10 minutes and deletes share servers that have
# been unused for longer than the unused_share_server_cleanup_interval
# option defines. This value reflects the shortest time Manila will
# wait for a share server to go unutilized before deleting it.
# (integer value)
# Minimum value: 10
# Maximum value: 60
# Deprecated group/name - [DEFAULT]/unused_share_server_cleanup_interval
#unused_share_server_cleanup_interval = 10

# This value, specified in seconds, determines how often the share
# manager will poll for the health (replica_state) of each replica
# instance. (integer value)
#replica_state_update_interval = 300

# This value, specified in seconds, determines how often the share
# manager will poll the driver to perform the next step of migration
# in the storage backend, for a migrating share. (integer value)
#migration_driver_continue_update_interval = 60

# The full class name of the Volume API class to use. (string value)
#volume_api_class = manila.volume.cinder.API

# Maximum line size of message headers to be accepted. Option
# max_header_line may need to be increased when using large tokens
# (typically those generated by the Keystone v3 API with big service
# catalogs). (integer value)
#max_header_line = 16384

# Timeout for client connections socket operations. If an incoming
# connection is idle for this number of seconds it will be closed. A
# value of '0' means wait forever. (integer value)
#client_socket_timeout = 900

# If False, closes the client socket connection explicitly. Set it to
# True to maintain backward compatibility. The recommended setting is
# False. (boolean value)
#wsgi_keep_alive = true

# Number of backlog requests to configure the socket with. (integer
# value)
#backlog = 4096

# Sets the value of TCP_KEEPALIVE (True/False) for each server socket.
# (boolean value)
#tcp_keepalive = true

# Sets the value of TCP_KEEPIDLE in seconds for each server socket.
# Not supported on OS X. (integer value)
#tcp_keepidle = 600

# Sets the value of TCP_KEEPINTVL in seconds for each server socket.
# Not supported on OS X. (integer value)
#tcp_keepalive_interval = <None>

# Sets the value of TCP_KEEPCNT for each server socket. Not supported
# on OS X. (integer value)
#tcp_keepalive_count = <None>

# CA certificate file to use to verify connecting clients. (string
# value)
#ssl_ca_file = <None>

# Certificate file to use when starting the server securely. (string
# value)
#ssl_cert_file = <None>

# Private key file to use when starting the server securely. (string
# value)
#ssl_key_file = <None>
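
# Example (hypothetical paths): to serve the API over TLS, point the
# options above at a certificate/key pair:
#
# ssl_cert_file = /etc/manila/ssl/server.crt
# ssl_key_file = /etc/manila/ssl/server.key
# ssl_ca_file = /etc/manila/ssl/ca.crt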

# If set to true, the logging level will be set to DEBUG instead of
# the default INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to
# WARNING instead of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to
# any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation.
# Note that when logging configuration files are used then all logging
# configuration is set in the configuration file and other logging
# configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default
# is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Uses a logging handler designed to watch the file system. When the
# log file is moved or removed, this handler opens a new log file at
# the specified path instantaneously. It makes sense only if the
# log_file option is specified and the platform is Linux. This option
# is ignored if log_config_append is set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and
# will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if
# log_config_append is set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined.
# (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the
# message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string
# value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is
# ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message.
# (string value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message.
# (string value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer
# value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer
# value)
#conn_pool_ttl = 1200

# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve to this
# address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic.
# Default is unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address.
# Must match "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no
# linger period. Pending messages shall be discarded immediately when
# the socket is closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises
# timeout exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an
# existing target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing
# target. (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy.
# (boolean value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True
# means no queue is kept when the server side disconnects. False means
# the queue and messages are kept even if the server is disconnected;
# when the server reappears, all accumulated messages are sent to it.
# (boolean value)
#zmq_immediate = false

# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# A URL representing the messaging driver to use and its full
# configuration. (string value)
#transport_url = <None>

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other
# drivers include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be
# overridden by an exchange name specified in the transport_url
# option. (string value)
#control_exchange = openstack
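
# Example (placeholder credentials and host): a transport_url for a
# single RabbitMQ host takes the form
# rabbit://<user>:<password>@<host>:<port>/<virtual_host>, e.g.:
#
# transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/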


[cinder]

#
# From manila
#

# Allow attaching between instances and volumes in different
# availability zones. (boolean value)
# Deprecated group/name - [DEFAULT]/cinder_cross_az_attach
#cross_az_attach = true

# Location of CA certificates file to use for cinder client requests.
# (string value)
# Deprecated group/name - [DEFAULT]/cinder_ca_certificates_file
#ca_certificates_file = <None>

# Number of cinderclient retries on failed HTTP calls. (integer value)
# Deprecated group/name - [DEFAULT]/cinder_http_retries
#http_retries = 3

# Allow to perform insecure SSL requests to cinder. (boolean value)
# Deprecated group/name - [DEFAULT]/cinder_api_insecure
#api_insecure = false

# Authentication URL (string value)
#auth_url = <None>

# Authentication type to load (string value)
# Deprecated group/name - [cinder]/auth_plugin
#auth_type = <None>

# PEM encoded Certificate Authority to use when verifying HTTPs
# connections. (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will
# be used for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_name = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# User's password (string value)
#password = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Project ID to scope to (string value)
# Deprecated group/name - [cinder]/tenant-id
#project_id = <None>

# Project name to scope to (string value)
# Deprecated group/name - [cinder]/tenant-name
#project_name = <None>

# Timeout value for http requests (integer value)
#timeout = <None>

# Trust ID (string value)
#trust_id = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User id (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [cinder]/user-name
#username = <None>
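
# Example (hypothetical endpoint and credentials): a typical Identity
# v3 password-auth setup for this section looks like:
#
# auth_type = password
# auth_url = http://controller:5000/v3
# username = manila
# password = MANILA_PASS
# project_name = service
# user_domain_name = Default
# project_domain_name = Default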


[cors]

#
# From manila
#

# Indicate whether this resource may be shared with the domain
# received in the requests "origin" header. Format:
# "<protocol>://<host>[:<port>]", no trailing slash. Example:
# https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials
# (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to
# HTTP Simple Headers. (list value)
#expose_headers =

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list
# value)
#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH

# Indicate which header field names may be used during the actual
# request. (list value)
#allow_headers =

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain
# received in the requests "origin" header. Format:
# "<protocol>://<host>[:<port>]", no trailing slash. Example:
# https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials
# (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to
# HTTP Simple Headers. (list value)
#expose_headers = X-Auth-Token,X-OpenStack-Request-ID,X-Openstack-Manila-Api-Version,X-OpenStack-Manila-API-Experimental,X-Subject-Token,X-Service-Token

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list
# value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual
# request. (list value)
#allow_headers = X-Auth-Token,X-OpenStack-Request-ID,X-Openstack-Manila-Api-Version,X-OpenStack-Manila-API-Experimental,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id
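
# Example: to allow a dashboard at a single origin to make CORS
# requests, set allowed_origin (no trailing slash):
#
# allowed_origin = https://horizon.example.com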


[cors.subdomain]

#
# From manila
#

# Indicate whether this resource may be shared with the domain
# received in the requests "origin" header. Format:
# "<protocol>://<host>[:<port>]", no trailing slash. Example:
# https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials
# (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to
# HTTP Simple Headers. (list value)
#expose_headers =

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list
# value)
#allow_methods = OPTIONS,GET,HEAD,POST,PUT,DELETE,TRACE,PATCH

# Indicate which header field names may be used during the actual
# request. (list value)
#allow_headers =

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain
# received in the requests "origin" header. Format:
# "<protocol>://<host>[:<port>]", no trailing slash. Example:
# https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials
# (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to
# HTTP Simple Headers. (list value)
#expose_headers = X-Auth-Token,X-OpenStack-Request-ID,X-Openstack-Manila-Api-Version,X-OpenStack-Manila-API-Experimental,X-Subject-Token,X-Service-Token

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list
# value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual
# request. (list value)
#allow_headers = X-Auth-Token,X-OpenStack-Request-ID,X-Openstack-Manila-Api-Version,X-OpenStack-Manila-API-Experimental,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id


[database]

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to
# connect the database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database.
# (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>

# The SQLAlchemy connection string to use to connect to the slave
# database. (string value)
#slave_connection = <None>
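
# Example (placeholder password and host): a MySQL connection string
# using the PyMySQL driver looks like:
#
# connection = mysql+pymysql://manila:MANILA_DBPASS@controller/manila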

# The SQL mode to be used for MySQL sessions. This option, including
# the default, overrides any server-set SQL mode. To use whatever SQL
# mode is set by the server configuration, set this to no value.
# Example: mysql_sql_mode= (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a
# value of 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to
# -1 to specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything.
# (integer value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer
# value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection
# lost. (boolean value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database
# operation up to db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries
# of a database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before
# error is raised. Set to -1 to specify an infinite retry count.
# (integer value)
#db_max_retries = 20

#
# From oslo.db.concurrency
#

# Enable the experimental use of thread pooling for all DB API calls
# (boolean value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be
# an "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to
# authenticate. Although this endpoint should ideally be unversioned,
# client support in the wild varies. If you're using a versioned v2
# endpoint here, then this should *not* be the same endpoint the
# service user utilizes for validating tokens, because normal end
# users may not be able to reach that endpoint. (string value)
#auth_uri = <None>

# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>

# Do not handle authorization requests within the middleware, but
# delegate the authorization decision to downstream WSGI components.
# (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server.
# (integer value)
#http_connect_timeout = <None>

# Number of times to retry when communicating with the Identity API
# server. (integer value)

# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this
# option to have the middleware share a caching backend with swift.
# Otherwise, use the ``memcached_servers`` option instead. (string
# value)
#cache = <None>

# Required if identity server requires client certificate (string
# value)
#certfile = <None>

# Required if identity server requires client certificate (string
# value)
#keyfile = <None>

# A PEM encoded Certificate Authority to use when verifying HTTPs
# connections. Defaults to system CAs. (string value)
#cafile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name = <None>

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>

# Optionally specify a list of memcached server(s) to use for caching.
# If left undefined, tokens will instead be cached in-process. (list
# value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>

# In order to prevent excessive effort spent validating tokens, the
# middleware caches previously-seen tokens for a configurable duration
# (in seconds). Set to -1 to disable caching completely. (integer
# value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is
# retrieved from the Identity service (in seconds). A high number of
# revocation events combined with a low cache duration may
# significantly reduce performance. Only valid for PKI tokens.
# (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be
# authenticated or authenticated and encrypted. If MAC, token data is
# authenticated (with HMAC) in the cache. If ENCRYPT, token data is
# encrypted and authenticated in the cache. If the value is not one of
# these options or empty, auth_token will raise an exception on
# initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None

# (Optional, mandatory if memcache_security_strategy is defined) This
# string is used for key derivation. (string value)
#memcache_secret_key = <None>

# (Optional) Number of seconds memcached server is considered dead
# before it is tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every
# memcached server. (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a
# memcached server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held
# unused in the pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a
# memcached client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool.
# The advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If
# False, middleware will not ask for service catalog on token
# validation and will not set the X-Service-Catalog header. (boolean
# value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to:
# "disabled" to not check token binding. "permissive" (default) to
# validate binding information if the bind type is of a form known to
# the server and ignore it if not. "strict" like "permissive" but if
# the bind type is unknown the token will be rejected. "required" any
# form of token binding is needed to be allowed. Finally the name of a
# binding method that must be present in tokens. (string value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This
# requires that PKI tokens are configured on the identity server.
# (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single
# algorithm or multiple. The algorithms are those supported by Python
# standard hashlib.new(). The hashes will be tried in the order given,
# so put the preferred one first for performance. The result of the
# first hash will be stored in the cache. This will typically be set
# to multiple values only while migrating from a less secure algorithm
# to a more secure one. Once all the old tokens are expired this
# option should be set to a single value for better performance. (list
# value)
#hash_algorithms = md5

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string
# value)
#auth_section = <None>
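
# Example (hypothetical endpoint and servers): a minimal setup
# pointing the middleware at a Keystone endpoint, with memcached
# token caching:
#
# auth_uri = http://controller:5000
# memcached_servers = controller:11211
# auth_type = password
#
# Options specific to the loaded auth plugin (such as credentials)
# are read from the section named by auth_section, or from this
# section by default.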


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000


[neutron]

#
# From manila
#

# URL for connecting to neutron. (string value)
# Deprecated group/name - [DEFAULT]/neutron_url
#url = http://127.0.0.1:9696

# Timeout value for connecting to neutron in seconds. (integer value)
# Deprecated group/name - [DEFAULT]/neutron_url_timeout
#url_timeout = 30

# If set, ignore any SSL validation issues. (boolean value)
# Deprecated group/name - [DEFAULT]/api_insecure
#api_insecure = false

# Auth strategy for connecting to neutron in admin context. (string
# value)
# Deprecated group/name - [DEFAULT]/auth_strategy
#auth_strategy = keystone

# DEPRECATED: Location of CA certificates file to use for neutron
# client requests. (string value)
# Deprecated group/name - [DEFAULT]/ca_certificates_file
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#ca_certificates_file = <None>

# Region name for connecting to neutron in admin context (string
# value)
#region_name = <None>

# Authentication URL (string value)
#auth_url = <None>

# Authentication type to load (string value)
# Deprecated group/name - [neutron]/auth_plugin
#auth_type = <None>

# PEM encoded Certificate Authority to use when verifying HTTPs
# connections. (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will
# be used for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_name = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# User's password (string value)
#password = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Project ID to scope to (string value)
# Deprecated group/name - [neutron]/tenant-id
#project_id = <None>

# Project name to scope to (string value)
# Deprecated group/name - [neutron]/tenant-name
#project_name = <None>

# Timeout value for http requests (integer value)
#timeout = <None>

# Trust ID (string value)
#trust_id = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User id (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [neutron]/user-name
#username = <None>
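
For deployments that authenticate to neutron through keystone v3, the options above are typically combined as follows. This is a minimal illustrative sketch; the endpoint, credentials, and project names are placeholders for your deployment, not defaults:

```ini
[neutron]
# Keystone v3 authentication (all values below are example placeholders)
auth_type = password
auth_url = http://controller:5000/v3
username = manila
password = MANILA_PASS
project_name = service
user_domain_name = Default
project_domain_name = Default
region_name = RegionOne
```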


[nova]

#
# From manila
#

# Version of Nova API to be used. (string value)
# Deprecated group/name - [DEFAULT]/nova_api_microversion
#api_microversion = 2.10

# Location of CA certificates file to use for nova client requests.
# (string value)
# Deprecated group/name - [DEFAULT]/nova_ca_certificates_file
#ca_certificates_file = <None>

# Allow to perform insecure SSL requests to nova. (boolean value)
# Deprecated group/name - [DEFAULT]/nova_api_insecure
#api_insecure = false

# Authentication URL (string value)
#auth_url = <None>

# Authentication type to load (string value)
# Deprecated group/name - [nova]/auth_plugin
#auth_type = <None>

# PEM encoded Certificate Authority to use when verifying HTTPS
# connections. (string value)
#cafile = <None>

# PEM encoded client certificate cert file (string value)
#certfile = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will
# be used for both the user and project domain in v3 and ignored in v2
# authentication. (string value)
#default_domain_name = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# PEM encoded client certificate key file (string value)
#keyfile = <None>

# User's password (string value)
#password = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Project ID to scope to (string value)
# Deprecated group/name - [nova]/tenant-id
#project_id = <None>

# Project name to scope to (string value)
# Deprecated group/name - [nova]/tenant-name
#project_name = <None>

# Timeout value for http requests (integer value)
#timeout = <None>

# Trust ID (string value)
#trust_id = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User id (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [nova]/user-name
#username = <None>


[oslo_concurrency]

#
# From manila
#

# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false

# Directory to use for lock files.  For security, the specified
# directory should only be writable by the user running the processes
# that need locking. Defaults to environment variable OSLO_LOCK_PATH.
# If external locks are used, a lock path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>
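
When external locks are used, a deployment typically points lock_path at a directory writable only by the user running the manila processes. The path below is an example, not a default:

```ini
[oslo_concurrency]
# Example only: any directory writable solely by the manila service
# user is suitable
lock_path = /var/lib/manila/tmp
```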


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false

# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =

# Identifying certificate PEM file to present to clients (string
# value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string
# value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>

# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string
# value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =

# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =

# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after
# each unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval +
# connection_retry_backoff (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due
# to a recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The deadline for an rpc reply message delivery. Only used when
# caller does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used
# when caller does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used
# when caller does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not
# support routing otherwise use routable addressing (string value)
#addressing_mode = dynamic

# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string
# value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used
# by the message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular
# RPC/Notification server. Used by the message bus to identify
# messages sent to a single destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers.
# Used by the message bus to identify messages that should be
# delivered in a round-robin fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
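
The exchange-name resolution precedence documented above for default_notification_exchange and default_rpc_exchange amounts to a first-non-empty lookup. The sketch below illustrates that documented precedence; it is a simplification, not the oslo.messaging implementation:

```python
def resolve_exchange(target_exchange=None, default_exchange=None,
                     control_exchange=None, fallback='notify'):
    """Return the first exchange name that is set, per the documented
    precedence: Target.exchange, then the default_*_exchange option,
    then control_exchange, then the hard-coded fallback."""
    for name in (target_exchange, default_exchange, control_exchange):
        if name:
            return name
    return fallback

# For notifications the fallback is 'notify'; for RPC it is 'rpc'.
print(resolve_exchange(control_exchange='openstack'))  # openstack
print(resolve_exchange(fallback='rpc'))                # rpc
```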


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop. (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If
# not set, we fall back to the same configuration used for RPC.
# (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
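
With no driver set, notifications are not emitted at all, so most deployments that consume notifications set at least a driver. A minimal example (the topic name is illustrative):

```ini
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
```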


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false

# SSL version to use (valid only if SSL enabled). Valid values are
# TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be
# available on some distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled).
# (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =

# How long to wait before reconnecting in response to an AMQP consumer
# cancel notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression
# will not be used. This option may not be available in future
# versions. (string value)
#kombu_compression = <None>

# How long to wait for a missing client before abandoning the attempt
# to send it its replies. This value should not be longer than
# rpc_response_timeout. (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we
# are currently connected to becomes unavailable. Takes effect only if
# more than one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used.
# (string value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used.
# (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false

# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ.
# (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30
# seconds. (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default
# is 0 (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change
# this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0,
# queue mirroring is no longer controlled by the x-ha-policy argument
# when declaring a queue. If you just want to make sure that all
# queues (except those with auto-generated names) are mirrored across
# all nodes, run: "rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-
# mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL
# (x-expires). Queues which are unused for the duration of the TTL are
# automatically deleted. The parameter affects only reply and fanout
# queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the RabbitMQ broker is considered down
# if the heartbeat keep-alive fails (0 disables the heartbeat).
# EXPERIMENTAL (integer value)
#heartbeat_timeout_threshold = 60

# How many times per heartbeat_timeout_threshold interval the
# heartbeat is checked. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
# (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max = <None>

# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>

# How often to send heartbeats for consumer's connections (integer
# value)
#heartbeat_interval = 3

# Enable SSL (boolean value)
#ssl = <None>

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>

# Set socket timeout in seconds for connection's socket (floating
# point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating
# point value)
#tcp_user_timeout = 0.25

# Set the delay for reconnecting to a host that has a connection
# error. (floating point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single

# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`.
# (integer value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become
# available. (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer
# value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are
# considered stale in seconds or None for no staleness. Stale
# connections are closed on acquire. (integer value)
#pool_stale = 60

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Maximum number of unacknowledged messages that RabbitMQ can send to
# the notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Number of reconnection retries in case of a connectivity problem
# while sending a notification; -1 means infinite retry. (integer
# value)
#default_notification_retry_attempts = -1

# Delay between reconnection retries in case of a connectivity problem
# while sending a notification message. (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer
# value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Maximum number of unacknowledged messages that RabbitMQ can send to
# the RPC listener. (integer value)
#rpc_listener_prefetch_count = 100

# Maximum number of unacknowledged messages that RabbitMQ can send to
# the RPC reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Number of reconnection retries in case of a connectivity problem
# while sending a reply; -1 means infinite retry within rpc_timeout.
# (integer value)
#rpc_reply_retry_attempts = -1

# Delay between reconnection retries in case of a connectivity problem
# while sending a reply. (floating point value)
#rpc_reply_retry_delay = 0.25

# Number of reconnection retries in case of a connectivity problem
# while sending an RPC message; -1 means infinite retry. If the actual
# number of retry attempts is not 0, the RPC request could be
# processed more than once. (integer value)
#default_rpc_retry_attempts = -1

# Delay between reconnection retries in case of a connectivity problem
# while sending an RPC message. (floating point value)
#rpc_retry_delay = 0.25
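
The interplay of rabbit_retry_interval, rabbit_retry_backoff, and rabbit_interval_max produces a linear backoff capped at the maximum. The sketch below illustrates the resulting delay sequence under that reading of the options; it is illustrative, not the oslo.messaging implementation:

```python
def retry_delays(interval=1, backoff=2, interval_max=30, attempts=8):
    """Delay (in seconds) before each successive reconnection attempt:
    start at `interval`, add `backoff` per attempt, cap at
    `interval_max`."""
    delays = []
    delay = interval
    for _ in range(attempts):
        delays.append(min(delay, interval_max))
        delay += backoff
    return delays

print(retry_delays())  # [1, 3, 5, 7, 9, 11, 13, 15]
```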


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve to this
# address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic.
# Default is unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address.
# Must match "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no
# linger period. Pending messages shall be discarded immediately when
# the socket is closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises
# timeout exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an
# existing target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing
# target. (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses a
# proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True
# means the queue is not kept when the server side disconnects. False
# means the queue and messages are kept even if the server is
# disconnected; when the server reappears, all accumulated messages
# are sent to it. (boolean value)
#zmq_immediate = false


[oslo_middleware]

#
# From manila
#

# The maximum body size for each request, in bytes. (integer value)
# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
# Deprecated group/name - [DEFAULT]/max_request_body_size
#max_request_body_size = 114688

# DEPRECATED: The HTTP header that will be used to determine the
# original request protocol scheme, even if it was hidden by an SSL
# termination proxy. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#secure_proxy_ssl_header = X-Forwarded-Proto

# Whether the application is behind a proxy or not. This determines if
# the middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false

#
# From oslo.middleware.http_proxy_to_wsgi
#

# Whether the application is behind a proxy or not. This determines if
# the middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false


[oslo_policy]

#
# From manila
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string
# value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be
# relative to any directory in the search path defined by the
# config_dir option, or absolute paths. The file defined by
# policy_file must exist for these directories to be searched.
# Missing or empty directories are ignored. (multi valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
policy.json

The policy.json file defines additional access controls that apply to the Shared File Systems service.

{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",

    "admin_api": "is_admin:True",

    "availability_zone:index": "rule:default",

    "quota_set:update": "rule:admin_api",
    "quota_set:show": "rule:default",
    "quota_set:delete": "rule:admin_api",

    "quota_class_set:show": "rule:default",
    "quota_class_set:update": "rule:admin_api",

    "service:index": "rule:admin_api",
    "service:update": "rule:admin_api",

    "share:create": "",
    "share:delete": "rule:default",
    "share:get": "rule:default",
    "share:get_all": "rule:default",
    "share:list_by_share_server_id": "rule:admin_api",
    "share:update": "rule:default",
    "share:access_get": "rule:default",
    "share:access_get_all": "rule:default",
    "share:allow_access": "rule:default",
    "share:deny_access": "rule:default",
    "share:extend": "rule:default",
    "share:shrink": "rule:default",
    "share:get_share_metadata": "rule:default",
    "share:delete_share_metadata": "rule:default",
    "share:update_share_metadata": "rule:default",
    "share:migration_start": "rule:admin_api",
    "share:migration_complete": "rule:admin_api",
    "share:migration_cancel": "rule:admin_api",
    "share:migration_get_progress": "rule:admin_api",
    "share:reset_task_state": "rule:admin_api",
    "share:manage": "rule:admin_api",
    "share:unmanage": "rule:admin_api",
    "share:force_delete": "rule:admin_api",
    "share:reset_status": "rule:admin_api",
    "share_export_location:index": "rule:default",
    "share_export_location:show": "rule:default",

    "share_instance:index": "rule:admin_api",
    "share_instance:show": "rule:admin_api",
    "share_instance:force_delete": "rule:admin_api",
    "share_instance:reset_status": "rule:admin_api",
    "share_instance_export_location:index": "rule:admin_api",
    "share_instance_export_location:show": "rule:admin_api",

    "share_snapshot:create_snapshot": "rule:default",
    "share_snapshot:delete_snapshot": "rule:default",
    "share_snapshot:get_snapshot": "rule:default",
    "share_snapshot:get_all_snapshots": "rule:default",
    "share_snapshot:snapshot_update": "rule:default",
    "share_snapshot:manage_snapshot": "rule:admin_api",
    "share_snapshot:unmanage_snapshot": "rule:admin_api",
    "share_snapshot:force_delete": "rule:admin_api",
    "share_snapshot:reset_status": "rule:admin_api",

    "share_snapshot_instance:detail": "rule:admin_api",
    "share_snapshot_instance:index": "rule:admin_api",
    "share_snapshot_instance:show": "rule:admin_api",
    "share_snapshot_instance:reset_status": "rule:admin_api",

    "share_type:index": "rule:default",
    "share_type:show": "rule:default",
    "share_type:default": "rule:default",
    "share_type:create": "rule:admin_api",
    "share_type:delete": "rule:admin_api",
    "share_type:add_project_access": "rule:admin_api",
    "share_type:list_project_access": "rule:admin_api",
    "share_type:remove_project_access": "rule:admin_api",

    "share_types_extra_spec:create": "rule:admin_api",
    "share_types_extra_spec:update": "rule:admin_api",
    "share_types_extra_spec:show": "rule:admin_api",
    "share_types_extra_spec:index": "rule:admin_api",
    "share_types_extra_spec:delete": "rule:admin_api",

    "security_service:create": "rule:default",
    "security_service:delete": "rule:default",
    "security_service:update": "rule:default",
    "security_service:show": "rule:default",
    "security_service:index": "rule:default",
    "security_service:detail": "rule:default",
    "security_service:get_all_security_services": "rule:admin_api",

    "share_server:index": "rule:admin_api",
    "share_server:show": "rule:admin_api",
    "share_server:details": "rule:admin_api",
    "share_server:delete": "rule:admin_api",

    "share_network:create": "rule:default",
    "share_network:delete": "rule:default",
    "share_network:update": "rule:default",
    "share_network:index": "rule:default",
    "share_network:detail": "rule:default",
    "share_network:show": "rule:default",
    "share_network:add_security_service": "rule:default",
    "share_network:remove_security_service": "rule:default",
    "share_network:get_all_share_networks": "rule:admin_api",

    "scheduler_stats:pools:index": "rule:admin_api",
    "scheduler_stats:pools:detail": "rule:admin_api",

    "consistency_group:create" : "rule:default",
    "consistency_group:delete": "rule:default",
    "consistency_group:update": "rule:default",
    "consistency_group:get": "rule:default",
    "consistency_group:get_all": "rule:default",
    "consistency_group:force_delete": "rule:admin_api",
    "consistency_group:reset_status": "rule:admin_api",

    "cgsnapshot:force_delete": "rule:admin_api",
    "cgsnapshot:reset_status": "rule:admin_api",
    "cgsnapshot:create" : "rule:default",
    "cgsnapshot:update" : "rule:default",
    "cgsnapshot:delete": "rule:default",
    "cgsnapshot:get_cgsnapshot": "rule:default",
    "cgsnapshot:get_all": "rule:default",

    "share_replica:get_all": "rule:default",
    "share_replica:show": "rule:default",
    "share_replica:create" : "rule:default",
    "share_replica:delete": "rule:default",
    "share_replica:promote": "rule:default",
    "share_replica:resync": "rule:admin_api",
    "share_replica:reset_status": "rule:admin_api",
    "share_replica:force_delete": "rule:admin_api",
    "share_replica:reset_replica_state": "rule:admin_api"
}
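
Rules compose by reference: "default" expands to admin_or_owner, which grants access when the caller is an admin or owns the target project. A pure-Python illustration of how that one rule evaluates; this is a deliberate simplification of oslo.policy, and the dictionary keys are assumptions made for the example:

```python
def admin_or_owner(creds, target):
    """Simplified evaluation of:
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s"
    """
    return bool(creds.get('is_admin')) or \
        creds.get('project_id') == target.get('project_id')

print(admin_or_owner({'is_admin': True}, {'project_id': 'p1'}))    # True
print(admin_or_owner({'project_id': 'p1'}, {'project_id': 'p1'}))  # True
print(admin_or_owner({'project_id': 'p2'}, {'project_id': 'p1'}))  # False
```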
rootwrap.conf

The rootwrap.conf file defines configuration values used by the rootwrap script when the Shared File Systems service must escalate its privileges to those of the root user.

# Configuration for manila-rootwrap
# This file should be owned by (and only writable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be writable only by root!
filters_path=/etc/manila/rootwrap.d,/usr/share/manila/rootwrap

# List of directories to search for executables, in case filters do
# not explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be writable only by root!
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/sbin,/usr/local/bin,/usr/lpp/mmfs/bin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
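
Rootwrap is normally wired up through sudo, so the service user can run only the wrapper as root. A typical sudoers entry looks like the following; the service user name and binary path are examples and may vary by distribution:

```
manila ALL = (root) NOPASSWD: /usr/bin/manila-rootwrap /etc/manila/rootwrap.conf *
```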

New, updated, and deprecated options in Newton for Shared File Systems service

New options
Option = default value (Type) Help string
[DEFAULT] check_hash = False (BoolOpt) Chooses whether the hash of each file should be checked during data copying.
[DEFAULT] container_volume_group = manila_docker_volumes (StrOpt) LVM volume group to use for volumes. This volume group must be created by the cloud administrator independently from manila operations.
[DEFAULT] data_node_access_admin_user = None (StrOpt) The admin user name registered in the security service in order to allow access to user authentication-based shares.
[DEFAULT] data_node_mount_options = {} (DictOpt) Mount options to be included in the mount command for share protocols. Use dictionary format, example: {‘nfs’: ‘-o nfsvers=3’, ‘cifs’: ‘-o user=foo,pass=bar’}
[DEFAULT] emc_interface_ports = None (ListOpt) Comma separated list specifying the ports that can be used for share server interfaces. Members of the list can be Unix-style glob expressions.
[DEFAULT] emc_nas_server_pool = None (StrOpt) Pool to persist the meta-data of NAS server.
[DEFAULT] filter_function = None (StrOpt) String representation for an equation that will be used to filter hosts.
[DEFAULT] goodness_function = None (StrOpt) String representation for an equation that will be used to determine the goodness of a host.
[DEFAULT] hitachi_hnas_allow_cifs_snapshot_while_mounted = False (BoolOpt) By default, CIFS snapshots are not allowed to be taken when the share has clients connected because consistent point-in-time replica cannot be guaranteed for all files. Enabling this might cause inconsistent snapshots on CIFS shares.
[DEFAULT] hitachi_hnas_cluster_admin_ip0 = None (StrOpt) The IP of the cluster's admin node. Only set in HNAS multinode clusters.
[DEFAULT] hitachi_hnas_driver_helper = manila.share.drivers.hitachi.hnas.ssh.HNASSSHBackend (StrOpt) Python class to be used for driver helper.
[DEFAULT] hitachi_hnas_evs_id = None (IntOpt) Specify which EVS this backend is assigned to.
[DEFAULT] hitachi_hnas_evs_ip = None (StrOpt) Specify IP for mounting shares.
[DEFAULT] hitachi_hnas_file_system_name = None (StrOpt) Specify file-system name for creating shares.
[DEFAULT] hitachi_hnas_ip = None (StrOpt) HNAS management interface IP for communication between Manila controller and HNAS.
[DEFAULT] hitachi_hnas_password = None (StrOpt) HNAS user password. Required only if private key is not provided.
[DEFAULT] hitachi_hnas_ssh_private_key = None (StrOpt) RSA/DSA private key value used to connect into HNAS. Required only if password is not provided.
[DEFAULT] hitachi_hnas_stalled_job_timeout = 30 (IntOpt) The time (in seconds) to wait for stalled HNAS jobs before aborting.
[DEFAULT] hitachi_hnas_user = None (StrOpt) HNAS user name (Base64 string) used to perform tasks such as creating file systems and network interfaces.
[DEFAULT] is_gpfs_node = False (BoolOpt) Set to True when Manila services are running on one of the Spectrum Scale nodes, and to False when they are not.
[DEFAULT] migration_driver_continue_update_interval = 60 (IntOpt) This value, specified in seconds, determines how often the share manager will poll the driver to perform the next step of migration in the storage backend, for a migrating share.
[DEFAULT] mount_tmp_location = /tmp/ (StrOpt) Temporary path to create and mount shares during migration.
[DEFAULT] netapp_enabled_share_protocols = nfs3, nfs4.0 (ListOpt) The NFS protocol versions that will be enabled. Supported values include nfs3, nfs4.0, nfs4.1. This option only applies when the option driver_handles_share_servers is set to True.
[DEFAULT] protocol_access_mapping = {'ip': ['nfs'], 'user': ['cifs']} (DictOpt) Protocol access mapping for this backend. Should be a dictionary comprised of {‘access_type1’: [‘share_proto1’, ‘share_proto2’], ‘access_type2’: [‘share_proto2’, ‘share_proto3’]}.
[DEFAULT] zfs_migration_snapshot_prefix = tmp_snapshot_for_share_migration_ (StrOpt) Set snapshot prefix for usage in ZFS migration. Required.
[DEFAULT] zfssa_manage_policy = loose (StrOpt) Driver policy for share manage. A strict policy checks for a schema named manila_managed, and makes sure its value is true. A loose policy does not check for the schema.
New default values
Option Previous default value New default value
[DEFAULT] emc_nas_server_container server_2 None
[DEFAULT] gpfs_share_helpers KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper, GNFS=manila.share.drivers.ibm.gpfs.GNFSHelper KNFS=manila.share.drivers.ibm.gpfs.KNFSHelper, CES=manila.share.drivers.ibm.gpfs.CESHelper
[DEFAULT] host localhost <your_hostname>
[DEFAULT] hpe3par_fpg OpenStack None
[DEFAULT] my_ip 10.0.0.1 <your_ip>
[DEFAULT] scheduler_default_filters AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter, ConsistencyGroupFilter, ShareReplicationFilter AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter, ConsistencyGroupFilter, DriverFilter, ShareReplicationFilter
[DEFAULT] scheduler_default_weighers CapacityWeigher CapacityWeigher, GoodnessWeigher
[DEFAULT] share_mount_template mount -vt %(proto)s %(export)s %(path)s mount -vt %(proto)s %(options)s %(export)s %(path)s
Deprecated options
Deprecated option New Option
[DEFAULT] db_backend [database] backend
[DEFAULT] hds_hnas_driver_helper [DEFAULT] hitachi_hnas_driver_helper
[DEFAULT] hp3par_share_mount_path [DEFAULT] hpe3par_share_mount_path
[DEFAULT] migration_tmp_location [DEFAULT] mount_tmp_location
[DEFAULT] sql_idle_timeout [database] idle_timeout
[DEFAULT] sql_max_retries [database] max_retries
[DEFAULT] sql_retry_interval [database] retry_interval
[DEFAULT] use_syslog None

You can configure the Shared File Systems service to work with many different drivers by using these instructions.

Note

The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.

Telemetry service

Telemetry configuration options

The following tables provide a comprehensive list of the Telemetry configuration options.

Description of API configuration options
Configuration option = Default value Description
[DEFAULT]  
api_paste_config = api_paste.ini (String) Configuration file for WSGI definition of API.
event_pipeline_cfg_file = event_pipeline.yaml (String) Configuration file for event pipeline definition.
pipeline_cfg_file = pipeline.yaml (String) Configuration file for pipeline definition.
pipeline_polling_interval = 20 (Integer) Polling interval for pipeline file configuration in seconds.
refresh_event_pipeline_cfg = False (Boolean) Refresh Event Pipeline configuration on-the-fly.
refresh_pipeline_cfg = False (Boolean) Refresh Pipeline configuration on-the-fly.
reserved_metadata_keys = (List) List of metadata keys reserved for metering use. These keys are in addition to the ones included in the namespace.
reserved_metadata_length = 256 (Integer) Limit on length of reserved metadata values.
reserved_metadata_namespace = metering. (List) List of metadata prefixes reserved for metering use.
[api]  
aodh_is_enabled = None (Boolean) Set True to redirect alarms URLs to aodh. Default autodetection by querying keystone.
aodh_url = None (String) The endpoint of Aodh to redirect alarms URLs to Aodh API. Default autodetection by querying keystone.
default_api_return_limit = 100 (Integer) Default maximum number of items returned by API request.
gnocchi_is_enabled = None (Boolean) Set True to disable resource/meter/sample URLs. Default autodetection by querying keystone.
panko_is_enabled = None (Boolean) Set True to redirect events URLs to Panko. Default autodetection by querying keystone.
panko_url = None (String) The endpoint of Panko to redirect events URLs to Panko API. Default autodetection by querying keystone.
pecan_debug = False (Boolean) Toggle Pecan Debug Middleware.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
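For example, to reload pipeline definitions on the fly rather than restarting the service, the refresh and polling options above can be combined in ceilometer.conf (the 60-second interval is illustrative):

[DEFAULT]
refresh_pipeline_cfg = True
refresh_event_pipeline_cfg = True
pipeline_polling_interval = 60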
Description of authorization configuration options
Configuration option = Default value Description
[service_credentials]  
auth_section = None (Unknown) Config Section from which to load plugin specific options
auth_type = None (Unknown) Authentication type to load
cafile = None (String) PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile = None (String) PEM encoded client certificate cert file
insecure = False (Boolean) If True, HTTPS connections are not verified.
interface = public (String) Type of endpoint in Identity service catalog to use for communication with OpenStack services.
keyfile = None (String) PEM encoded client certificate key file
region_name = None (String) Region name to use for OpenStack service endpoints.
timeout = None (Integer) Timeout value for http requests
Description of collector configuration options
Configuration option = Default value Description
[collector]  
batch_size = 1 (Integer) Number of notification messages to wait before dispatching them
batch_timeout = None (Integer) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely)
udp_address = 0.0.0.0 (String) Address to which the UDP socket is bound. Set to an empty string to disable.
udp_port = 4952 (Port number) Port to which the UDP socket is bound.
workers = 1 (Integer) Number of workers for collector service. Default value is 1.
[dispatcher_file]  
backup_count = 0 (Integer) The max number of the files to keep.
file_path = None (String) Name and the location of the file to record meters.
max_bytes = 0 (Integer) The max size of the file.
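As a sketch, a file dispatcher that records meters to a rotated log file could be configured as follows. The path and sizes are illustrative; rotation takes effect only when max_bytes and backup_count are non-zero, and the dispatcher itself is typically enabled through meter_dispatchers in the DEFAULT section:

[DEFAULT]
meter_dispatchers = file

[dispatcher_file]
file_path = /var/log/ceilometer/meters.log
max_bytes = 10000000
backup_count = 5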
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
batch_polled_samples = True (Boolean) To reduce polling agent load, samples are sent to the notification agent in a batch. To gain higher throughput at the cost of load set this to False.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
host = <your_hostname> (String) Name of this node, which must be valid in an AMQP key. Can be an opaque identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address.
http_timeout = 600 (Integer) Timeout seconds for HTTP requests. Set it to None to disable timeout.
polling_namespaces = ['compute', 'central'] (Unknown) Polling namespace(s) to be used while resource polling
pollster_list = [] (Unknown) List of pollsters (or wildcard templates) to be used while polling
rootwrap_config = /etc/ceilometer/rootwrap.conf (String) Path to the rootwrap configuration file to use for running commands as root
shuffle_time_before_polling_task = 0 (Integer) To reduce large requests at same time to Nova or other components from different compute agents, shuffle start time of polling task.
[compute]  
resource_update_interval = 0 (Integer) New instances will be discovered periodically based on this option (in seconds). By default, the agent discovers instances according to pipeline polling interval. If option is greater than 0, the instance list to poll will be updated based on this option’s interval. Measurements relating to the instances will match intervals defined in pipeline.
workload_partitioning = False (Boolean) Enable work-load partitioning, allowing multiple compute agents to be run simultaneously.
[coordination]  
backend_url = None (String) The backend URL to use for distributed coordination. If left empty, per-deployment central agent and per-host compute agent won’t do workload partitioning and will only function correctly if a single instance of that service is running.
check_watchers = 10.0 (Floating point) Number of seconds between checks to see if group membership has changed
heartbeat = 1.0 (Floating point) Number of seconds between heartbeats for distributed coordination.
max_retry_interval = 30 (Integer) Maximum number of seconds between retry to join partitioning group
retry_backoff = 1 (Integer) Retry backoff factor when retrying to connect with coordination backend
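For example, to enable workload partitioning across multiple agents, backend_url can point at a coordination backend supported by the tooz library (the Redis host shown is an illustrative assumption):

[coordination]
backend_url = redis://controller:6379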
[database]  
event_connection = None (String) The connection string used to connect to the event database. (if unset, connection is used)
event_time_to_live = -1 (Integer) Number of seconds that events are kept in the database for (<= 0 means forever).
metering_connection = None (String) The connection string used to connect to the metering database. (if unset, connection is used)
metering_time_to_live = -1 (Integer) Number of seconds that samples are kept in the database for (<= 0 means forever).
sql_expire_samples_only = False (Boolean) Indicates if expirer expires only samples. If set to True, expired samples will be deleted, but residual resource and meter definition data will remain.
[meter]  
meter_definitions_cfg_file = meters.yaml (String) Configuration file for defining meter notifications.
[polling]  
partitioning_group_prefix = None (String) Work-load partitioning group prefix. Use only if you want to run multiple polling agents with different config files. For each sub-group of the agent pool with the same partitioning_group_prefix a disjoint subset of pollsters should be loaded.
[publisher]  
telemetry_secret = change this for valid signing (String) Secret value for signing messages. Set value empty if signing is not required to avoid computational overhead.
[publisher_notifier]  
event_topic = event (String) The topic that ceilometer uses for event notifications.
metering_topic = metering (String) The topic that ceilometer uses for metering notifications.
telemetry_driver = messagingv2 (String) The driver that ceilometer uses for metering notifications.
Description of logging configuration options
Configuration option = Default value Description
[DEFAULT]  
nova_http_log_debug = False (Boolean) DEPRECATED: Allow novaclient’s debug log output. (Use default_log_levels instead)
Description of HTTP dispatcher configuration options
Configuration option = Default value Description
[dispatcher_http]  
event_target = None (String) The target to which the HTTP request for event data will be sent. If this is not set, it defaults to the same value as the sample target.
target = (String) The target where the http request will be sent. If this is not set, no data will be posted. For example: target = http://hostname:1234/path
timeout = 5 (Integer) The maximum time in seconds to wait for a request before it times out.
verify_ssl = None (String) The path to a server certificate or directory if the system CAs are not used or if a self-signed certificate is used. Set to False to ignore SSL cert verification.
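A minimal sketch of an HTTP dispatcher configuration follows. The endpoint http://hostname:1234/path is the illustrative example from the table above, and the /events path for event data is a hypothetical choice:

[DEFAULT]
meter_dispatchers = http

[dispatcher_http]
target = http://hostname:1234/path
event_target = http://hostname:1234/events
timeout = 10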
Description of events configuration options
Configuration option = Default value Description
[event]  
definitions_cfg_file = event_definitions.yaml (String) Configuration file for event definitions.
drop_unmatched_notifications = False (Boolean) Drop notifications if no event definition matches. (Otherwise, we convert them with just the default traits)
store_raw = [] (Multi-valued) Store the raw notification for select priority levels (info and/or error). By default, raw details are not captured.
[notification]  
ack_on_event_error = True (Boolean) Acknowledge message when event persistence fails.
workers = 1 (Integer) Number of workers for notification service, default value is 1.
workload_partitioning = False (Boolean) Enable workload partitioning, allowing multiple notification agents to be run simultaneously.
Description of exchange configuration options
Configuration option = Default value Description
[DEFAULT]  
ceilometer_control_exchange = ceilometer (String) Exchange name for ceilometer notifications.
cinder_control_exchange = cinder (String) Exchange name for Cinder notifications.
dns_control_exchange = central (String) Exchange name for DNS service notifications.
glance_control_exchange = glance (String) Exchange name for Glance notifications.
heat_control_exchange = heat (String) Exchange name for Heat notifications
http_control_exchanges = ['nova', 'glance', 'neutron', 'cinder'] (Multi-valued) Exchange names to listen to for notifications.
ironic_exchange = ironic (String) Exchange name for Ironic notifications.
keystone_control_exchange = keystone (String) Exchange name for Keystone notifications.
magnum_control_exchange = magnum (String) Exchange name for Magnum notifications.
neutron_control_exchange = neutron (String) Exchange name for Neutron notifications.
nova_control_exchange = nova (String) Exchange name for Nova notifications.
sahara_control_exchange = sahara (String) Exchange name for Data Processing notifications.
sample_source = openstack (String) Source for samples emitted on this instance.
swift_control_exchange = swift (String) Exchange name for Swift notifications.
trove_control_exchange = trove (String) Exchange name for DBaaS notifications.
Description of Hyper-V configuration options
Configuration option = Default value Description
[hyperv]  
force_volumeutils_v1 = False (Boolean) DEPRECATED: Force V1 volume utility class
Description of inspector configuration options
Configuration option = Default value Description
[DEFAULT]  
hypervisor_inspector = libvirt (String) Inspector to use for inspecting the hypervisor layer. Known inspectors are libvirt, hyperv, vmware, xenapi and powervm.
libvirt_type = kvm (String) Libvirt domain type.
libvirt_uri = (String) Override the default libvirt URI (which is dependent on libvirt_type).
Description of IPMI configuration options
Configuration option = Default value Description
[ipmi]  
node_manager_init_retry = 3 (Integer) Number of retries upon Intel Node Manager initialization failure
polling_retry = 3 (Integer) Tolerance of IPMI/NM polling failures before disabling this pollster. A negative value indicates retrying forever.
Description of notification configuration options
Configuration option = Default value Description
[notification]  
batch_size = 100 (Integer) Number of notification messages to wait before publishing them. Batching is advised when transformations are applied in pipeline.
batch_timeout = 5 (Integer) Number of seconds to wait before publishing samples when batch_size is not reached (None means indefinitely)
disable_non_metric_meters = True (Boolean) WARNING: Ceilometer historically offered the ability to store events as meters. This usage is NOT advised as it can flood the metering database and cause performance degradation.
messaging_urls = [] (Multi-valued) Messaging URLs to listen for notifications. Example: rabbit://user:pass@host1:port1[,user:pass@hostN:portN]/virtual_host (DEFAULT/transport_url is used if empty). This is useful when you have dedicated messaging nodes for each service, for example, all nova notifications go to rabbit-nova:5672, while all cinder notifications go to rabbit-cinder:5672.
pipeline_processing_queues = 10 (Integer) Number of queues to parallelize workload across. This value should be larger than the number of active notification agents for optimal results. WARNING: Once set, lowering this value may result in lost data.
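For the dedicated messaging nodes described above, messaging_urls is a multi-valued option and is therefore repeated once per URL (hosts and credentials are illustrative):

[notification]
messaging_urls = rabbit://user:pass@rabbit-nova:5672/
messaging_urls = rabbit://user:pass@rabbit-cinder:5672/
batch_size = 100
batch_timeout = 5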
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.
Description of RADOS gateway configuration options
Configuration option = Default value Description
[rgw_admin_credentials]  
access_key = None (String) Access key for Radosgw Admin.
secret_key = None (String) Secret key for Radosgw Admin.
Description of service types configuration options
Configuration option = Default value Description
[service_types]  
glance = image (String) Glance service type.
kwapi = energy (String) Kwapi service type.
neutron = network (String) Neutron service type.
neutron_lbaas_version = v2 (String) Neutron load balancer version.
nova = compute (String) Nova service type.
radosgw = object-store (String) Radosgw service type.
swift = object-store (String) Swift service type.
Description of storage configuration options
Configuration option = Default value Description
[storage]  
max_retries = 10 (Integer) Maximum number of connection retries during startup. Set to -1 to specify an infinite retry count.
retry_interval = 10 (Integer) Interval (in seconds) between retries of connection.
Description of swift configuration options
Configuration option = Default value Description
[DEFAULT]  
reseller_prefix = AUTH_ (String) Swift reseller prefix. Must be on par with reseller_prefix in proxy-server.conf.
Description of TripleO configuration options
Configuration option = Default value Description
[hardware]  
meter_definitions_file = snmp.yaml (String) Configuration file for defining hardware snmp meters.
readonly_user_auth_proto = None (String) SNMPd v3 authentication algorithm of all the nodes running in the cloud
readonly_user_name = ro_snmp_user (String) SNMPd user name of all nodes running in the cloud.
readonly_user_password = password (String) SNMPd v3 authentication password of all the nodes running in the cloud.
readonly_user_priv_password = None (String) SNMPd v3 encryption password of all the nodes running in the cloud.
readonly_user_priv_proto = None (String) SNMPd v3 encryption algorithm of all the nodes running in the cloud
url_scheme = snmp:// (String) URL scheme to use for hardware nodes.
Description of VMware configuration options
Configuration option = Default value Description
[vmware]  
api_retry_count = 10 (Integer) Number of times a VMware vSphere API may be retried.
ca_file = None (String) CA bundle file to use in verifying the vCenter server certificate.
host_ip = (String) IP address of the VMware vSphere host.
host_password = (String) Password of VMware vSphere.
host_port = 443 (Port number) Port of the VMware vSphere host.
host_username = (String) Username of VMware vSphere.
insecure = False (Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if “ca_file” is set.
task_poll_interval = 0.5 (Floating point) Sleep time in seconds for polling an ongoing async task.
wsdl_location = None (String) Optional vim service WSDL location, e.g. http://<server>/vimService.wsdl. Optional override of the default location for bug workarounds.
Description of XenAPI configuration options
Configuration option = Default value Description
[xenapi]  
connection_password = None (String) Password for connection to XenServer/Xen Cloud Platform.
connection_url = None (String) URL for connection to XenServer/Xen Cloud Platform.
connection_username = root (String) Username for connection to XenServer/Xen Cloud Platform.
Description of Message service configuration options
Configuration option = Default value Description
[DEFAULT]  
zaqar_control_exchange = zaqar (String) Exchange name for Messaging service notifications.

Telemetry Alarming service configuration options

The following tables provide a comprehensive list of the Telemetry Alarming service configuration options.

Description of API configuration options
Configuration option = Default value Description
[api]  
alarm_max_actions = -1 (Integer) Maximum count of actions for each state of an alarm, non-positive number means no limit.
enable_combination_alarms = False (Boolean) DEPRECATED: Enable deprecated combination alarms. Combination alarms are deprecated. This option and combination alarms will be removed in Aodh 5.0.
paste_config = api_paste.ini (String) Configuration file for WSGI definition of API.
pecan_debug = False (Boolean) Toggle Pecan Debug Middleware.
project_alarm_quota = None (Integer) Maximum number of alarms defined for a project.
user_alarm_quota = None (Integer) Maximum number of alarms defined for a user.
workers = 1 (Integer) Number of workers for aodh API server.
[oslo_middleware]  
enable_proxy_headers_parsing = False (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not.
max_request_body_size = 114688 (Integer) The maximum body size for each request, in bytes.
secure_proxy_ssl_header = X-Forwarded-Proto (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy.
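For example, to cap the number of alarms that users and projects may define, the quota options above can be set in aodh.conf (the values are illustrative; None means unlimited):

[api]
user_alarm_quota = 10
project_alarm_quota = 50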
Description of common configuration options
Configuration option = Default value Description
[DEFAULT]  
additional_ingestion_lag = 0 (Integer) The number of seconds to extend the evaluation windows to compensate the reporting/ingestion lag.
evaluation_interval = 60 (Integer) Period of evaluation cycle; should be greater than or equal to the configured pipeline interval for collection of underlying meters.
event_alarm_cache_ttl = 60 (Integer) TTL of event alarm caches, in seconds. Set to 0 to disable caching.
executor_thread_pool_size = 64 (Integer) Size of executor thread pool.
http_timeout = 600 (Integer) Timeout seconds for HTTP requests. Set it to None to disable timeout.
notifier_topic = alarming (String) The topic that aodh uses for alarm notifier messages.
record_history = True (Boolean) Record alarm change events.
rest_notifier_ca_bundle_certificate_path = None (String) SSL CA_BUNDLE certificate for REST notifier
rest_notifier_certificate_file = (String) SSL Client certificate file for REST notifier.
rest_notifier_certificate_key = (String) SSL Client private key file for REST notifier.
rest_notifier_max_retries = 0 (Integer) Number of retries for REST notifier
rest_notifier_ssl_verify = True (Boolean) Whether to verify the SSL Server certificate when calling alarm action.
[database]  
alarm_history_time_to_live = -1 (Integer) Number of seconds that alarm histories are kept in the database for (<= 0 means forever).
[evaluator]  
workers = 1 (Integer) Number of workers for evaluator service. Default value is 1.
[listener]  
batch_size = 1 (Integer) Number of notification messages to wait before dispatching them.
batch_timeout = None (Integer) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely).
event_alarm_topic = alarm.all (String) The topic that aodh uses for event alarm evaluation.
workers = 1 (Integer) Number of workers for listener service. Default value is 1.
[notifier]  
batch_size = 1 (Integer) Number of notification messages to wait before dispatching them.
batch_timeout = None (Integer) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely).
workers = 1 (Integer) Number of workers for notifier service. Default value is 1.
[service_credentials]  
interface = public (String) Type of endpoint in Identity service catalog to use for communication with OpenStack services.
region_name = None (String) Region name to use for OpenStack service endpoints.
[service_types]  
zaqar = messaging (String) Message queue service type.
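As a sketch, a REST notifier that verifies the alarm-action endpoint against a site CA bundle could be configured with the options above (the certificate path is an illustrative assumption):

[DEFAULT]
rest_notifier_ssl_verify = True
rest_notifier_ca_bundle_certificate_path = /etc/aodh/ca-bundle.pem
rest_notifier_max_retries = 2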
Description of coordination configuration options
Configuration option = Default value Description
[coordination]  
backend_url = None (String) The backend URL to use for distributed coordination. If left empty, per-deployment central agent and per-host compute agent won’t do workload partitioning and will only function correctly if a single instance of that service is running.
check_watchers = 10.0 (Floating point) Number of seconds between checks to see if group membership has changed
heartbeat = 1.0 (Floating point) Number of seconds between heartbeats for distributed coordination.
max_retry_interval = 30 (Integer) Maximum number of seconds between retry to join partitioning group
retry_backoff = 1 (Integer) Retry backoff factor when retrying to connect with coordination backend
Description of Redis configuration options
Configuration option = Default value Description
[matchmaker_redis]  
check_timeout = 20000 (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url
password = (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url
port = 6379 (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url
sentinel_group_name = oslo-messaging-zeromq (String) Redis replica set name.
sentinel_hosts = (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url
socket_timeout = 10000 (Integer) Timeout in ms on blocking socket operations
wait_timeout = 2000 (Integer) Time in ms to wait between connection attempts.

Telemetry log files

The corresponding log file of each Telemetry service is stored in the /var/log/ceilometer/ directory of the host on which each service runs.

Log files used by Telemetry services
Log filename Service that logs to the file
agent-notification.log Telemetry service notification agent
alarm-evaluator.log Telemetry service alarm evaluation
alarm-notifier.log Telemetry service alarm notification
api.log Telemetry service API
ceilometer-dbsync.log Telemetry service database synchronization (informational messages)
central.log Telemetry service central agent
collector.log Telemetry service collection
compute.log Telemetry service compute agent

Telemetry sample configuration files

All the files in this section can be found in the /etc/ceilometer/ directory.

ceilometer.conf

The configuration for the Telemetry services and agents is found in the ceilometer.conf file.

This file must be modified after installation.

[DEFAULT]

#
# From ceilometer
#

# To reduce polling agent load, samples are sent to the notification agent in a
# batch. To gain higher throughput at the cost of load set this to False.
# (boolean value)
#batch_polled_samples = true

# To reduce large requests at same time to Nova or other components from
# different compute agents, shuffle start time of polling task. (integer value)
#shuffle_time_before_polling_task = 0

# Configuration file for WSGI definition of API. (string value)
#api_paste_config = api_paste.ini

# Polling namespace(s) to be used while resource polling (list value)
# Allowed values: compute, central, ipmi
#polling_namespaces = compute,central

# List of pollsters (or wildcard templates) to be used while polling (list
# value)
#pollster_list =

# Exchange name for Nova notifications. (string value)
#nova_control_exchange = nova

# List of metadata prefixes reserved for metering use. (list value)
#reserved_metadata_namespace = metering.

# Limit on length of reserved metadata values. (integer value)
#reserved_metadata_length = 256

# List of metadata keys reserved for metering use. And these keys are
# additional to the ones included in the namespace. (list value)
#reserved_metadata_keys =

# Inspector to use for inspecting the hypervisor layer. Known inspectors are
# libvirt, hyperv, vmware, xenapi and powervm. (string value)
#hypervisor_inspector = libvirt

# Libvirt domain type. (string value)
# Allowed values: kvm, lxc, qemu, uml, xen
#libvirt_type = kvm

# Override the default libvirt URI (which is dependent on libvirt_type).
# (string value)
#libvirt_uri =

# Dispatchers to process metering data. (multi valued)
# Deprecated group/name - [DEFAULT]/dispatcher
#meter_dispatchers = database

# Dispatchers to process event data. (multi valued)
# Deprecated group/name - [DEFAULT]/dispatcher
#event_dispatchers =

# Exchange name for Ironic notifications. (string value)
#ironic_exchange = ironic

# Exchange names to listen on for notifications. (multi valued)
#http_control_exchanges = nova
#http_control_exchanges = glance
#http_control_exchanges = neutron
#http_control_exchanges = cinder

# Exchange name for Neutron notifications. (string value)
#neutron_control_exchange = neutron

# DEPRECATED: Allow novaclient's debug log output. (Use default_log_levels
# instead) (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#nova_http_log_debug = false

# Swift reseller prefix. Must match the reseller_prefix setting in proxy-
# server.conf. (string value)
#reseller_prefix = AUTH_

# Configuration file for pipeline definition. (string value)
#pipeline_cfg_file = pipeline.yaml

# Configuration file for event pipeline definition. (string value)
#event_pipeline_cfg_file = event_pipeline.yaml

# Refresh Pipeline configuration on-the-fly. (boolean value)
#refresh_pipeline_cfg = false

# Refresh Event Pipeline configuration on-the-fly. (boolean value)
#refresh_event_pipeline_cfg = false

# Polling interval for pipeline file configuration in seconds. (integer value)
#pipeline_polling_interval = 20

# Source for samples emitted on this instance. (string value)
#sample_source = openstack

# Name of this node, which must be valid in an AMQP key. Can be an opaque
# identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address.
# (string value)
#host = <your_hostname>

# Timeout in seconds for HTTP requests. Set it to None to disable the timeout.
# (integer value)
#http_timeout = 600

# Path to the rootwrap configuration file to use for running commands as root.
# (string value)
#rootwrap_config = /etc/ceilometer/rootwrap.conf

#
# From oslo.log
#

# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false

# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true

# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>

# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S

# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>

# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>

# Use a logging handler that watches the file system. When the log file is
# moved or removed, this handler immediately opens a new log file at the
# specified path. This only makes sense when the log_file option is specified
# and the platform is Linux. This option is ignored if log_config_append is
# set. (boolean value)
#watch_log_file = false

# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false

# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER

# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true

# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s

# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s

# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d

# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s

# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s

# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO

# Enables or disables publication of error events. (boolean value)
#publish_errors = false

# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "

# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "

# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false

#
# From oslo.messaging
#

# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30

# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2

# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200

# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false

# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64

# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60

# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>

# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit

# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack

#
# From oslo.service.service
#

# Enable eventlet backdoor.  Acceptable values are 0, <port>, and
# <start>:<end>, where 0 results in listening on a random tcp port number;
# <port> results in listening on the specified port number (and not enabling
# backdoor if that port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range of port numbers.
# The chosen port is displayed in the service's log file. (string value)
#backdoor_port = <None>

# Enable eventlet backdoor, using the provided path as a unix socket that can
# receive connections. This option is mutually exclusive with 'backdoor_port'
# in that only one should be provided. If both are provided then the existence
# of this option overrides the usage of that option. (string value)
#backdoor_socket = <None>

# Enables or disables logging values of all registered options when starting a
# service (at DEBUG level). (boolean value)
#log_options = true

# Specify a timeout after which a gracefully shutdown server will exit. Zero
# value means endless wait. (integer value)
#graceful_shutdown_timeout = 60


[api]

#
# From ceilometer
#

# Toggle Pecan Debug Middleware. (boolean value)
#pecan_debug = false

# Default maximum number of items returned by API request. (integer value)
# Minimum value: 1
#default_api_return_limit = 100


[collector]

#
# From ceilometer
#

# Address to which the UDP socket is bound. Set to an empty string to disable.
# (string value)
#udp_address = 0.0.0.0

# Port to which the UDP socket is bound. (port value)
# Minimum value: 0
# Maximum value: 65535
#udp_port = 4952

# Number of notification messages to wait before dispatching them (integer
# value)
#batch_size = 1

# Number of seconds to wait before dispatching samples when batch_size is not
# reached (None means indefinitely) (integer value)
#batch_timeout = <None>

# Number of workers for the collector service. Default value is 1. (integer value)
# Minimum value: 1
# Deprecated group/name - [DEFAULT]/collector_workers
#workers = 1


[compute]

#
# From ceilometer
#

# Enable work-load partitioning, allowing multiple compute agents to be run
# simultaneously. (boolean value)
#workload_partitioning = false

# New instances will be discovered periodically based on this option (in
# seconds). By default, the agent discovers instances according to pipeline
# polling interval. If option is greater than 0, the instance list to poll will
# be updated based on this option's interval. Measurements relating to the
# instances will match intervals defined in pipeline. (integer value)
# Minimum value: 0
#resource_update_interval = 0


[coordination]

#
# From ceilometer
#

# The backend URL to use for distributed coordination. If left empty, per-
# deployment central agent and per-host compute agent won't do workload
# partitioning and will only function correctly if a single instance of that
# service is running. (string value)
#backend_url = <None>

# Number of seconds between heartbeats for distributed coordination. (floating
# point value)
#heartbeat = 1.0

# Number of seconds between checks to see if group membership has changed
# (floating point value)
#check_watchers = 10.0

# Retry backoff factor when retrying to connect with the coordination backend
# (integer value)
#retry_backoff = 1

# Maximum number of seconds between retry to join partitioning group (integer
# value)
#max_retry_interval = 30


[cors]

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-Openstack-Request-Id

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-Openstack-Request-Id


[cors.subdomain]

#
# From oslo.middleware.cors
#

# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>

# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true

# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-Openstack-Request-Id

# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600

# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH

# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-Openstack-Request-Id


[database]

#
# From ceilometer
#

# Number of seconds that samples are kept in the database for (<= 0 means
# forever). (integer value)
# Deprecated group/name - [database]/time_to_live
#metering_time_to_live = -1

# Number of seconds that events are kept in the database for (<= 0 means
# forever). (integer value)
#event_time_to_live = -1

# The connection string used to connect to the metering database. (if unset,
# connection is used) (string value)
#metering_connection = <None>

# The connection string used to connect to the event database. (if unset,
# connection is used) (string value)
#event_connection = <None>

# Indicates whether the expirer expires only samples. If set to true, expired
# samples will be deleted, but residual resource and meter definition data
# will remain.
# (boolean value)
#sql_expire_samples_only = false

#
# From oslo.db
#

# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite

# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true

# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy

# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>

# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>

# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL

# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600

# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1

# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5

# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10

# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10

# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50

# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0

# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false

# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>

# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false

# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1

# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true

# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10

# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20


[dispatcher_file]

#
# From ceilometer
#

# Name and location of the file used to record meters. (string value)
#file_path = <None>

# The max size of the file. (integer value)
#max_bytes = 0

# The max number of the files to keep. (integer value)
#backup_count = 0


[dispatcher_gnocchi]

#
# From ceilometer
#

# Filter out samples generated by Gnocchi service activity (boolean value)
#filter_service_activity = true

# Gnocchi project used to filter out samples generated by Gnocchi service
# activity (string value)
#filter_project = gnocchi

# The archive policy to use when the dispatcher creates a new metric. (string
# value)
#archive_policy = <None>

# The YAML file that defines the mapping between samples and Gnocchi
# resources/metrics (string value)
#resources_definition_file = gnocchi_resources.yaml


[event]

#
# From ceilometer
#

# Configuration file for event definitions. (string value)
#definitions_cfg_file = event_definitions.yaml

# Drop notifications if no event definition matches. (Otherwise, we convert
# them with just the default traits) (boolean value)
#drop_unmatched_notifications = false

# Store the raw notification for select priority levels (info and/or error). By
# default, raw details are not captured. (multi valued)
#store_raw =


[exchange_control]

#
# From ceilometer
#

# Exchange name for Heat notifications (string value)
#heat_control_exchange = heat

# Exchange name for Glance notifications. (string value)
#glance_control_exchange = glance

# Exchange name for Keystone notifications. (string value)
#keystone_control_exchange = keystone

# Exchange name for Cinder notifications. (string value)
#cinder_control_exchange = cinder

# Exchange name for Data Processing notifications. (string value)
#sahara_control_exchange = sahara

# Exchange name for Swift notifications. (string value)
#swift_control_exchange = swift

# Exchange name for Magnum notifications. (string value)
#magnum_control_exchange = magnum

# Exchange name for DBaaS notifications. (string value)
#trove_control_exchange = trove

# Exchange name for Messaging service notifications. (string value)
#zaqar_control_exchange = zaqar

# Exchange name for DNS service notifications. (string value)
#dns_control_exchange = central


[hardware]

#
# From ceilometer
#

# URL scheme to use for hardware nodes. (string value)
#url_scheme = snmp://

# SNMPd user name of all nodes running in the cloud. (string value)
#readonly_user_name = ro_snmp_user

# SNMPd v3 authentication password of all the nodes running in the cloud.
# (string value)
#readonly_user_password = password

# SNMPd v3 authentication algorithm of all the nodes running in the cloud
# (string value)
# Allowed values: md5, sha
#readonly_user_auth_proto = <None>

# SNMPd v3 encryption algorithm of all the nodes running in the cloud (string
# value)
# Allowed values: des, aes128, 3des, aes192, aes256
#readonly_user_priv_proto = <None>

# SNMPd v3 encryption password of all the nodes running in the cloud. (string
# value)
#readonly_user_priv_password = <None>


[ipmi]

#
# From ceilometer
#

# Number of retries upon Intel Node Manager initialization failure (integer
# value)
#node_manager_init_retry = 3

# Tolerance of IPMI/NM polling failures before disabling this pollster. A
# negative value means retry forever. (integer value)
#polling_retry = 3


[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. (string
# value)
#auth_uri = <None>

# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>

# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>

# Number of times to retry when communicating with the Identity API server.
# (integer value)
#http_request_max_retries = 3

# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>

# Required if identity server requires client certificate (string value)
#certfile = <None>

# Required if identity server requires client certificate (string value)
#keyfile = <None>

# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>

# Verify HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name = <None>

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>

# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>

# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance. Only
# valid for PKI tokens. (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None

# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>

# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to:
# "disabled" to not check token binding; "permissive" (default) to validate
# binding information if the bind type is of a form known to the server and
# ignore it if not; "strict", like "permissive" except that the token is
# rejected if the bind type is unknown; "required", meaning any form of token
# binding is needed; or the name of a binding method that must be present in
# tokens. (string value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5

# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>


[matchmaker_redis]

#
# From oslo.messaging
#

# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1

# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379

# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =

# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =

# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq

# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000

# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000

# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000


[meter]

#
# From ceilometer
#

# Configuration file for defining meter notifications. (string value)
#meter_definitions_cfg_file = meters.yaml


[notification]

#
# From ceilometer
#

# Number of queues to parallelize workload across. This value should be larger
# than the number of active notification agents for optimal results. WARNING:
# Once set, lowering this value may result in lost data. (integer value)
# Minimum value: 1
#pipeline_processing_queues = 10

# Acknowledge message when event persistence fails. (boolean value)
# Deprecated group/name - [collector]/ack_on_event_error
#ack_on_event_error = true

# WARNING: Ceilometer historically offered the ability to store events as
# meters. This usage is NOT advised as it can flood the metering database and
# cause performance degradation. (boolean value)
#disable_non_metric_meters = true

# Enable workload partitioning, allowing multiple notification agents to be run
# simultaneously. (boolean value)
#workload_partitioning = false

# Messaging URLs to listen for notifications. Example:
# rabbit://user:pass@host1:port1[,user:pass@hostN:portN]/virtual_host
# (DEFAULT/transport_url is used if empty). This is useful when you have
# dedicated messaging nodes for each service, for example, all nova
# notifications go to rabbit-nova:5672, while all cinder notifications go to
# rabbit-cinder:5672. (multi valued)
#messaging_urls =

# Number of notification messages to wait for before publishing them. Batching
# is advised when transformations are applied in the pipeline. (integer value)
# Minimum value: 1
#batch_size = 100

# Number of seconds to wait before publishing samples when batch_size is not
# reached (None means wait indefinitely). (integer value)
#batch_timeout = 5

# Number of workers for notification service, default value is 1. (integer
# value)
# Minimum value: 1
# Deprecated group/name - [DEFAULT]/notification_workers
#workers = 1
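#
# For example, to run several notification agents in parallel, a setup along
# these lines could be used (illustrative values only; workload partitioning
# also requires a coordination backend to be configured):
# workload_partitioning = true
# pipeline_processing_queues = 20
# workers = 2
#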


[oslo_concurrency]

#
# From oslo.concurrency
#

# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false

# Directory to use for lock files.  For security, the specified directory
# should only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>


[oslo_messaging_amqp]

#
# From oslo.messaging
#

# Name for the AMQP container. Must be globally unique. Defaults to a generated
# UUID (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>

# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0

# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false

# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =

# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =

# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =

# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>

# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false

# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =

# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =

# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =

# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =

# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =

# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1

# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2

# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30

# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10

# The deadline for an rpc reply message delivery. Only used when caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30

# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30

# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30

# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy'   - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic'  - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic

# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive

# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast

# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast

# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc

# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify

# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast

# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast

# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast

# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>

# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>

# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200

# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100

# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100


[oslo_messaging_notifications]

#
# From oslo.messaging
#

# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =

# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>

# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications


[oslo_messaging_rabbit]

#
# From oslo.messaging
#

# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false

# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false

# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =

# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =

# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =

# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =

# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0

# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>

# How long to wait for a missing client before giving up on sending it its
# replies. This value should not be longer than rpc_response_timeout. (integer
# value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60

# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin

# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost

# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672

# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port

# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false

# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest

# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest

# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN

# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
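#
# The deprecated single-node options above are replaced by a single
# [DEFAULT]/transport_url setting, for example (placeholder credentials):
# transport_url = rabbit://guest:guest@localhost:5672/
#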

# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1

# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2

# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30

# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0

# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false

# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800

# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0

# Number of seconds after which the Rabbit broker is considered down if the
# heartbeat keep-alive fails (0 disables the heartbeat). EXPERIMENTAL (integer
# value)
#heartbeat_timeout_threshold = 60

# How many times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 2

# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false

# Maximum number of channels to allow (integer value)
#channel_max = <None>

# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>

# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3

# Enable SSL (boolean value)
#ssl = <None>

# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>

# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25

# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
# value)
#tcp_user_timeout = 0.25

# Set the delay for reconnecting to a host after a connection error. (floating
# point value)
#host_connection_reconnect_delay = 0.25

# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single

# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30

# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0

# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30

# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600

# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60

# Persist notification messages. (boolean value)
#notification_persistence = false

# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification

# Maximum number of unacknowledged messages that RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25

# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60

# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc

# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply

# Maximum number of unacknowledged messages that RabbitMQ can send to the rpc
# listener. (integer value)
#rpc_listener_prefetch_count = 100

# Maximum number of unacknowledged messages that RabbitMQ can send to the rpc
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100

# Reconnecting retry count in case of connectivity problem during sending
# reply. -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25

# Reconnecting retry count in case of connectivity problem during sending RPC
# message; -1 means infinite retry. If the actual number of retry attempts is
# not 0, the RPC request could be processed more than once. (integer value)
#default_rpc_retry_attempts = -1

# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25


[oslo_messaging_zmq]

#
# From oslo.messaging
#

# ZeroMQ bind address. Should be a wildcard (*), an Ethernet interface, or an
# IP address. The "host" option should point or resolve to this address.
# (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *

# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis

# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1

# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>

# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack

# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost

# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1

# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1

# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300

# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180

# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true

# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true

# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153

# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536

# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100

# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json

# This option configures round-robin mode in the zmq socket. True means a
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server disconnects; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false


[oslo_policy]

#
# From oslo.policy
#

# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json

# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default

# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched.  Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
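#
# A minimal policy.json could look like the following (hypothetical rule
# names; the actual rules are defined by each service):
# {
#     "context_is_admin": "role:admin",
#     "default": "rule:context_is_admin"
# }
#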


[polling]

#
# From ceilometer
#

# Workload partitioning group prefix. Use only if you want to run multiple
# polling agents with different config files. For each sub-group of the agent
# pool with the same partitioning_group_prefix a disjoint subset of pollsters
# should be loaded. (string value)
# Deprecated group/name - [central]/partitioning_group_prefix
#partitioning_group_prefix = <None>


[publisher]

#
# From ceilometer
#

# Secret value for signing messages. Leave the value empty if signing is not
# required, to avoid computational overhead. (string value)
# Deprecated group/name - [DEFAULT]/metering_secret
# Deprecated group/name - [publisher_rpc]/metering_secret
# Deprecated group/name - [publisher]/metering_secret
#telemetry_secret = change this for valid signing


[publisher_notifier]

#
# From ceilometer
#

# The topic that ceilometer uses for metering notifications. (string value)
#metering_topic = metering

# The topic that ceilometer uses for event notifications. (string value)
#event_topic = event

# The driver that ceilometer uses for metering notifications. (string value)
# Deprecated group/name - [publisher_notifier]/metering_driver
#telemetry_driver = messagingv2


[rgw_admin_credentials]

#
# From ceilometer
#

# Access key for Radosgw Admin. (string value)
#access_key = <None>

# Secret key for Radosgw Admin. (string value)
#secret_key = <None>


[service_credentials]

#
# From ceilometer
#

# Region name to use for OpenStack service endpoints. (string value)
# Deprecated group/name - [DEFAULT]/os_region_name
#region_name = <None>

# Type of endpoint in Identity service catalog to use for communication with
# OpenStack services. (string value)
# Allowed values: public, internal, admin, auth, publicURL, internalURL, adminURL
# Deprecated group/name - [service_credentials]/os_endpoint_type
#interface = public

# Authentication type to load (string value)
# Deprecated group/name - [service_credentials]/auth_plugin
#auth_type = <None>

# Config Section from which to load plugin specific options (string value)
#auth_section = <None>

# Authentication URL (string value)
#auth_url = <None>

# Domain ID to scope to (string value)
#domain_id = <None>

# Domain name to scope to (string value)
#domain_name = <None>

# Project ID to scope to (string value)
# Deprecated group/name - [service_credentials]/tenant-id
#project_id = <None>

# Project name to scope to (string value)
# Deprecated group/name - [service_credentials]/tenant-name
#project_name = <None>

# Domain ID containing project (string value)
#project_domain_id = <None>

# Domain name containing project (string value)
#project_domain_name = <None>

# Trust ID (string value)
#trust_id = <None>

# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>

# Optional domain name to use with v3 API and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>

# User id (string value)
#user_id = <None>

# Username (string value)
# Deprecated group/name - [service_credentials]/user-name
#username = <None>

# User's domain id (string value)
#user_domain_id = <None>

# User's domain name (string value)
#user_domain_name = <None>

# User's password (string value)
#password = <None>
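#
# A typical password-authentication setup looks like the following
# (placeholder values):
# auth_type = password
# auth_url = http://controller:5000/v3
# project_domain_name = Default
# user_domain_name = Default
# project_name = service
# username = ceilometer
# password = CEILOMETER_PASS
#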


[service_types]

#
# From ceilometer
#

# Kwapi service type. (string value)
#kwapi = energy

# Glance service type. (string value)
#glance = image

# Neutron service type. (string value)
#neutron = network

# Neutron load balancer version. (string value)
# Allowed values: v1, v2
#neutron_lbaas_version = v2

# Nova service type. (string value)
#nova = compute

# Radosgw service type. (string value)
#radosgw = object-store

# Swift service type. (string value)
#swift = object-store


[storage]

#
# From ceilometer
#

# Maximum number of connection retries during startup. Set to -1 to specify an
# infinite retry count. (integer value)
# Deprecated group/name - [database]/max_retries
#max_retries = 10

# Interval (in seconds) between retries of connection. (integer value)
# Deprecated group/name - [database]/retry_interval
#retry_interval = 10


[vmware]

#
# From ceilometer
#

# IP address of the VMware vSphere host. (string value)
#host_ip =

# Port of the VMware vSphere host. (port value)
# Minimum value: 0
# Maximum value: 65535
#host_port = 443

# Username of VMware vSphere. (string value)
#host_username =

# Password of VMware vSphere. (string value)
#host_password =

# CA bundle file to use in verifying the vCenter server certificate. (string
# value)
#ca_file = <None>

# If true, the vCenter server certificate is not verified. If false, then the
# default CA truststore is used for verification. This option is ignored if
# "ca_file" is set. (boolean value)
#insecure = false

# Number of times a VMware vSphere API may be retried. (integer value)
#api_retry_count = 10

# Sleep time in seconds for polling an ongoing async task. (floating point
# value)
#task_poll_interval = 0.5

# Optional VIM service WSDL location, for example
# http://<server>/vimService.wsdl. Overrides the default location; useful as a
# workaround for WSDL bugs. (string value)
#wsdl_location = <None>


[xenapi]

#
# From ceilometer
#

# URL for connection to XenServer/Xen Cloud Platform. (string value)
#connection_url = <None>

# Username for connection to XenServer/Xen Cloud Platform. (string value)
#connection_username = root

# Password for connection to XenServer/Xen Cloud Platform. (string value)
#connection_password = <None>
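The sample file above is plain INI. OpenStack services read it with oslo.config, but as a quick sanity check any INI parser will do; the following standalone Python sketch parses a fragment in the same format (note how options in [DEFAULT] are visible from every section):

```python
import configparser

# A fragment in the same key=value, [section]-grouped format as the
# sample configuration file above.
sample = """
[DEFAULT]
debug = true

[oslo_messaging_rabbit]
rabbit_retry_interval = 1
rabbit_interval_max = 30
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

# Typed accessors convert the string values.
assert parser.getboolean("DEFAULT", "debug") is True
assert parser.getint("oslo_messaging_rabbit", "rabbit_retry_interval") == 1

# [DEFAULT] options are inherited by every other section.
assert parser.getboolean("oslo_messaging_rabbit", "debug") is True
```

This is only a reading aid; it does not validate option types, ranges, or deprecations the way oslo.config does.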

event_definitions.yaml

The event_definitions.yaml file defines how events received from other OpenStack components should be translated to Telemetry events.

This file provides a standard set of events and corresponding traits that may be of interest. This file can be modified to add and drop traits that operators may find useful.

---
- event_type: 'compute.instance.*'
  traits: &instance_traits
    tenant_id:
      fields: payload.tenant_id
    user_id:
      fields: payload.user_id
    instance_id:
      fields: payload.instance_id
    host:
      fields: publisher_id.`split(., 1, 1)`
    service:
      fields: publisher_id.`split(., 0, -1)`
    memory_mb:
      type: int
      fields: payload.memory_mb
    disk_gb:
      type: int
      fields: payload.disk_gb
    root_gb:
      type: int
      fields: payload.root_gb
    ephemeral_gb:
      type: int
      fields: payload.ephemeral_gb
    vcpus:
      type: int
      fields: payload.vcpus
    instance_type_id:
      type: int
      fields: payload.instance_type_id
    instance_type:
      fields: payload.instance_type
    state:
      fields: payload.state
    os_architecture:
      fields: payload.image_meta.'org.openstack__1__architecture'
    os_version:
      fields: payload.image_meta.'org.openstack__1__os_version'
    os_distro:
      fields: payload.image_meta.'org.openstack__1__os_distro'
    launched_at:
      type: datetime
      fields: payload.launched_at
    deleted_at:
      type: datetime
      fields: payload.deleted_at
- event_type: compute.instance.exists
  traits:
    <<: *instance_traits
    audit_period_beginning:
      type: datetime
      fields: payload.audit_period_beginning
    audit_period_ending:
      type: datetime
      fields: payload.audit_period_ending
- event_type: ['volume.exists', 'volume.create.*', 'volume.delete.*', 'volume.resize.*', 'volume.attach.*', 'volume.detach.*', 'volume.update.*', 'snapshot.exists', 'snapshot.create.*', 'snapshot.delete.*', 'snapshot.update.*']
  traits: &cinder_traits
    user_id:
      fields: payload.user_id
    project_id:
      fields: payload.tenant_id
    availability_zone:
      fields: payload.availability_zone
    display_name:
      fields: payload.display_name
    replication_status:
      fields: payload.replication_status
    status:
      fields: payload.status
    created_at:
      fields: payload.created_at
- event_type: ['volume.exists', 'volume.create.*', 'volume.delete.*', 'volume.resize.*', 'volume.attach.*', 'volume.detach.*', 'volume.update.*']
  traits:
    <<: *cinder_traits
    resource_id:
      fields: payload.volume_id
    host:
      fields: payload.host
    size:
      fields: payload.size
    type:
      fields: payload.volume_type
    replication_status:
      fields: payload.replication_status
- event_type: ['snapshot.exists', 'snapshot.create.*', 'snapshot.delete.*', 'snapshot.update.*']
  traits:
    <<: *cinder_traits
    resource_id:
      fields: payload.snapshot_id
    volume_id:
      fields: payload.volume_id
- event_type: ['image_volume_cache.*']
  traits:
    image_id:
      fields: payload.image_id
    host:
      fields: payload.host
- event_type: ['image.create', 'image.update', 'image.upload', 'image.delete']
  traits: &glance_crud
    project_id:
      fields: payload.owner
    resource_id:
      fields: payload.id
    name:
      fields: payload.name
    status:
      fields: payload.status
    created_at:
      fields: payload.created_at
    user_id:
      fields: payload.owner
    deleted_at:
      fields: payload.deleted_at
    size:
      fields: payload.size
- event_type: image.send
  traits: &glance_send
    receiver_project:
      fields: payload.receiver_tenant_id
    receiver_user:
      fields: payload.receiver_user_id
    user_id:
      fields: payload.owner_id
    image_id:
      fields: payload.image_id
    destination_ip:
      fields: payload.destination_ip
    bytes_sent:
      type: int
      fields: payload.bytes_sent
- event_type: orchestration.stack.*
  traits: &orchestration_crud
    project_id:
      fields: payload.tenant_id
    user_id:
      fields: ['_context_trustor_user_id', '_context_user_id']
    resource_id:
      fields: payload.stack_identity
- event_type: sahara.cluster.*
  traits: &sahara_crud
    project_id:
      fields: payload.project_id
    user_id:
      fields: _context_user_id
    resource_id:
      fields: payload.cluster_id
- event_type: sahara.cluster.health
  traits: &sahara_health
    <<: *sahara_crud
    verification_id:
      fields: payload.verification_id
    health_check_status:
      fields: payload.health_check_status
    health_check_name:
      fields: payload.health_check_name
    health_check_description:
      fields: payload.health_check_description
    created_at:
      type: datetime
      fields: payload.created_at
    updated_at:
      type: datetime
      fields: payload.updated_at
- event_type: ['identity.user.*', 'identity.project.*', 'identity.group.*', 'identity.role.*', 'identity.OS-TRUST:trust.*',
               'identity.region.*', 'identity.service.*', 'identity.endpoint.*', 'identity.policy.*']
  traits: &identity_crud
    resource_id:
      fields: payload.resource_info
    initiator_id:
      fields: payload.initiator.id
    project_id:
      fields: payload.initiator.project_id
    domain_id:
      fields: payload.initiator.domain_id
- event_type: identity.role_assignment.*
  traits: &identity_role_assignment
    role:
      fields: payload.role
    group:
      fields: payload.group
    domain:
      fields: payload.domain
    user:
      fields: payload.user
    project:
      fields: payload.project
- event_type: identity.authenticate
  traits: &identity_authenticate
    typeURI:
      fields: payload.typeURI
    id:
      fields: payload.id
    action:
      fields: payload.action
    eventType:
      fields: payload.eventType
    eventTime:
      fields: payload.eventTime
    outcome:
      fields: payload.outcome
    initiator_typeURI:
      fields: payload.initiator.typeURI
    initiator_id:
      fields: payload.initiator.id
    initiator_name:
      fields: payload.initiator.name
    initiator_host_agent:
      fields: payload.initiator.host.agent
    initiator_host_addr:
      fields: payload.initiator.host.address
    target_typeURI:
      fields: payload.target.typeURI
    target_id:
      fields: payload.target.id
    observer_typeURI:
      fields: payload.observer.typeURI
    observer_id:
      fields: payload.observer.id
- event_type: objectstore.http.request
  traits: &objectstore_request
    typeURI:
      fields: payload.typeURI
    id:
      fields: payload.id
    action:
      fields: payload.action
    eventType:
      fields: payload.eventType
    eventTime:
      fields: payload.eventTime
    outcome:
      fields: payload.outcome
    initiator_typeURI:
      fields: payload.initiator.typeURI
    initiator_id:
      fields: payload.initiator.id
    initiator_project_id:
      fields: payload.initiator.project_id
    target_typeURI:
      fields: payload.target.typeURI
    target_id:
      fields: payload.target.id
    target_action:
      fields: payload.target.action
    target_metadata_path:
      fields: payload.target.metadata.path
    target_metadata_version:
      fields: payload.target.metadata.version
    target_metadata_container:
      fields: payload.target.metadata.container
    target_metadata_object:
      fields: payload.target.metadata.object
    observer_id:
      fields: payload.observer.id
- event_type: ['network.*', 'subnet.*', 'port.*', 'router.*', 'floatingip.*', 'pool.*', 'vip.*', 'member.*', 'health_monitor.*', 'healthmonitor.*', 'listener.*', 'loadbalancer.*', 'firewall.*', 'firewall_policy.*', 'firewall_rule.*', 'vpnservice.*', 'ipsecpolicy.*', 'ikepolicy.*', 'ipsec_site_connection.*']
  traits: &network_traits
    user_id:
      fields: _context_user_id
    project_id:
      fields: _context_tenant_id
- event_type: network.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.network.id', 'payload.id']
- event_type: subnet.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.subnet.id', 'payload.id']
- event_type: port.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.port.id', 'payload.id']
- event_type: router.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.router.id', 'payload.id']
- event_type: floatingip.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.floatingip.id', 'payload.id']
- event_type: pool.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.pool.id', 'payload.id']
- event_type: vip.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.vip.id', 'payload.id']
- event_type: member.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.member.id', 'payload.id']
- event_type: health_monitor.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.health_monitor.id', 'payload.id']
- event_type: healthmonitor.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.healthmonitor.id', 'payload.id']
- event_type: listener.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.listener.id', 'payload.id']
- event_type: loadbalancer.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.loadbalancer.id', 'payload.id']
- event_type: firewall.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.firewall.id', 'payload.id']
- event_type: firewall_policy.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.firewall_policy.id', 'payload.id']
- event_type: firewall_rule.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.firewall_rule.id', 'payload.id']
- event_type: vpnservice.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.vpnservice.id', 'payload.id']
- event_type: ipsecpolicy.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.ipsecpolicy.id', 'payload.id']
- event_type: ikepolicy.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.ikepolicy.id', 'payload.id']
- event_type: ipsec_site_connection.*
  traits:
    <<: *network_traits
    resource_id:
      fields: ['payload.ipsec_site_connection.id', 'payload.id']
- event_type: '*http.*'
  traits: &http_audit
    project_id:
      fields: payload.initiator.project_id
    user_id:
      fields: payload.initiator.id
    typeURI:
      fields: payload.typeURI
    eventType:
      fields: payload.eventType
    action:
      fields: payload.action
    outcome:
      fields: payload.outcome
    id:
      fields: payload.id
    eventTime:
      fields: payload.eventTime
    requestPath:
      fields: payload.requestPath
    observer_id:
      fields: payload.observer.id
    target_id:
      fields: payload.target.id
    target_typeURI:
      fields: payload.target.typeURI
    target_name:
      fields: payload.target.name
    initiator_typeURI:
      fields: payload.initiator.typeURI
    initiator_id:
      fields: payload.initiator.id
    initiator_name:
      fields: payload.initiator.name
    initiator_host_address:
      fields: payload.initiator.host.address
- event_type: '*http.response'
  traits:
    <<: *http_audit
    reason_code:
      fields: payload.reason.reasonCode
- event_type: ['dns.domain.create', 'dns.domain.update', 'dns.domain.delete']
  traits: &dns_domain_traits
    status:
      fields: payload.status
    retry:
      fields: payload.retry
    description:
      fields: payload.description
    expire:
      fields: payload.expire
    email:
      fields: payload.email
    ttl:
      fields: payload.ttl
    action:
      fields: payload.action
    name:
      fields: payload.name
    resource_id:
      fields: payload.id
    created_at:
      fields: payload.created_at
    updated_at:
      fields: payload.updated_at
    version:
      fields: payload.version
    parent_domain_id:
      fields: parent_domain_id
    serial:
      fields: payload.serial
- event_type: dns.domain.exists
  traits:
    <<: *dns_domain_traits
    audit_period_beginning:
      type: datetime
      fields: payload.audit_period_beginning
    audit_period_ending:
      type: datetime
      fields: payload.audit_period_ending
- event_type: trove.*
  traits: &trove_base_traits
    state:
      fields: payload.state_description
    instance_type:
      fields: payload.instance_type
    user_id:
      fields: payload.user_id
    resource_id:
      fields: payload.instance_id
    instance_type_id:
      fields: payload.instance_type_id
    launched_at:
      type: datetime
      fields: payload.launched_at
    instance_name:
      fields: payload.instance_name
    state:
      fields: payload.state
    nova_instance_id:
      fields: payload.nova_instance_id
    service_id:
      fields: payload.service_id
    created_at:
      type: datetime
      fields: payload.created_at
    region:
      fields: payload.region
- event_type: ['trove.instance.create', 'trove.instance.modify_volume', 'trove.instance.modify_flavor', 'trove.instance.delete']
  traits: &trove_common_traits
    name:
      fields: payload.name
    availability_zone:
      fields: payload.availability_zone
    instance_size:
      type: int
      fields: payload.instance_size
    volume_size:
      type: int
      fields: payload.volume_size
    nova_volume_id:
      fields: payload.nova_volume_id
- event_type: trove.instance.create
  traits:
    <<: [*trove_base_traits, *trove_common_traits]
- event_type: trove.instance.modify_volume
  traits:
    <<: [*trove_base_traits, *trove_common_traits]
    old_volume_size:
      type: int
      fields: payload.old_volume_size
    modify_at:
      type: datetime
      fields: payload.modify_at
- event_type: trove.instance.modify_flavor
  traits:
    <<: [*trove_base_traits, *trove_common_traits]
    old_instance_size:
      type: int
      fields: payload.old_instance_size
    modify_at:
      type: datetime
      fields: payload.modify_at
- event_type: trove.instance.delete
  traits:
    <<: [*trove_base_traits, *trove_common_traits]
    deleted_at:
      type: datetime
      fields: payload.deleted_at
- event_type: trove.instance.exists
  traits:
    <<: *trove_base_traits
    display_name:
      fields: payload.display_name
    audit_period_beginning:
      type: datetime
      fields: payload.audit_period_beginning
    audit_period_ending:
      type: datetime
      fields: payload.audit_period_ending
- event_type: profiler.*
  traits:
    project:
      fields: payload.project
    service:
      fields: payload.service
    name:
      fields: payload.name
    base_id:
      fields: payload.base_id
    trace_id:
      fields: payload.trace_id
    parent_id:
      fields: payload.parent_id
    timestamp:
      fields: payload.timestamp
    host:
      fields: payload.info.host
    path:
      fields: payload.info.request.path
    query:
      fields: payload.info.request.query
    method:
      fields: payload.info.request.method
    scheme:
      fields: payload.info.request.scheme
    db.statement:
      fields: payload.info.db.statement
    db.params:
      fields: payload.info.db.params
- event_type: 'magnum.bay.*'
  traits: &magnum_bay_crud
    id:
      fields: payload.id
    typeURI:
      fields: payload.typeURI
    eventType:
      fields: payload.eventType
    eventTime:
      fields: payload.eventTime
    action:
      fields: payload.action
    outcome:
      fields: payload.outcome
    initiator_id:
      fields: payload.initiator.id
    initiator_typeURI:
      fields: payload.initiator.typeURI
    initiator_name:
      fields: payload.initiator.name
    initiator_host_agent:
      fields: payload.initiator.host.agent
    initiator_host_address:
      fields: payload.initiator.host.address
    target_id:
      fields: payload.target.id
    target_typeURI:
      fields: payload.target.typeURI
    observer_id:
      fields: payload.observer.id
    observer_typeURI:
      fields: payload.observer.typeURI
pipeline.yaml

Pipelines describe a coupling between sources of samples and the corresponding sinks for transformation and publication of the data. They are defined in the pipeline.yaml file.

This file can be modified to adjust polling intervals and the samples generated by the Telemetry module.

---
sources:
    - name: meter_source
      interval: 600
      meters:
          - "*"
      sinks:
          - meter_sink
    - name: cpu_source
      interval: 600
      meters:
          - "cpu"
      sinks:
          - cpu_sink
          - cpu_delta_sink
    - name: disk_source
      interval: 600
      meters:
          - "disk.read.bytes"
          - "disk.read.requests"
          - "disk.write.bytes"
          - "disk.write.requests"
          - "disk.device.read.bytes"
          - "disk.device.read.requests"
          - "disk.device.write.bytes"
          - "disk.device.write.requests"
      sinks:
          - disk_sink
    - name: network_source
      interval: 600
      meters:
          - "network.incoming.bytes"
          - "network.incoming.packets"
          - "network.outgoing.bytes"
          - "network.outgoing.packets"
      sinks:
          - network_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
    - name: cpu_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                target:
                    name: "cpu_util"
                    unit: "%"
                    type: "gauge"
                    scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
      publishers:
          - notifier://
    - name: cpu_delta_sink
      transformers:
          - name: "delta"
            parameters:
                target:
                    name: "cpu.delta"
                growth_only: True
      publishers:
          - notifier://
    - name: disk_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                source:
                    map_from:
                        name: "(disk\\.device|disk)\\.(read|write)\\.(bytes|requests)"
                        unit: "(B|request)"
                target:
                    map_to:
                        name: "\\1.\\2.\\3.rate"
                        unit: "\\1/s"
                    type: "gauge"
      publishers:
          - notifier://
    - name: network_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                source:
                   map_from:
                       name: "network\\.(incoming|outgoing)\\.(bytes|packets)"
                       unit: "(B|packet)"
                target:
                    map_to:
                        name: "network.\\1.\\2.rate"
                        unit: "\\1/s"
                    type: "gauge"
      publishers:
          - notifier://
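As an illustration of adjusting polling intervals, the cpu_source entry above could be changed to poll every 60 seconds instead of 600 (a sketch; the rest of the file stays unchanged):

```yaml
sources:
    - name: cpu_source
      interval: 60          # poll CPU meters every minute instead of every 10 minutes
      meters:
          - "cpu"
      sinks:
          - cpu_sink
          - cpu_delta_sink
```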
event_pipeline.yaml

Event pipelines describe a coupling between notification event_types and the corresponding sinks for publication of the event data. They are defined in the event_pipeline.yaml file.

This file can be modified to adjust which notifications to capture and where to publish the events.

---
sources:
    - name: event_source
      events:
          - "*"
      sinks:
          - event_sink
sinks:
    - name: event_sink
      transformers:
      publishers:
          - notifier://
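For example, to capture only Compute instance notifications rather than all events, the source's events list can be narrowed (a sketch; the event names follow the wildcard event_type patterns used in event_definitions.yaml):

```yaml
sources:
    - name: compute_event_source
      events:
          - "compute.instance.*"   # capture only Compute instance events
      sinks:
          - event_sink
sinks:
    - name: event_sink
      transformers:
      publishers:
          - notifier://
```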
policy.json

The policy.json file defines additional access controls that apply to the Telemetry service.

{
    "context_is_admin": "role:admin",
    "segregation": "rule:context_is_admin",

    "telemetry:get_samples": "",
    "telemetry:get_sample": "",
    "telemetry:query_sample": "",
    "telemetry:create_samples": "",

    "telemetry:compute_statistics": "",
    "telemetry:get_meters": "",

    "telemetry:get_resource": "",
    "telemetry:get_resources": "",

    "telemetry:events:index": "",
    "telemetry:events:show": ""
}

New, updated, and deprecated options in Newton for Alarming

New options
Option = default value (Type) Help string
[DEFAULT] additional_ingestion_lag = 0 (IntOpt) The number of seconds to extend the evaluation windows to compensate the reporting/ingestion lag.
[DEFAULT] rest_notifier_ca_bundle_certificate_path = None (StrOpt) SSL CA_BUNDLE certificate for REST notifier
[api] alarm_max_actions = -1 (IntOpt) Maximum count of actions for each state of an alarm, non-positive number means no limit.
[api] enable_combination_alarms = False (BoolOpt) Enable deprecated combination alarms.
[api] project_alarm_quota = None (IntOpt) Maximum number of alarms defined for a project.
[api] user_alarm_quota = None (IntOpt) Maximum number of alarms defined for a user.
[evaluator] workers = 1 (IntOpt) Number of workers for evaluator service. Default value is 1.
[listener] batch_size = 1 (IntOpt) Number of notification messages to wait before dispatching them.
[listener] batch_timeout = None (IntOpt) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely).
[listener] event_alarm_topic = alarm.all (StrOpt) The topic that aodh uses for event alarm evaluation.
[listener] workers = 1 (IntOpt) Number of workers for listener service. Default value is 1.
[notifier] batch_size = 1 (IntOpt) Number of notification messages to wait before dispatching them.
[notifier] batch_timeout = None (IntOpt) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely).
[notifier] workers = 1 (IntOpt) Number of workers for notifier service. Default value is 1.
[service_types] zaqar = messaging (StrOpt) Message queue service type.
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None

New, updated, and deprecated options in Newton for Telemetry

New options
Option = default value (Type) Help string
[api] panko_is_enabled = None (BoolOpt) Set True to redirect events URLs to Panko. Default autodetection by querying keystone.
[api] panko_url = None (StrOpt) The endpoint of Panko to redirect events URLs to Panko API. Default autodetection by querying keystone.
[coordination] max_retry_interval = 30 (IntOpt) Maximum number of seconds between retry to join partitioning group
[coordination] retry_backoff = 1 (IntOpt) Retry backoff factor when retrying to connect with coordination backend
[database] sql_expire_samples_only = False (BoolOpt) Indicates if expirer expires only samples. If set true, expired samples will be deleted, but residual resource and meter definition data will remain.
[dispatcher_http] verify_ssl = None (StrOpt) The path to a server certificate or directory if the system CAs are not used or if a self-signed certificate is used. Set to False to ignore SSL cert verification.
[hardware] readonly_user_auth_proto = None (StrOpt) SNMPd v3 authentication algorithm of all the nodes running in the cloud
[hardware] readonly_user_priv_password = None (StrOpt) SNMPd v3 encryption password of all the nodes running in the cloud.
[hardware] readonly_user_priv_proto = None (StrOpt) SNMPd v3 encryption algorithm of all the nodes running in the cloud
New default values
Option Previous default value New default value
[DEFAULT] event_dispatchers ['database'] []
[DEFAULT] host localhost <your_hostname>
[notification] batch_size 1 100
[notification] batch_timeout None 5
Deprecated options
Deprecated option New Option
[DEFAULT] use_syslog None
[hyperv] force_volumeutils_v1 None

The Telemetry service collects measurements within OpenStack. Its various agents and services are configured in the /etc/ceilometer/ceilometer.conf file.
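For example, the notification batching options listed in the tables above are set in /etc/ceilometer/ceilometer.conf in the usual INI form (values are illustrative):

```ini
[DEFAULT]
# Leave debug logging off in production.
debug = false

[notification]
# Newton defaults: dispatch notification messages in batches of
# up to 100, waiting at most 5 seconds for a batch to fill.
batch_size = 100
batch_timeout = 5
```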

To install Telemetry, see the Newton Installation Tutorials and Guides for your distribution.

Note

The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.

Appendix

The policy.json file

Each OpenStack service, Identity, Compute, Networking and so on, has its own role-based access policies. They determine which user can access which objects in which way, and are defined in the service’s policy.json file.

Whenever an API call to an OpenStack service is made, the service’s policy engine uses the appropriate policy definitions to determine if the call can be accepted. Any changes to policy.json are effective immediately, which allows new policies to be implemented while the service is running.

A policy.json file is a text file in JSON (JavaScript Object Notation) format. Each policy is defined by a one-line statement in the form "<target>" : "<rule>".

The policy target, also named “action”, represents an API call like “start an instance” or “attach a volume”.

Action names are usually qualified. Example: OpenStack Compute features API calls to list instances, volumes and networks. In /etc/nova/policy.json, these APIs are represented by compute:get_all, volume:get_all and network:get_all, respectively.

The mapping between API calls and actions is not generally documented.

The policy rule determines under which circumstances the API call is permitted. Usually this involves the user who makes the call (hereafter named the “API user”) and often the object on which the API call operates. A typical rule checks if the API user is the object’s owner.

Warning

Modifying the policy

While recipes for editing policy.json files are found on blogs, modifying the policy can have unexpected side effects and is not encouraged.

Examples

A simple rule might look like this:

"compute:get_all" : ""

The target is "compute:get_all", the “list all instances” API of the Compute service. The rule is an empty string meaning “always”. This policy allows anybody to list instances.

You can also decline permission to use an API:

"compute:shelve": "!"

The exclamation mark stands for “never” or “nobody”, which effectively disables the Compute API “shelve an instance”.

Many APIs can only be called by admin users. This can be expressed by the rule "role:admin". The following policy ensures that only administrators can create new users in the Identity database:

"identity:create_user" : "role:admin"

You can limit APIs to any role. For example, the Orchestration service defines a role named heat_stack_user. Whoever has this role isn’t allowed to create stacks:

"stacks:create": "not role:heat_stack_user"

This rule makes use of the boolean operator not. More complex rules can be built using operators and, or and parentheses.

You can define aliases for rules:

"deny_stack_user": "not role:heat_stack_user"

The policy engine understands that "deny_stack_user" is not an API and consequently interprets it as an alias. The stack creation policy above can then be written as:

"stacks:create": "rule:deny_stack_user"

This is taken verbatim from /etc/heat/policy.json.

Rules can compare API attributes to object attributes. For example:

"os_compute_api:servers:start" : "project_id:%(project_id)s"

states that only the owner of an instance can start it up. The project_id string before the colon is an API attribute, namely the project ID of the API user. It is compared with the project ID of the object (in this case, an instance); more precisely, it is compared with the project_id field of that object in the database. If the two values are equal, permission is granted.

An admin user always has permission to call APIs. This is how /etc/keystone/policy.json makes this policy explicit:

"admin_required": "role:admin or is_admin:1",
"owner" : "user_id:%(user_id)s",
"admin_or_owner": "rule:admin_required or rule:owner",
"identity:change_password": "rule:admin_or_owner"

The first line defines an alias for “user is an admin user”. The is_admin flag is only used when setting up the Identity service for the first time. It indicates that the user has admin privileges granted by the service token (--os-token parameter of the keystone command line client).

The second line creates an alias for “user owns the object” by comparing the API’s user ID with the object’s user ID.

Line 3 defines a third alias admin_or_owner, combining the two first aliases with the Boolean operator or.

Line 4 sets up the policy that a password can only be modified by its owner or an admin user.

As a final example, let’s examine a more complex rule:

"identity:ec2_delete_credential": "rule:admin_required or
             (rule:owner and user_id:%(target.credential.user_id)s)"

This rule determines who can use the Identity API “delete EC2 credential”. Here, boolean operators and parentheses combine three simpler rules. admin_required and owner are the same aliases as in the previous example. user_id:%(target.credential.user_id)s compares the API user with the user ID of the credential object associated with the target.

Syntax

A policy.json file consists of policies and aliases of the form target:rule or alias:definition, separated by commas and enclosed in curly braces:

{
    "alias 1" : "definition 1",
    "alias 2" : "definition 2",
    ...
    "target 1" : "rule 1",
    "target 2" : "rule 2",
    ...
}

Targets are APIs and are written "service:API" or simply "API". For example, "compute:create" or "add_image".

Rules determine whether the API call is allowed.

Rules can be:

  • always true. The action is always permitted. This can be written as "" (empty string), [], or "@".
  • always false. The action is never permitted. Written as "!".
  • a special check
  • a comparison of two values
  • boolean expressions based on simpler rules

Special checks are

  • <role>:<role name>, a test of whether the API credentials contain this role.
  • <rule>:<rule name>, the definition of an alias.
  • http:<target URL>, which delegates the check to a remote server. The API is authorized when the server returns True.

Developers can define additional special checks.

Two values are compared in the following way:

"value1 : value2"

Possible values are

  • constants: Strings, numbers, true, false
  • API attributes
  • target object attributes
  • the flag is_admin

API attributes can be project_id, user_id or domain_id.

Target object attributes are fields from the object description in the database. For example, in the case of the "compute:start" API, the object is the instance to be started. The policy for starting instances could use the %(project_id)s attribute, that is, the project that owns the instance. The trailing s indicates this is a string.

is_admin indicates that administrative privileges are granted via the admin token mechanism (the --os-token option of the keystone command). The admin token allows initialization of the Identity database before the admin role exists.

The alias construct exists for convenience. An alias is a short name for a complex or hard-to-understand rule. It is defined in the same way as a policy:

alias name : alias definition

Once an alias is defined, use the rule keyword to use it in a policy rule.
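For example, an alias and a policy that uses it through the rule keyword (a minimal sketch mirroring the heat example above):

```json
{
    "deny_stack_user": "not role:heat_stack_user",
    "stacks:create": "rule:deny_stack_user"
}
```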

Older syntax

You may encounter older policy.json files that feature a different syntax, where JavaScript arrays are used instead of boolean operators. For example, the EC2 credentials rule above would have been written as follows:

"identity:ec2_delete_credential": [ [ "rule:admin_required" ],
             [ "rule:owner", "user_id:%(target.credential.user_id)s" ] ]

The rule is an array of arrays. The innermost arrays are or’ed together, whereas elements inside the innermost arrays are and’ed.

While the old syntax is still supported, we recommend using the newer, more intuitive syntax.

Firewalls and default ports

On some deployments, such as ones where restrictive firewalls are in place, you might need to manually configure a firewall to permit OpenStack service traffic.

To manually configure a firewall, you must permit traffic through the ports that each OpenStack service uses. This table lists the default ports that each OpenStack service uses:

Default ports that OpenStack components use
OpenStack service Default ports Port type
Application Catalog (murano) 8082  
Block Storage (cinder) 8776 publicurl and adminurl
Compute (nova) endpoints 8774 publicurl and adminurl
Compute API (nova-api) 8773, 8775  
Compute ports for access to virtual machine consoles 5900-5999  
Compute VNC proxy for browsers (openstack-nova-novncproxy) 6080  
Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy) 6081  
Proxy port for HTML5 console used by Compute service 6082  
Data processing service (sahara) endpoint 8386 publicurl and adminurl
Identity service (keystone) administrative endpoint 35357 adminurl
Identity service public endpoint 5000 publicurl
Image service (glance) API 9292 publicurl and adminurl
Image service registry 9191  
Networking (neutron) 9696 publicurl and adminurl
Object Storage (swift) 6000, 6001, 6002  
Orchestration (heat) endpoint 8004 publicurl and adminurl
Orchestration AWS CloudFormation-compatible API (openstack-heat-api-cfn) 8000  
Orchestration AWS CloudWatch-compatible API (openstack-heat-api-cloudwatch) 8003  
Telemetry (ceilometer) 8777 publicurl and adminurl

To function properly, some OpenStack components depend on other, non-OpenStack services. For example, the OpenStack dashboard uses HTTP for non-secure communication. In this case, you must configure the firewall to allow HTTP traffic (port 80) to and from the dashboard host.

This table lists the ports that other OpenStack components use:

Default ports that secondary services related to OpenStack components use
Service Default port Used by
HTTP 80 OpenStack dashboard (Horizon) when it is not configured to use secure access.
HTTP alternate 8080 OpenStack Object Storage (swift) service.
HTTPS 443 Any OpenStack service that is enabled for SSL, especially secure-access dashboard.
rsync 873 OpenStack Object Storage. Required.
iSCSI target 3260 OpenStack Block Storage. Required.
MySQL database service 3306 Most OpenStack components.
Message Broker (AMQP traffic) 5672 OpenStack Block Storage, Networking, Orchestration, and Compute.
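As a sketch of permitting this traffic with iptables (an assumption; your deployment may instead use firewalld, ufw, or security groups, and the ports should match your actual endpoints):

```
# Allow the Identity public endpoint (5000) and the AMQP broker (5672).
# Run as root; the default INPUT chain is an assumption.
iptables -A INPUT -p tcp --dport 5000 -j ACCEPT
iptables -A INPUT -p tcp --dport 5672 -j ACCEPT
```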

On some deployments, the default port used by a service may fall within the defined local port range of a host. To check a host’s local port range:

$ sysctl net.ipv4.ip_local_port_range

If a service’s default port falls within this range, run the following command to check whether the port has already been assigned to another application:

$ lsof -i :PORT

Configure the service to use a different port if the default port is already being used by another application.
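The two checks above can be combined into a small script. This sketch uses hard-coded example values in place of live sysctl and lsof output (the range shown is a common Linux default; 35357 is the Identity administrative port from the table above):

```shell
# On a real host, read the range with: sysctl net.ipv4.ip_local_port_range
range="32768 60999"   # example value; a common Linux default
port=35357            # Identity service administrative endpoint

low=${range%% *}      # first field: lowest ephemeral port
high=${range##* }     # last field: highest ephemeral port

if [ "$port" -ge "$low" ] && [ "$port" -le "$high" ]; then
    inside=yes        # risk of collision with ephemeral ports; verify with lsof
else
    inside=no
fi
echo "port $port inside local port range: $inside"
```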

Community support

The following resources are available to help you run and use OpenStack. The OpenStack community constantly improves and adds to the main features of OpenStack, but if you have any questions, do not hesitate to ask. Use the following resources to get OpenStack support and troubleshoot your installations.

Documentation

For the available OpenStack documentation, see docs.openstack.org.

To provide feedback on documentation, join and use the openstack-docs@lists.openstack.org mailing list at OpenStack Documentation Mailing List, or report a bug.

ask.openstack.org

During setup or testing of OpenStack, you might have questions about how a specific task is completed, or a feature might not work correctly. Use the ask.openstack.org site to ask questions and get answers. When you visit https://ask.openstack.org, scan the recently asked questions to see whether your question has already been answered. If not, ask a new question. Be sure to give a clear, concise summary in the title, and provide as much detail as possible in the description. Paste in your command output or stack traces, links to screenshots, and any other information that might be useful.

OpenStack mailing lists

A great way to get answers and insights is to post your question or problematic scenario to the OpenStack mailing list. You can learn from and help others who might have similar issues. To subscribe or view the archives, go to http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack. If you are interested in the other mailing lists for specific projects or development, refer to Mailing Lists.

The OpenStack wiki

The OpenStack wiki contains a broad range of topics but some of the information can be difficult to find or is a few pages deep. Fortunately, the wiki search feature enables you to search by title or content. If you search for specific information, such as about networking or OpenStack Compute, you can find a large amount of relevant material. More is being added all the time, so be sure to check back often. You can find the search box in the upper-right corner of any OpenStack wiki page.

The Launchpad Bugs area

The OpenStack community values your setup and testing efforts and wants your feedback. To log a bug, you must sign up for a Launchpad account at https://launchpad.net/+login. You can view existing bugs and report bugs in the Launchpad Bugs area. Use the search feature to determine whether the bug has already been reported or already been fixed. If it still seems like your bug is unreported, fill out a bug report.

Some tips:

  • Give a clear, concise summary.
  • Provide as much detail as possible in the description. Paste in your command output or stack traces, links to screenshots, and any other information which might be useful.
  • Be sure to include the software and package versions that you are using, especially if you are using a development branch, such as "Kilo release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208.
  • Any deployment-specific information is helpful, such as whether you are using Ubuntu 14.04 or are performing a multi-node installation.

The following Launchpad Bugs areas are available:

The OpenStack IRC channel

The OpenStack community lives in the #openstack IRC channel on the Freenode network. You can hang out, ask questions, or get immediate feedback for urgent and pressing issues. To install an IRC client or use a browser-based client, go to https://webchat.freenode.net/. You can also use Colloquy (Mac OS X, http://colloquy.info/), mIRC (Windows, http://www.mirc.com/), or XChat (Linux). When you are in the IRC channel and want to share code or command output, the generally accepted method is to use a Paste Bin. The OpenStack project has one at http://paste.openstack.org. Just paste your longer amounts of text or logs in the web form and you get a URL that you can paste into the channel. You can find a list of all OpenStack IRC channels at https://wiki.openstack.org/wiki/IRC.

OpenStack distribution packages

The following Linux distributions provide community-supported packages for OpenStack:

Glossary

Glossary

This glossary offers a list of terms and definitions to define a vocabulary for OpenStack-related concepts.

To add to the OpenStack glossary, clone the openstack/openstack-manuals repository and update the source file doc/common/glossary.rst through the OpenStack contribution process.

0-9

6to4
A mechanism that allows IPv6 packets to be transmitted over an IPv4 network, providing a strategy for migrating to IPv6.

A

absolute limit
Impassable limits for guest VMs. Settings include total RAM size, maximum number of vCPUs, and maximum disk size.
access control list (ACL)
A list of permissions attached to an object. An ACL specifies which users or system processes have access to objects. It also defines which operations can be performed on specified objects. Each entry in a typical ACL specifies a subject and an operation. For instance, the ACL entry (Alice, delete) for a file gives Alice permission to delete the file.
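The (subject, operation) structure of ACL entries described above can be sketched in a few lines of Python (a toy illustration only, not OpenStack code; the names and data are made up):

```python
# Toy ACL model: each entry pairs a subject with a permitted operation.
acl = [
    ("alice", "delete"),
    ("alice", "read"),
    ("bob", "read"),
]

def is_allowed(acl, subject, operation):
    """Return True if the ACL grants the subject permission for the operation."""
    return (subject, operation) in acl

print(is_allowed(acl, "alice", "delete"))  # True
print(is_allowed(acl, "bob", "delete"))    # False
```

Here the entry ("alice", "delete") plays the same role as the (Alice, delete) example in the definition: it grants that one subject that one operation.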
access key
Alternative term for an Amazon EC2 access key. See EC2 access key.
account
The Object Storage context of an account. Do not confuse with a user account from an authentication service, such as Active Directory, /etc/passwd, OpenLDAP, OpenStack Identity, and so on.
account auditor
Checks for missing replicas and incorrect or corrupted objects in a specified Object Storage account by running queries against the back-end SQLite database.
account database
A SQLite database that contains Object Storage accounts and related metadata and that the accounts server accesses.
account reaper
An Object Storage worker that scans for and deletes account databases that the account server has marked for deletion.
account server
Lists containers in Object Storage and stores container information in the account database.
account service
An Object Storage component that provides account services such as list, create, modify, and audit. Do not confuse with OpenStack Identity service, OpenLDAP, or similar user-account services.
accounting
The Compute service provides accounting information through the event notification and system usage data facilities.
active/active configuration
In a high-availability setup with an active/active configuration, several systems share the load together and if one fails, the load is distributed to the remaining systems.
Active Directory
Authentication and identity service by Microsoft, based on LDAP. Supported in OpenStack.
active/passive configuration
In a high-availability setup with an active/passive configuration, systems are set up to bring additional resources online to replace those that have failed.
address pool
A group of fixed and/or floating IP addresses that are assigned to a project and can be used by or assigned to the VM instances in a project.
admin API
A subset of API calls that are accessible to authorized administrators and are generally not accessible to end users or the public Internet. They can exist as a separate service (keystone) or can be a subset of another API (nova).
administrator
The person responsible for installing, configuring, and managing an OpenStack cloud.
admin server
In the context of the Identity service, the worker process that provides access to the admin API.
Advanced Message Queuing Protocol (AMQP)
The open standard messaging protocol used by OpenStack components for intra-service communications, provided by RabbitMQ, Qpid, or ZeroMQ.
Advanced RISC Machine (ARM)
Lower power consumption CPU often found in mobile and embedded devices. Supported by OpenStack.
alert
The Compute service can send alerts through its notification system, which includes a facility to create custom notification drivers. Alerts can be sent to and displayed on the horizon dashboard.
allocate
The process of taking a floating IP address from the address pool so it can be associated with a fixed IP on a guest VM instance.
Amazon Kernel Image (AKI)
Both a VM container format and disk format. Supported by Image service.
Amazon Machine Image (AMI)
Both a VM container format and disk format. Supported by Image service.
Amazon Ramdisk Image (ARI)
Both a VM container format and disk format. Supported by Image service.
Anvil
A project that ports the shell script-based project named DevStack to Python.
aodh
Part of the OpenStack Telemetry service; provides alarming functionality.
Apache
The Apache Software Foundation supports the Apache community of open-source software projects. These projects provide software products for the public good.
Apache License 2.0
All OpenStack core projects are provided under the terms of the Apache License 2.0 license.
Apache Web Server
The most common web server software currently used on the Internet.
API endpoint
The daemon, worker, or service that a client communicates with to access an API. API endpoints can provide any number of services, such as authentication, sales data, performance meters, Compute VM commands, census data, and so on.
API extension
Custom modules that extend some OpenStack core APIs.
API extension plug-in
Alternative term for a Networking plug-in or Networking API extension.
API key
Alternative term for an API token.
API server
Any node running a daemon or worker that provides an API endpoint.
API token
Passed to API requests and used by OpenStack to verify that the client is authorized to run the requested operation.
API version
In OpenStack, the API version for a project is part of the URL. For example, example.com/nova/v1/foobar.
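As a minimal sketch, the version segment can be extracted from such a URL with the Python standard library (the URL is the hypothetical one from the definition above):

```python
from urllib.parse import urlparse

# Hypothetical versioned endpoint: the version segment follows the
# service name in the path, as in example.com/nova/v1/foobar.
url = "http://example.com/nova/v1/foobar"
segments = urlparse(url).path.strip("/").split("/")
service, version = segments[0], segments[1]
print(service, version)  # nova v1
```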
applet
A Java program that can be embedded into a web page.
Application Catalog service (murano)
The project that provides an application catalog service so that users can compose and deploy composite environments on an application abstraction level while managing the application lifecycle.
Application Programming Interface (API)
A collection of specifications used to access a service, application, or program. Includes service calls, required parameters for each call, and the expected return values.
application server
A piece of software that makes available another piece of software over a network.
Application Service Provider (ASP)
Companies that rent specialized applications that help businesses and organizations provide additional services with lower cost.
Address Resolution Protocol (ARP)
The protocol by which layer-3 IP addresses are resolved into layer-2 link local addresses.
arptables
Tool used for maintaining Address Resolution Protocol packet filter rules in the Linux kernel firewall modules. Used along with iptables, ebtables, and ip6tables in Compute to provide firewall services for VMs.
associate
The process of associating a Compute floating IP address with a fixed IP address.
Asynchronous JavaScript and XML (AJAX)
A group of interrelated web development techniques used on the client-side to create asynchronous web applications. Used extensively in horizon.
ATA over Ethernet (AoE)
A disk storage protocol tunneled within Ethernet.
attach
The process of connecting a VIF or vNIC to a L2 network in Networking. In the context of Compute, this process connects a storage volume to an instance.
attachment (network)
Association of an interface ID to a logical port. Plugs an interface into a port.
auditing
Provided in Compute through the system usage data facility.
auditor
A worker process that verifies the integrity of Object Storage objects, containers, and accounts. Auditors is the collective term for the Object Storage account auditor, container auditor, and object auditor.
Austin
The code name for the initial release of OpenStack. The first design summit took place in Austin, Texas, US.
auth node
Alternative term for an Object Storage authorization node.
authentication
The process that confirms that the user, process, or client is really who they say they are through private key, secret token, password, fingerprint, or similar method.
authentication token
A string of text provided to the client after authentication. Must be provided by the user or process in subsequent requests to the API endpoint.
AuthN
The Identity service component that provides authentication services.
authorization
The act of verifying that a user, process, or client is authorized to perform an action.
authorization node
An Object Storage node that provides authorization services.
AuthZ
The Identity component that provides high-level authorization services.
Auto ACK
Configuration setting within RabbitMQ that enables or disables message acknowledgment. Enabled by default.
auto declare
A Compute RabbitMQ setting that determines whether a message exchange is automatically created when the program starts.
availability zone
An Amazon EC2 concept of an isolated area that is used for fault tolerance. Do not confuse with an OpenStack Compute zone or cell.
AWS CloudFormation template
AWS CloudFormation allows Amazon Web Services (AWS) users to create and manage a collection of related resources. The Orchestration service supports a CloudFormation-compatible format (CFN).

B

back end
Interactions and processes that are obfuscated from the user, such as Compute volume mount, data transmission to an iSCSI target by a daemon, or Object Storage object integrity checks.
back-end catalog
The storage method used by the Identity service catalog service to store and retrieve information about API endpoints that are available to the client. Examples include an SQL database, LDAP database, or KVS back end.
back-end store
The persistent data store used to save and retrieve information for a service, such as lists of Object Storage objects, current state of guest VMs, lists of user names, and so on. Also, the method that the Image service uses to get and store VM images. Options include Object Storage, locally mounted file system, RADOS block devices, VMware datastore, and HTTP.
backup restore and disaster recovery as a service
The OpenStack project that provides integrated tooling for backing up, restoring, and recovering file systems, instances, or database backups. The project name is freezer.
bandwidth
The amount of data that a communication resource, such as an Internet connection, can transfer in a given time. Commonly used to describe how much data can be downloaded, or how quickly.
barbican
Code name of the Key Manager service.
bare
An Image service container format that indicates that no container exists for the VM image.
Bare Metal service (ironic)
The OpenStack service that provides a service and associated libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner.
base image
An OpenStack-provided image.
Bell-LaPadula model
A security model that focuses on data confidentiality and controlled access to classified information. This model divides the entities into subjects and objects. The clearance of a subject is compared to the classification of the object to determine if the subject is authorized for the specific access mode. The clearance or classification scheme is expressed in terms of a lattice.
Benchmark service (rally)
OpenStack project that provides a framework for performance analysis and benchmarking of individual OpenStack components as well as full production OpenStack cloud deployments.
Bexar
A grouped release of projects related to OpenStack that came out in February of 2011. It included only Compute (nova) and Object Storage (swift). Bexar is the code name for the second release of OpenStack. The design summit took place in San Antonio, Texas, US, which is the county seat for Bexar county.
binary
Information that consists solely of ones and zeroes, which is the language of computers.
bit
A bit is a single-digit number in base 2 (either a zero or a one). Bandwidth usage is measured in bits per second.
bits per second (BPS)
The universal measurement of how quickly data is transferred from place to place.
block device
A device that moves data in the form of blocks. These device nodes interface the devices, such as hard disks, CD-ROM drives, flash drives, and other addressable regions of memory.
block migration
A method of VM live migration used by KVM to evacuate instances from one host to another with very little downtime during a user-initiated switchover. Does not require shared storage. Supported by Compute.
Block Storage service (cinder)
The OpenStack service that implements services and libraries to provide on-demand, self-service access to Block Storage resources via abstraction and automation on top of other block storage devices.
Block Storage API
An API on a separate endpoint for attaching, detaching, and creating block storage for compute VMs.
BMC (Baseboard Management Controller)
The intelligence in the IPMI architecture, which is a specialized micro-controller that is embedded on the motherboard of a computer and acts as a server. Manages the interface between system management software and platform hardware.
bootable disk image
A type of VM image that exists as a single, bootable file.
Bootstrap Protocol (BOOTP)
A network protocol used by a network client to obtain an IP address from a configuration server. Provided in Compute through the dnsmasq daemon when using either the FlatDHCP manager or VLAN manager network manager.
Border Gateway Protocol (BGP)
The Border Gateway Protocol is a dynamic routing protocol that connects autonomous systems. Considered the backbone of the Internet, this protocol connects disparate networks to form a larger network.
browser
Any client software that enables a computer or device to access the Internet.
builder file
Contains configuration information that Object Storage uses to reconfigure a ring or to re-create it from scratch after a serious failure.
bursting
The practice of utilizing a secondary environment to elastically build instances on-demand when the primary environment is resource constrained.
button class
A group of related button types within horizon. Buttons to start, stop, and suspend VMs are in one class. Buttons to associate and disassociate floating IP addresses are in another class, and so on.
byte
Set of bits that make up a single character; there are usually 8 bits to a byte.

C

cache pruner
A program that keeps the Image service VM image cache at or below its configured maximum size.
Cactus
An OpenStack grouped release of projects that came out in the spring of 2011. It included Compute (nova), Object Storage (swift), and the Image service (glance). Cactus is a city in Texas, US and is the code name for the third release of OpenStack. When OpenStack releases went from three to six months long, the code name of the release changed to match a geography nearest the previous summit.
CALL
One of the RPC primitives used by the OpenStack message queue software. Sends a message and waits for a response.
capability
Defines resources for a cell, including CPU, storage, and networking. Can apply to the specific services within a cell or a whole cell.
capacity cache
A Compute back-end database table that contains the current workload, amount of free RAM, and number of VMs running on each host. Used to determine on which host a VM starts.
capacity updater
A notification driver that monitors VM instances and updates the capacity cache as needed.
CAST
One of the RPC primitives used by the OpenStack message queue software. Sends a message and does not wait for a response.
catalog
A list of API endpoints that are available to a user after authentication with the Identity service.
catalog service
An Identity service that lists API endpoints that are available to a user after authentication with the Identity service.
ceilometer
Part of the OpenStack Telemetry service; gathers and stores metrics from other OpenStack services.
cell
Provides logical partitioning of Compute resources in a child and parent relationship. Requests are passed from parent cells to child cells if the parent cannot provide the requested resource.
cell forwarding
A Compute option that enables parent cells to pass resource requests to child cells if the parent cannot provide the requested resource.
cell manager
The Compute component that contains a list of the current capabilities of each host within the cell and routes requests as appropriate.
CentOS
A Linux distribution that is compatible with OpenStack.
Ceph
Massively scalable distributed storage system that consists of an object store, block store, and POSIX-compatible distributed file system. Compatible with OpenStack.
CephFS
The POSIX-compliant file system provided by Ceph.
certificate authority (CA)
In cryptography, an entity that issues digital certificates. The digital certificate certifies the ownership of a public key by the named subject of the certificate. This enables others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the certified public key. In this model of trust relationships, a CA is a trusted third party for both the subject (owner) of the certificate and the party relying upon the certificate. CAs are characteristic of many public key infrastructure (PKI) schemes. In OpenStack, a simple certificate authority is provided by Compute for cloudpipe VPNs and VM image decryption.
Challenge-Handshake Authentication Protocol (CHAP)
An iSCSI authentication method supported by Compute.
chance scheduler
A scheduling method used by Compute that randomly chooses an available host from the pool.
changes since
A Compute API parameter that downloads changes to the requested item since your last request, instead of downloading a new, fresh set of data and comparing it against the old data.
Chef
An operating system configuration management tool supporting OpenStack deployments.
child cell
If a requested resource such as CPU time, disk storage, or memory is not available in the parent cell, the request is forwarded to its associated child cells. If the child cell can fulfill the request, it does. Otherwise, it attempts to pass the request to any of its children.
cinder
Codename for Block Storage service.
CirrOS
A minimal Linux distribution designed for use as a test image on clouds such as OpenStack.
Cisco neutron plug-in
A Networking plug-in for Cisco devices and technologies, including UCS and Nexus.
cloud architect
A person who plans, designs, and oversees the creation of clouds.
Cloud Auditing Data Federation (CADF)
Cloud Auditing Data Federation (CADF) is a specification for audit event data. CADF is supported by OpenStack Identity.
cloud computing
A model that enables access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction.
cloud controller
Collection of Compute components that represent the global state of the cloud; talks to services, such as Identity authentication, Object Storage, and node/storage workers through a queue.
cloud controller node
A node that runs network, volume, API, scheduler, and image services. Each service may be broken out into separate nodes for scalability or availability.
Cloud Data Management Interface (CDMI)
SINA standard that defines a RESTful API for managing objects in the cloud, currently unsupported in OpenStack.
Cloud Infrastructure Management Interface (CIMI)
An in-progress specification for cloud management. Currently unsupported in OpenStack.
cloud-init
A package commonly installed in VM images that performs initialization of an instance after boot using information that it retrieves from the metadata service, such as the SSH public key and user data.
cloudadmin
One of the default roles in the Compute RBAC system. Grants complete system access.
Cloudbase-Init
A Windows project providing guest initialization features, similar to cloud-init.
cloudpipe
A compute service that creates VPNs on a per-project basis.
cloudpipe image
A pre-made VM image that serves as a cloudpipe server. Essentially, OpenVPN running on Linux.
Clustering service
The OpenStack project that implements clustering services and libraries for the management of groups of homogeneous objects exposed by other OpenStack services. The project name of Clustering service is senlin.
congress
OpenStack project that provides the Governance service.
command filter
Lists allowed commands within the Compute rootwrap facility.
Common Internet File System (CIFS)
A file sharing protocol. It is a public or open variation of the original Server Message Block (SMB) protocol developed and used by Microsoft. Like the SMB protocol, CIFS runs at a higher level and uses the TCP/IP protocol.
community project
A project that is not officially endorsed by the OpenStack Foundation. If the project is successful enough, it might be elevated to an incubated project and then to a core project, or it might be merged with the main code trunk.
compression
The reduction in size of files by special encoding; the file can be decompressed again to its original content. OpenStack supports compression at the Linux file system level but does not support compression for things such as Object Storage objects or Image service VM images.
Compute service (nova)
The OpenStack core project that implements services and associated libraries to provide massively-scalable, on-demand, self-service access to compute resources, including bare metal, virtual machines, and containers.
Compute API (Nova API)
The nova-api daemon provides access to nova services. Can communicate with other APIs, such as the Amazon EC2 API.
compute controller
The Compute component that chooses suitable hosts on which to start VM instances.
compute host
Physical host dedicated to running compute nodes.
compute node
A node that runs the nova-compute daemon that manages VM instances that provide a wide range of services, such as web applications and analytics.
Compute service
Name for the Compute component that manages VMs.
compute worker
The Compute component that runs on each compute node and manages the VM instance lifecycle, including run, reboot, terminate, attach/detach volumes, and so on. Provided by the nova-compute daemon.
concatenated object
A set of segment objects that Object Storage combines and sends to the client.
conductor
In Compute, conductor is the process that proxies database requests from the compute process. Using conductor improves security because compute nodes do not need direct access to the database.
consistency window
The amount of time it takes for a new Object Storage object to become accessible to all clients.
console log
Contains the output from a Linux VM console in Compute.
container
Organizes and stores objects in Object Storage. Similar to the concept of a Linux directory but cannot be nested. Alternative term for an Image service container format.
container auditor
Checks for missing replicas or incorrect objects in specified Object Storage containers through queries to the SQLite back-end database.
container database
A SQLite database that stores Object Storage containers and container metadata. The container server accesses this database.
container format
A wrapper used by the Image service that contains a VM image and its associated metadata, such as machine state, OS disk size, and so on.
Container Infrastructure Management service
Provides a set of services for provisioning, scaling, and managing container orchestration engines.
container server
An Object Storage server that manages containers.
container service
The Object Storage component that provides container services, such as create, delete, list, and so on.
content delivery network (CDN)
A content delivery network is a specialized network that is used to distribute content to clients, typically located close to the client for increased performance.
controller node
Alternative term for a cloud controller node.
core API
Depending on context, the core API is either the OpenStack API or the main API of a specific core project, such as Compute, Networking, Image service, and so on.
core service
An official OpenStack service defined as core by DefCore Committee. Currently, consists of Block Storage service (cinder), Compute service (nova), Identity service (keystone), Image service (glance), Networking service (neutron), and Object Storage service (swift).
cost
Under the Compute distributed scheduler, this is calculated by looking at the capabilities of each host relative to the flavor of the VM instance being requested.
credentials
Data that is only known to or accessible by a user and used to verify that the user is who he says he is. Credentials are presented to the server during authentication. Examples include a password, secret key, digital certificate, and fingerprint.
Cross-Origin Resource Sharing (CORS)
A mechanism that allows many resources (for example, fonts, JavaScript) on a web page to be requested from another domain outside the domain from which the resource originated. In particular, JavaScript’s AJAX calls can use the XMLHttpRequest mechanism.
Crowbar
An open source community project by Dell that aims to provide all necessary services to quickly deploy clouds.
current workload
An element of the Compute capacity cache that is calculated based on the number of build, snapshot, migrate, and resize operations currently in progress on a given host.
customer
Alternative term for project.
customization module
A user-created Python module that is loaded by horizon to change the look and feel of the dashboard.

D

daemon
A process that runs in the background and waits for requests. May or may not listen on a TCP or UDP port. Do not confuse with a worker.
Dashboard (horizon)
OpenStack project which provides an extensible, unified, web-based user interface for all OpenStack services.
data encryption
Both Image service and Compute support encrypted virtual machine (VM) images (but not instances). In-transit data encryption is supported in OpenStack using technologies such as HTTPS, SSL, TLS, and SSH. Object Storage does not support object encryption at the application level but may support storage that uses disk encryption.
database ID
A unique ID given to each replica of an Object Storage database.
database replicator
An Object Storage component that copies changes in the account, container, and object databases to other nodes.
Database service (trove)
An integrated project that provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines.
Data loss prevention (DLP) software
Software programs used to protect sensitive information and prevent it from leaking outside a network boundary by detecting and denying unauthorized data transfers.
Data Processing service (sahara)
OpenStack project that provides a scalable data-processing stack and associated management interfaces.
data store
A database engine supported by the Database service.
deallocate
The process of removing the association between a floating IP address and a fixed IP address. Once this association is removed, the floating IP returns to the address pool.
Debian
A Linux distribution that is compatible with OpenStack.
deduplication
The process of finding duplicate data at the disk block, file, and/or object level to minimize storage use; currently unsupported within OpenStack.
default panel
The default panel that is displayed when a user accesses the horizon dashboard.
default project
New users are assigned to this project if no project is specified when a user is created.
default token
An Identity service token that is not associated with a specific project and is exchanged for a scoped token.
delayed delete
An option within Image service so that an image is deleted after a predefined number of seconds instead of immediately.
delivery mode
Setting for the Compute RabbitMQ message delivery mode; can be set to either transient or persistent.
denial of service (DoS)
Denial of service (DoS) is a short form for denial-of-service attack. This is a malicious attempt to prevent legitimate users from using a service.
deprecated auth
An option within Compute that enables administrators to create and manage users through the nova-manage command as opposed to using the Identity service.
designate
Code name for the DNS service project for OpenStack.
Desktop-as-a-Service
A platform that provides a suite of desktop environments that users access to receive a desktop experience from any location. This may provide general use, development, or even homogeneous testing environments.
developer
One of the default roles in the Compute RBAC system and the default role assigned to a new user.
device ID
Maps Object Storage partitions to physical storage devices.
device weight
Distributes partitions proportionately across Object Storage devices based on the storage capacity of each device.
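The idea of capacity-proportional distribution can be sketched as follows (an illustrative toy, not the actual Object Storage ring-builder algorithm; the device names and sizes are made up):

```python
# Toy sketch: distribute a fixed number of partitions across devices in
# proportion to each device's storage capacity.
devices = {"sda": 4000, "sdb": 2000, "sdc": 2000}  # capacity in GB (hypothetical)
total_partitions = 256
total_capacity = sum(devices.values())

# Each device receives a share of partitions proportional to its capacity.
partitions = {
    name: round(total_partitions * capacity / total_capacity)
    for name, capacity in devices.items()
}
print(partitions)  # {'sda': 128, 'sdb': 64, 'sdc': 64}
```

The device with twice the capacity ends up holding twice as many partitions, which is the effect the weight mechanism aims for.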
DevStack
Community project that uses shell scripts to quickly build complete OpenStack development environments.
DHCP agent
OpenStack Networking agent that provides DHCP services for virtual networks.
Diablo
A grouped release of projects related to OpenStack that came out in the fall of 2011, the fourth release of OpenStack. It included Compute (nova 2011.3), Object Storage (swift 1.4.3), and the Image service (glance). Diablo is the code name for the fourth release of OpenStack. The design summit took place in the Bay Area near Santa Clara, California, US and Diablo is a nearby city.
direct consumer
An element of the Compute RabbitMQ that comes to life when an RPC call is executed. It connects to a direct exchange through a unique exclusive queue, sends the message, and terminates.
direct exchange
A routing table that is created within the Compute RabbitMQ during RPC calls; one is created for each RPC call that is invoked.
direct publisher
Element of RabbitMQ that provides a response to an incoming MQ message.
disassociate
The process of removing the association between a floating IP address and fixed IP and thus returning the floating IP address to the address pool.
Discretionary Access Control (DAC)
Governs the ability of subjects to access objects, while enabling users to make policy decisions and assign security attributes. The traditional UNIX system of users, groups, and read-write-execute permissions is an example of DAC.
disk encryption
The ability to encrypt data at the file system, disk partition, or whole-disk level. Supported within Compute VMs.
disk format
The underlying format that a disk image for a VM is stored as within the Image service back-end store. For example, AMI, ISO, QCOW2, VMDK, and so on.
dispersion
In Object Storage, tools to test and ensure dispersion of objects and containers to ensure fault tolerance.
distributed virtual router (DVR)
Mechanism for highly available multi-host routing when using OpenStack Networking (neutron).
Django
A web framework used extensively in horizon.
DNS record
A record that specifies information about a particular domain and belongs to the domain.
DNS service
OpenStack project that provides scalable, on demand, self service access to authoritative DNS services, in a technology-agnostic manner. The code name for the project is designate.
dnsmasq
Daemon that provides DNS, DHCP, BOOTP, and TFTP services for virtual networks.
domain
An Identity API v3 entity. Represents a collection of projects, groups and users that defines administrative boundaries for managing OpenStack Identity entities. On the Internet, separates a website from other sites. Often, the domain name has two or more parts that are separated by dots. For example, yahoo.com, usa.gov, harvard.edu, or mail.yahoo.com. Also, a domain is an entity or container of all DNS-related information containing one or more records.
Domain Name System (DNS)
A system by which Internet domain name-to-address and address-to-name resolutions are determined. DNS helps navigate the Internet by translating the IP address into an address that is easier to remember. For example, translating 111.111.111.1 into www.yahoo.com. All domains and their components, such as mail servers, utilize DNS to resolve to the appropriate locations. DNS servers are usually set up in a master-slave relationship such that failure of the master invokes the slave. DNS servers might also be clustered or replicated such that changes made to one DNS server are automatically propagated to other active servers. In Compute, the support that enables associating DNS entries with floating IP addresses, nodes, or cells so that hostnames are consistent across reboots.
download
The transfer of data, usually in the form of files, from one computer to another.
durable exchange
The Compute RabbitMQ message exchange that remains active when the server restarts.
durable queue
A Compute RabbitMQ message queue that remains active when the server restarts.
Dynamic Host Configuration Protocol (DHCP)
A network protocol that configures devices that are connected to a network so that they can communicate on that network by using the Internet Protocol (IP). The protocol is implemented in a client-server model where DHCP clients request configuration data, such as an IP address, a default route, and one or more DNS server addresses from a DHCP server. A method to automatically configure networking for a host at boot time. Provided by both Networking and Compute.
Dynamic HyperText Markup Language (DHTML)
Pages that use HTML, JavaScript, and Cascading Style Sheets to enable users to interact with a web page or show simple animation.

E

east-west traffic
Network traffic between servers in the same cloud or data center. See also north-south traffic.
EBS boot volume
An Amazon EBS storage volume that contains a bootable VM image, currently unsupported in OpenStack.
ebtables
Filtering tool for a Linux bridging firewall, enabling filtering of network traffic passing through a Linux bridge. Used in Compute along with arptables, iptables, and ip6tables to ensure isolation of network communications.
EC2
The Amazon commercial compute product, similar to Compute.
EC2 access key
Used along with an EC2 secret key to access the Compute EC2 API.
EC2 API
OpenStack supports accessing the Amazon EC2 API through Compute.
EC2 Compatibility API
A Compute component that enables OpenStack to communicate with Amazon EC2.
EC2 secret key
Used along with an EC2 access key when communicating with the Compute EC2 API; used to digitally sign each request.
Elastic Block Storage (EBS)
The Amazon commercial block storage product.
encryption
OpenStack supports encryption technologies such as HTTPS, SSH, SSL, TLS, digital certificates, and data encryption.
encapsulation
The practice of placing one packet type within another for the purposes of abstracting or securing data. Examples include GRE, MPLS, or IPsec.
endpoint
See API endpoint.
endpoint registry
Alternative term for an Identity service catalog.
endpoint template
A list of URL and port number endpoints that indicate where a service, such as Object Storage, Compute, Identity, and so on, can be accessed.
entity
Any piece of hardware or software that wants to connect to the network services provided by Networking, the network connectivity service. An entity can make use of Networking by implementing a VIF.
ephemeral image
A VM image that does not save changes made to its volumes and reverts them to their original state after the instance is terminated.
ephemeral volume
Volume that does not save the changes made to it and reverts to its original state when the current user relinquishes control.
Essex
A grouped release of projects related to OpenStack that came out in April 2012, the fifth release of OpenStack. It included Compute (nova 2012.1), Object Storage (swift 1.4.8), Image (glance), Identity (keystone), and Dashboard (horizon). Essex is the code name for the fifth release of OpenStack. The design summit took place in Boston, Massachusetts, US and Essex is a nearby city.
ESXi
An OpenStack-supported hypervisor.
ETag
MD5 hash of an object within Object Storage, used to ensure data integrity.
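As a minimal sketch, a client can verify a downloaded object by recomputing the MD5 digest locally and comparing it with the ETag header returned by the server (large, segmented objects use a different ETag scheme; the helper name below is illustrative, not part of any OpenStack API):

```python
import hashlib

def compute_etag(data: bytes) -> str:
    """Compute the MD5 hex digest that Object Storage reports as the ETag."""
    return hashlib.md5(data).hexdigest()

# Verify a download against the ETag header value sent by the server.
downloaded = b"example object payload"
etag_from_server = compute_etag(downloaded)  # stand-in for the real header value
assert compute_etag(downloaded) == etag_from_server
```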
euca2ools
A collection of command-line tools for administering VMs; most are compatible with OpenStack.
Eucalyptus Kernel Image (EKI)
Used along with an ERI to create an EMI.
Eucalyptus Machine Image (EMI)
VM image container format supported by Image service.
Eucalyptus Ramdisk Image (ERI)
Used along with an EKI to create an EMI.
evacuate
The process of migrating one or all virtual machine (VM) instances from one host to another, compatible with both shared storage live migration and block migration.
exchange
Alternative term for a RabbitMQ message exchange.
exchange type
A routing algorithm in the Compute RabbitMQ.
exclusive queue
A queue in the Compute RabbitMQ that a direct consumer connects to; its messages can be consumed only by the current connection.
extended attributes (xattr)
File system option that enables storage of additional information beyond owner, group, permissions, modification time, and so on. The underlying Object Storage file system must support extended attributes.
extension
Alternative term for an API extension or plug-in. In the context of Identity service, this is a call that is specific to the implementation, such as adding support for OpenID.
external network
A network segment typically used for instance Internet access.
extra specs
Specifies additional requirements when Compute determines where to start a new instance. Examples include a minimum amount of network bandwidth or a GPU.

F

FakeLDAP
An easy method to create a local LDAP directory for testing Identity and Compute. Requires Redis.
fan-out exchange
Within RabbitMQ and Compute, it is the messaging interface that is used by the scheduler service to receive capability messages from the compute, volume, and network nodes.
federated identity
A method to establish trusts between identity providers and the OpenStack cloud.
Fedora
A Linux distribution compatible with OpenStack.
Fibre Channel
Storage protocol similar in concept to TCP/IP; encapsulates SCSI commands and data.
Fibre Channel over Ethernet (FCoE)
The fibre channel protocol tunneled within Ethernet.
fill-first scheduler
The Compute scheduling method that attempts to fill a host with VMs rather than starting new VMs on a variety of hosts.
filter
The step in the Compute scheduling process when hosts that cannot run VMs are eliminated and not chosen.
firewall
Used to restrict communications between hosts and/or nodes, implemented in Compute using iptables, arptables, ip6tables, and ebtables.
FireWall-as-a-Service (FWaaS)
A Networking extension that provides perimeter firewall functionality.
fixed IP address
An IP address that is associated with the same instance each time that instance boots, is generally not accessible to end users or the public Internet, and is used for management of the instance.
Flat Manager
The Compute component that gives IP addresses to authorized nodes and assumes DHCP, DNS, and routing configuration and services are provided by something else.
flat mode injection
A Compute networking method where the OS network configuration information is injected into the VM image before the instance starts.
flat network
Virtual network type that uses neither VLANs nor tunnels to segregate project traffic. Each flat network typically requires a separate underlying physical interface defined by bridge mappings. However, a flat network can contain multiple subnets.
FlatDHCP Manager
The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, TFTP) and radvd (routing) services.
flavor
Alternative term for a VM instance type.
flavor ID
UUID for each Compute or Image service VM flavor or instance type.
floating IP address
An IP address that a project can associate with a VM so that the instance has the same public IP address each time that it boots. You create a pool of floating IP addresses and assign them to instances as they are launched, which maintains a consistent public IP address for DNS assignment.
Folsom
A grouped release of projects related to OpenStack that came out in the fall of 2012, the sixth release of OpenStack. It includes Compute (nova), Object Storage (swift), Identity (keystone), Networking (neutron), Image service (glance), and Volumes or Block Storage (cinder). Folsom is the code name for the sixth release of OpenStack. The design summit took place in San Francisco, California, US and Folsom is a nearby city.
FormPost
Object Storage middleware that uploads (posts) an image through a form on a web page.
freezer
OpenStack project that provides backup, restore, and disaster recovery as a service.
front end
The point where a user interacts with a service; can be an API endpoint, the horizon dashboard, or a command-line tool.

G

gateway
An IP address, typically assigned to a router, that passes network traffic between different networks.
generic receive offload (GRO)
Feature of certain network interface drivers that combines many smaller received packets into a large packet before delivery to the kernel IP stack.
generic routing encapsulation (GRE)
Protocol that encapsulates a wide variety of network layer protocols inside virtual point-to-point links.
glance
A core project that provides the OpenStack Image service.
glance API server
Processes client requests for VMs, updates Image service metadata on the registry server, and communicates with the store adapter to upload VM images from the back-end store.
glance registry
Alternative term for the Image service image registry.
global endpoint template
The Identity service endpoint template that contains services available to all projects.
GlusterFS
A file system designed to aggregate NAS hosts, compatible with OpenStack.
gnocchi
Part of the OpenStack Telemetry service; provides an indexer and time-series database.
golden image
A method of operating system installation where a finalized disk image is created and then used by all nodes without modification.
Governance service
OpenStack project to provide Governance-as-a-Service across any collection of cloud services in order to monitor, enforce, and audit policy over dynamic infrastructure. The code name for the project is congress.
Graphic Interchange Format (GIF)
A type of image file that is commonly used for animated images on web pages.
Graphics Processing Unit (GPU)
Choosing a host based on the existence of a GPU is currently unsupported in OpenStack.
Green Threads
The cooperative threading model used by Python; reduces race conditions and only context switches when specific library calls are made. Each OpenStack service is its own thread.
Grizzly
The code name for the seventh release of OpenStack. The design summit took place in San Diego, California, US and Grizzly is an element of the state flag of California.
Group
An Identity v3 API entity. Represents a collection of users that is owned by a specific domain.
guest OS
An operating system instance running under the control of a hypervisor.

H

Hadoop
Apache Hadoop is an open source software framework that supports data-intensive distributed applications.
Hadoop Distributed File System (HDFS)
A distributed, highly fault-tolerant file system designed to run on low-cost commodity hardware.
handover
An object state in Object Storage where a new replica of the object is automatically created due to a drive failure.
HAProxy
Provides a high availability load balancer and proxy server for TCP and HTTP-based applications that spreads requests across multiple servers.
hard reboot
A type of reboot where a physical or virtual power button is pressed as opposed to a graceful, proper shutdown of the operating system.
Havana
The code name for the eighth release of OpenStack. The design summit took place in Portland, Oregon, US and Havana is an unincorporated community in Oregon.
heat
Codename for the Orchestration service.
Heat Orchestration Template (HOT)
Heat input in the format native to OpenStack.
health monitor
Determines whether back-end members of a VIP pool can process a request. A pool can have several health monitors associated with it. When a pool has several monitors associated with it, all monitors check each member of the pool. All monitors must declare a member to be healthy for it to stay active.
high availability (HA)
A high availability system design approach and associated service implementation ensures that a prearranged level of operational performance will be met during a contractual measurement period. High availability systems seek to minimize system downtime and data loss.
horizon
Codename for the Dashboard.
horizon plug-in
A plug-in for the OpenStack dashboard (horizon).
host
A physical computer, not a VM instance (node).
host aggregate
A method to further subdivide availability zones into hypervisor pools, a collection of common hosts.
Host Bus Adapter (HBA)
Device plugged into a PCI slot, such as a fibre channel or network card.
hybrid cloud
A hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed and/or dedicated services with cloud resources.
Hyper-V
One of the hypervisors supported by OpenStack.
hyperlink
Any kind of text that contains a link to some other site, commonly found in documents where clicking on a word or words opens up a different website.
Hypertext Transfer Protocol (HTTP)
An application protocol for distributed, collaborative, hypermedia information systems. It is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext.
Hypertext Transfer Protocol Secure (HTTPS)
An encrypted communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the TLS or SSL protocol, thus adding the security capabilities of TLS or SSL to standard HTTP communications. Most OpenStack API endpoints and many inter-component communications support HTTPS communication.
hypervisor
Software that arbitrates and controls VM access to the actual underlying hardware.
hypervisor pool
A collection of hypervisors grouped together through host aggregates.

I

Icehouse
The code name for the ninth release of OpenStack. The design summit took place in Hong Kong and Ice House is a street in that city.
ID number
Unique numeric ID associated with each user in Identity, conceptually similar to a Linux or LDAP UID.
Identity API
Alternative term for the Identity service API.
Identity back end
The source used by Identity service to retrieve user information; an OpenLDAP server, for example.
identity provider
A directory service that allows users to log in with a user name and password. It is a typical source of authentication tokens.
Identity service (keystone)
The project that facilitates API client authentication, service discovery, distributed multi-tenant authorization, and auditing. It provides a central directory of users mapped to the OpenStack services they can access. It also registers endpoints for OpenStack services and acts as a common authentication system.
Identity service API
The API used to access the OpenStack Identity service provided through keystone.
image
A collection of files for a specific operating system (OS) that you use to create or rebuild a server. OpenStack provides pre-built images. You can also create custom images, or snapshots, from servers that you have launched. Custom images can be used for data backups or as “gold” images for additional servers.
Image API
The Image service API endpoint for management of VM images.
image cache
Used by Image service to obtain images on the local host rather than re-downloading them from the image server each time one is requested.
image ID
Combination of a URI and UUID used to access Image service VM images through the image API.
image membership
A list of projects that can access a given VM image within Image service.
image owner
The project that owns an Image service virtual machine image.
image registry
A list of VM images that are available through Image service.
Image service
An OpenStack core project that provides discovery, registration, and delivery services for disk and server images. The project name of the Image service is glance.
Image service API
Alternative name for the glance image API.
image status
The current status of a VM image in Image service, not to be confused with the status of a running instance.
image store
The back-end store used by Image service to store VM images, options include Object Storage, locally mounted file system, RADOS block devices, VMware datastore, or HTTP.
image UUID
UUID used by Image service to uniquely identify each VM image.
incubated project
A community project may be elevated to this status and is then promoted to a core project.
Infrastructure-as-a-Service (IaaS)
IaaS is a provisioning model in which an organization outsources physical components of a data center, such as storage, hardware, servers, and networking components. A service provider owns the equipment and is responsible for housing, operating and maintaining it. The client typically pays on a per-use basis. IaaS is a model for providing cloud services.
ingress filtering
The process of filtering incoming network traffic. Supported by Compute.
INI format
The OpenStack configuration files use an INI format to describe options and their values. It consists of sections and key value pairs.
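Because the format is plain INI, the key=value pairs can be read with standard tools; a sketch using Python's configparser module and the sample from the overview (OpenStack services themselves parse these files through the oslo.config library):

```python
import configparser

SAMPLE = """
[DEFAULT]
# Print debugging output (boolean value)
debug = true

[database]
# The SQLAlchemy connection string (string value)
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

# Comment lines starting with # are ignored; values are read per section.
assert config.getboolean("DEFAULT", "debug") is True
assert config.get("database", "connection").startswith("mysql+pymysql://")
```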
injection
The process of putting a file into a virtual machine image before the instance is started.
Input/Output Operations Per Second (IOPS)
IOPS are a common performance measurement used to benchmark computer storage devices like hard disk drives, solid state drives, and storage area networks.
instance
A running VM, or a VM in a known state such as suspended, that can be used like a hardware server.
instance ID
Alternative term for instance UUID.
instance state
The current state of a guest VM image.
instance tunnels network
A network segment used for instance traffic tunnels between compute nodes and the network node.
instance type
Describes the parameters of the various virtual machine images that are available to users; includes parameters such as CPU, storage, and memory. Alternative term for flavor.
instance type ID
Alternative term for a flavor ID.
instance UUID
Unique ID assigned to each guest VM instance.
Intelligent Platform Management Interface (IPMI)
IPMI is a standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. In layman’s terms, it is a way to manage a computer using a direct network connection, whether it is turned on or not; connecting to the hardware rather than an operating system or login shell.
interface
A physical or virtual device that provides connectivity to another device or medium.
interface ID
Unique ID for a Networking VIF or vNIC in the form of a UUID.
Internet Control Message Protocol (ICMP)
A network protocol used by network devices for control messages. For example, ping uses ICMP to test connectivity.
Internet protocol (IP)
Principal communications protocol in the internet protocol suite for relaying datagrams across network boundaries.
Internet Service Provider (ISP)
Any business that provides Internet access to individuals or businesses.
Internet Small Computer System Interface (iSCSI)
Storage protocol that encapsulates SCSI frames for transport over IP networks. Supported by Compute, Object Storage, and Image service.
ironic
Codename for the Bare Metal service.
IP address
Number that is unique to every computer system on the Internet. Two versions of the Internet Protocol (IP) are in use for addresses: IPv4 and IPv6.
IP Address Management (IPAM)
The process of automating IP address allocation, deallocation, and management. Currently provided by Compute, melange, and Networking.
ip6tables
Tool used to set up, maintain, and inspect the tables of IPv6 packet filter rules in the Linux kernel. In OpenStack Compute, ip6tables is used along with arptables, ebtables, and iptables to create firewalls for both nodes and VMs.
ipset
Extension to iptables that allows creation of firewall rules that match entire “sets” of IP addresses simultaneously. These sets reside in indexed data structures to increase efficiency, particularly on systems with a large quantity of rules.
iptables
Used along with arptables, ebtables, and ip6tables, iptables creates firewalls in Compute. iptables are the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores. Different kernel modules and programs are currently used for different protocols: iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. Requires root privilege to manipulate.
iSCSI Qualified Name (IQN)
IQN is the format most commonly used for iSCSI names, which uniquely identify nodes in an iSCSI network. All IQNs follow the pattern iqn.yyyy-mm.domain:identifier, where ‘yyyy-mm’ is the year and month in which the domain was registered, ‘domain’ is the reversed domain name of the issuing organization, and ‘identifier’ is an optional string which makes each IQN under the same domain unique. For example, ‘iqn.2015-10.org.openstack.408ae959bce1’.
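The pattern described above can be checked with a simple regular expression; a sketch (the validator and its regex are illustrative only, not a complete implementation of the iSCSI naming rules):

```python
import re

# Matches the IQN pattern described above: iqn.yyyy-mm.domain[:identifier]
IQN_RE = re.compile(
    r"^iqn\."          # fixed prefix
    r"\d{4}-\d{2}\."   # year and month the domain was registered
    r"[a-z0-9.-]+"     # reversed domain name of the issuing organization
    r"(:.+)?$"         # optional unique identifier after a colon
)

def is_valid_iqn(name: str) -> bool:
    return IQN_RE.match(name) is not None

assert is_valid_iqn("iqn.2015-10.org.openstack.408ae959bce1")
assert is_valid_iqn("iqn.2010-10.org.openstack:volume-00000001")
assert not is_valid_iqn("eui.02004567A425678D")
```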
ISO9660
One of the VM image disk formats supported by Image service.
itsec
A default role in the Compute RBAC system that can quarantine an instance in any project.

J

Java
A programming language that is used to create systems that involve more than one computer by way of a network.
JavaScript
A scripting language that is used to build web pages.
JavaScript Object Notation (JSON)
One of the supported response formats in OpenStack.
Jenkins
Tool used to run jobs automatically for OpenStack development.
jumbo frame
Feature in modern Ethernet networks that supports frames up to approximately 9000 bytes.
Juno
The code name for the tenth release of OpenStack. The design summit took place in Atlanta, Georgia, US and Juno is an unincorporated community in Georgia.

K

Kerberos
A network authentication protocol which works on the basis of tickets. Kerberos allows nodes to communicate over a non-secure network and to prove their identity to one another in a secure manner.
kernel-based VM (KVM)
An OpenStack-supported hypervisor. KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module.
Key Manager service (barbican)
The project that produces a secret storage and generation system capable of providing key management for services wishing to enable encryption features.
keystone
Codename of the Identity service.
Kickstart
A tool to automate system configuration and installation on Red Hat, Fedora, and CentOS-based Linux distributions.
Kilo
The code name for the eleventh release of OpenStack. The design summit took place in Paris, France. Due to delays in the name selection, the release was known only as K. Because k is the unit symbol for kilo and the kilogram reference artifact is stored near Paris in the Pavillon de Breteuil in Sèvres, the community chose Kilo as the release name.

L

large object
An object within Object Storage that is larger than 5 GB.
Launchpad
The collaboration site for OpenStack.
Layer-2 network
Term used in the OSI network architecture for the data link layer. The data link layer is responsible for media access control, flow control and detecting and possibly correcting errors that may occur in the physical layer.
Layer-3 network
Term used in the OSI network architecture for the network layer. The network layer is responsible for packet forwarding including routing from one node to another.
Layer-2 (L2) agent
OpenStack Networking agent that provides layer-2 connectivity for virtual networks.
Layer-3 (L3) agent
OpenStack Networking agent that provides layer-3 (routing) services for virtual networks.
Liberty
The code name for the twelfth release of OpenStack. The design summit took place in Vancouver, Canada and Liberty is the name of a village in the Canadian province of Saskatchewan.
libvirt
Virtualization API library used by OpenStack to interact with many of its supported hypervisors.
Lightweight Directory Access Protocol (LDAP)
An application protocol for accessing and maintaining distributed directory information services over an IP network.
Linux bridge
Software that enables multiple VMs to share a single physical NIC within Compute.
Linux Bridge neutron plug-in
Enables a Linux bridge to understand a Networking port, interface attachment, and other abstractions.
Linux containers (LXC)
An OpenStack-supported hypervisor.
live migration
The ability within Compute to move running virtual machine instances from one host to another with only a small service interruption during switchover.
load balancer
A load balancer is a logical device that belongs to a cloud account. It is used to distribute workloads between multiple back-end systems or services, based on the criteria defined as part of its configuration.
load balancing
The process of spreading client requests between two or more nodes to improve performance and availability.
Load-Balancer-as-a-Service (LBaaS)
Enables Networking to distribute incoming requests evenly between designated instances.
Logical Volume Manager (LVM)
Provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes.

M

magnum
Code name for the OpenStack project that provides the Containers Service.
management API
Alternative term for an admin API.
management network
A network segment used for administration, not accessible to the public Internet.
manager
Logical groupings of related code, such as the Block Storage volume manager or network manager.
manifest
Used to track segments of a large object within Object Storage.
manifest object
A special Object Storage object that contains the manifest for a large object.
manila
Codename for OpenStack Shared File Systems service.
manila-share
Responsible for managing Shared File System Service devices, specifically the back-end devices.
maximum transmission unit (MTU)
Maximum frame or packet size for a particular network medium. Typically 1500 bytes for Ethernet networks.
mechanism driver
A driver for the Modular Layer 2 (ML2) neutron plug-in that provides layer-2 connectivity for virtual instances. A single OpenStack installation can use multiple mechanism drivers.
melange
Project name for OpenStack Network Information Service. To be merged with Networking.
membership
The association between an Image service VM image and a project. Enables images to be shared with specified projects.
membership list
A list of projects that can access a given VM image within Image service.
memcached
A distributed memory object caching system that is used by Object Storage for caching.
memory overcommit
The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as RAM overcommit.
message broker
The software package used to provide AMQP messaging capabilities within Compute. Default package is RabbitMQ.
message bus
The main virtual communication line used by all AMQP messages for inter-cloud communications within Compute.
message queue
Passes requests from clients to the appropriate workers and returns the output to the client after the job completes.
Message service (zaqar)
The project that provides a messaging service affording a variety of distributed application patterns in an efficient, scalable, and highly available manner, and that creates and maintains associated Python libraries and documentation.
Metadata agent
OpenStack Networking agent that provides metadata services for instances.
Meta-Data Server (MDS)
Stores CephFS metadata.
migration
The process of moving a VM instance from one host to another.
mistral
Code name for Workflow service.
Mitaka
The code name for the thirteenth release of OpenStack. The design summit took place in Tokyo, Japan. Mitaka is a city in Tokyo.
Modular Layer 2 (ML2) neutron plug-in
Can concurrently use multiple layer-2 networking technologies, such as 802.1Q and VXLAN, in Networking.
monasca
Codename for OpenStack Monitoring.
Monitor (LBaaS)
LBaaS feature that provides availability monitoring using the ping command, TCP, and HTTP/HTTPS GET.
Monitor (Mon)
A Ceph component that communicates with external clients, checks data state and consistency, and performs quorum functions.
Monitoring (monasca)
The OpenStack service that provides a multi-tenant, highly scalable, performant, fault-tolerant monitoring-as-a-service solution for metrics, complex event processing, and logging. It aims to build an extensible platform for advanced monitoring services that both operators and tenants can use to gain operational insight and visibility, ensuring availability and stability.
multi-factor authentication
Authentication method that uses two or more credentials, such as a password and a private key. Currently not supported in Identity.
multi-host
High-availability mode for legacy (nova) networking. Each compute node handles NAT and DHCP and acts as a gateway for all of the VMs on it. A networking failure on one compute node does not affect VMs on other compute nodes.
multinic
Facility in Compute that enables each virtual machine instance to have more than one VIF connected to it.
murano
Codename for the Application Catalog service.

N

Nebula
Released as open source by NASA in 2010 and is the basis for Compute.
netadmin
One of the default roles in the Compute RBAC system. Enables the user to allocate publicly accessible IP addresses to instances and change firewall rules.
NetApp volume driver
Enables Compute to communicate with NetApp storage devices through the NetApp OnCommand Provisioning Manager.
network
A virtual network that provides connectivity between entities. For example, a collection of virtual ports that share network connectivity. In Networking terminology, a network is always a layer-2 network.
Network Address Translation (NAT)
Process of modifying IP address information while in transit. Supported by Compute and Networking.
network controller
A Compute daemon that orchestrates the network configuration of nodes, including IP addresses, VLANs, and bridging. Also manages routing for both public and private networks.
Network File System (NFS)
A method for making file systems available over the network. Supported by OpenStack.
network ID
Unique ID assigned to each network segment within Networking. Same as network UUID.
network manager
The Compute component that manages various network components, such as firewall rules, IP address allocation, and so on.
network namespace
Linux kernel feature that provides independent virtual networking instances on a single host with separate routing tables and interfaces. Similar to virtual routing and forwarding (VRF) services on physical network equipment.
network node
Any compute node that runs the network worker daemon.
network segment
Represents a virtual, isolated OSI layer-2 subnet in Networking.
Network Time Protocol (NTP)
Method of keeping a clock for a host or node correct via communication with a trusted, accurate time source.
network UUID
Unique ID for a Networking network segment.
network worker
The nova-network worker daemon; provides services such as giving an IP address to a booting nova instance.
Networking API (Neutron API)
API used to access OpenStack Networking. Provides an extensible architecture to enable custom plug-in creation.
Networking service (neutron)
The OpenStack project which implements services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction.
neutron
Codename for OpenStack Networking service.
neutron API
An alternative name for Networking API.
neutron manager
Enables Compute and Networking integration, which enables Networking to perform network management for guest VMs.
neutron plug-in
Interface within Networking that enables organizations to create custom plug-ins for advanced features, such as QoS, ACLs, or IDS.
Newton
The code name for the fourteenth release of OpenStack. The design summit took place in Austin, Texas, US. The release is named after “Newton House,” located at 1013 E. Ninth St., Austin, TX, which is listed on the National Register of Historic Places.
Nexenta volume driver
Provides support for NexentaStor devices in Compute.
Nginx
An HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server.
No ACK
Disables server-side message acknowledgment in the Compute RabbitMQ. Increases performance but decreases reliability.
node
A VM instance that runs on a host.
non-durable exchange
Message exchange that is cleared when the service restarts. Its data is not written to persistent storage.
non-durable queue
Message queue that is cleared when the service restarts. Its data is not written to persistent storage.
non-persistent volume
Alternative term for an ephemeral volume.
north-south traffic
Network traffic between a user or client (north) and a server (south), or traffic into the cloud (south) and out of the cloud (north). See also east-west traffic.
nova
Codename for OpenStack Compute service.
Nova API
Alternative term for the Compute API.
nova-network
A Compute component that manages IP address allocation, firewalls, and other network-related tasks. This is the legacy networking option and an alternative to Networking.

O

object
A BLOB of data held by Object Storage; can be in any format.
object auditor
Opens all objects for an object server and verifies the MD5 hash, size, and metadata for each object.
object expiration
A configurable option within Object Storage to automatically delete objects after a specified amount of time has passed or a certain date is reached.
object hash
Unique ID for an Object Storage object.
object path hash
Used by Object Storage to determine the location of an object in the ring. Maps objects to partitions.
object replicator
An Object Storage component that copies an object to remote partitions for fault tolerance.
object server
An Object Storage component that is responsible for managing objects.
Object Storage service
The OpenStack core project that provides eventually consistent and redundant storage and retrieval of fixed digital content. The project name of OpenStack Object Storage is swift.
Object Storage API
API used to access OpenStack Object Storage.
Object Storage Device (OSD)
The Ceph storage daemon.
object versioning
Allows a user to set a flag on an Object Storage container so that all objects within the container are versioned.
Ocata
The code name for the fifteenth release of OpenStack. The design summit will take place in Barcelona, Spain. Ocata is a beach north of Barcelona.
Octavia
An operator-grade open source scalable load balancer.
Oldie
Term for an Object Storage process that runs for a long time. Can indicate a hung process.
Open Cloud Computing Interface (OCCI)
A standardized interface for managing compute, data, and network resources, currently unsupported in OpenStack.
Open Virtualization Format (OVF)
Standard for packaging VM images. Supported in OpenStack.
Open vSwitch
Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (for example NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag).
Open vSwitch (OVS) agent
Provides an interface to the underlying Open vSwitch service for the Networking plug-in.
Open vSwitch neutron plug-in
Provides support for Open vSwitch in Networking.
OpenLDAP
An open source LDAP server. Supported by both Compute and Identity.
OpenStack
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. OpenStack is an open source project licensed under the Apache License 2.0.
OpenStack code name
Each OpenStack release has a code name. Code names ascend in alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, Mitaka, Newton, Ocata, Pike, and Queens. Code names are cities or counties near where the corresponding OpenStack design summit took place. An exception, called the Waldon exception, is granted to elements of the state flag that sound especially cool. Code names are chosen by popular vote.
openSUSE
A Linux distribution that is compatible with OpenStack.
operator
The person responsible for planning and maintaining an OpenStack installation.
optional service
An official OpenStack service defined as optional by DefCore Committee. Currently, consists of Dashboard (horizon), Telemetry service (telemetry), Orchestration service (heat), Database service (trove), Bare Metal service (ironic), and so on.
Orchestration service (heat)
The OpenStack service which orchestrates composite cloud applications using a declarative template format through an OpenStack-native REST API.
orphan
In the context of Object Storage, this is a process that is not terminated after an upgrade, restart, or reload of the service.
Oslo
OpenStack project that produces a set of Python libraries containing code shared by OpenStack projects.

P

panko
Part of the OpenStack Telemetry service; provides event storage.
parent cell
If a requested resource, such as CPU time, disk storage, or memory, is not available in the parent cell, the request is forwarded to associated child cells.
partition
A unit of storage within Object Storage used to store objects. It exists on top of devices and is replicated for fault tolerance.
partition index
Contains the locations of all Object Storage partitions within the ring.
partition shift value
Used by Object Storage to determine which partition data should reside on.
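As a sketch of the idea (simplified; real swift also mixes a per-cluster hash path suffix into the MD5 input), the shift value trims an object's hash down to a partition number:

```python
import hashlib

def object_partition(account, container, obj, part_power=10):
    """Map an object path to one of 2**part_power partitions.

    Simplified swift-style lookup: hash the path with MD5, take
    the top four bytes, and shift right so only the high
    `part_power` bits remain. The shift amount (32 - part_power)
    is the partition shift value.
    """
    path = f"/{account}/{container}/{obj}"
    digest = hashlib.md5(path.encode("utf-8")).digest()
    top = int.from_bytes(digest[:4], "big")
    return top >> (32 - part_power)

# Every object maps deterministically to a partition in [0, 1024).
part = object_partition("AUTH_test", "photos", "cat.jpg")
```

Because the mapping is deterministic, every proxy and storage node computes the same partition for a given object path.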
path MTU discovery (PMTUD)
Mechanism in IP networks to detect end-to-end MTU and adjust packet size accordingly.
pause
A VM state where no changes occur (no changes in memory, network communications stop, etc.); the VM is frozen but not shut down.
PCI passthrough
Gives guest VMs exclusive access to a PCI device. Currently supported in OpenStack Havana and later releases.
persistent message
A message that is stored both in memory and on disk. The message is not lost after a failure or restart.
persistent volume
Changes to these types of disk volumes are saved.
personality file
A file used to customize a Compute instance. It can be used to inject SSH keys or a specific network configuration.
Pike
The code name for the sixteenth release of OpenStack. The design summit will take place in Boston, Massachusetts, US. The release is named after the Massachusetts Turnpike, commonly abbreviated as the Mass Pike, which is the easternmost stretch of Interstate 90.
Platform-as-a-Service (PaaS)
Provides to the consumer the ability to deploy applications through a programming language or tools supported by the cloud platform provider. An example of Platform-as-a-Service is an Eclipse/Java programming platform provided with no downloads required.
plug-in
Software component providing the actual implementation for Networking APIs, or for Compute APIs, depending on the context.
policy service
Component of Identity that provides a rule-management interface and a rule-based authorization engine.
pool
A logical set of devices, such as web servers, that you group together to receive and process traffic. The load balancing function chooses which member of the pool handles the new requests or connections received on the VIP address. Each VIP has one pool.
pool member
An application that runs on the back-end server in a load-balancing system.
port
A virtual network port within Networking; VIFs / vNICs are connected to a port.
port UUID
Unique ID for a Networking port.
preseed
A tool to automate system configuration and installation on Debian-based Linux distributions.
private image
An Image service VM image that is only available to specified projects.
private IP address
An IP address used for management and administration, not available to the public Internet.
private network
The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A private network interface can be a flat or VLAN network interface. A flat network interface is controlled by the flat_interface with flat managers. A VLAN network interface is controlled by the vlan_interface option with VLAN managers.
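The interface options mentioned above are set in the nova configuration file; a minimal sketch for the flat case (the interface name is an example, not a requirement):

```
[DEFAULT]
# Legacy nova-network with the FlatDHCP manager; eth1 (an example
# name) carries the private guest traffic.
network_manager = nova.network.manager.FlatDHCPManager
flat_interface = eth1
```

With a VLAN manager, the `vlan_interface` option plays the equivalent role.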
project
Projects represent the base unit of “ownership” in OpenStack, in that all resources in OpenStack should be owned by a specific project. In OpenStack Identity, a project must be owned by a specific domain.
project ID
Unique ID assigned to each project by the Identity service.
project VPN
Alternative term for a cloudpipe.
promiscuous mode
Causes the network interface to pass all traffic it receives to the host rather than passing only the frames addressed to it.
protected property
Generally, extra properties on an Image service image to which only cloud administrators have access. Limits which user roles can perform CRUD operations on that property. The cloud administrator can configure any image property as protected.
provider
An administrator who has access to all hosts and instances.
proxy node
A node that provides the Object Storage proxy service.
proxy server
Users of Object Storage interact with the service through the proxy server, which in turn looks up the location of the requested data within the ring and returns the results to the user.
public API
An API endpoint used for both service-to-service communication and end-user interactions.
public image
An Image service VM image that is available to all projects.
public IP address
An IP address that is accessible to end-users.
public key authentication
Authentication method that uses keys rather than passwords.
public network
The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. The public network interface is controlled by the public_interface option.
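As a sketch, the option is set in the nova configuration file (the interface name is an example):

```
[DEFAULT]
# Legacy nova-network: eth0 (an example name) faces the public network.
public_interface = eth0
```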
Puppet
An operating system configuration-management tool supported by OpenStack.
Python
Programming language used extensively in OpenStack.

Q

QEMU Copy On Write 2 (QCOW2)
One of the VM image disk formats supported by Image service.
Qpid
Message queue software supported by OpenStack; an alternative to RabbitMQ.
Quality of Service (QoS)
The ability to guarantee certain network or storage requirements to satisfy a Service Level Agreement (SLA) between an application provider and end users. Typically includes performance requirements like networking bandwidth, latency, jitter correction, and reliability as well as storage performance in Input/Output Operations Per Second (IOPS), throttling agreements, and performance expectations at peak load.
quarantine
If Object Storage finds objects, containers, or accounts that are corrupt, they are placed in this state: they are not replicated and cannot be read by clients, and a correct copy is re-replicated.
Queens
The code name for the seventeenth release of OpenStack. The design summit will take place in Sydney, Australia. The release is named after the Queens Pound river in the South Coast region of New South Wales.
Quick EMUlator (QEMU)
QEMU is a generic and open source machine emulator and virtualizer. One of the hypervisors supported by OpenStack, generally used for development purposes.
quota
In Compute and Block Storage, the ability to set resource limits on a per-project basis.

R

RabbitMQ
The default message queue software used by OpenStack.
Rackspace Cloud Files
Released as open source by Rackspace in 2010; the basis for Object Storage.
RADOS Block Device (RBD)
Ceph component that enables a Linux block device to be striped over multiple distributed data stores.
radvd
The router advertisement daemon, used by the Compute VLAN manager and FlatDHCP manager to provide routing services for VM instances.
rally
Codename for the Benchmark service.
RAM filter
The Compute setting that enables or disables RAM overcommitment.
RAM overcommit
The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as memory overcommit.
rate limit
Configurable option within Object Storage to limit database writes on a per-account and/or per-container basis.
raw
One of the VM image disk formats supported by Image service; an unstructured disk image.
rebalance
The process of distributing Object Storage partitions across all drives in the ring; used during initial ring creation and after ring reconfiguration.
reboot
Either a soft or hard reboot of a server. With a soft reboot, the operating system is signaled to restart, which enables a graceful shutdown of all processes. A hard reboot is the equivalent of power cycling the server. The virtualization platform should ensure that the reboot action has completed successfully, even in cases in which the underlying domain/VM is paused or halted/stopped.
rebuild
Removes all data on the server and replaces it with the specified image. Server ID and IP addresses remain the same.
Recon
An Object Storage component that collects meters.
record
Belongs to a particular domain and is used to specify information about the domain. There are several types of DNS records. Each record type contains particular information used to describe the purpose of that record. Examples include mail exchange (MX) records, which specify the mail server for a particular domain; and name server (NS) records, which specify the authoritative name servers for a domain.
record ID
A number within a database that is incremented each time a change is made. Used by Object Storage when replicating.
Red Hat Enterprise Linux (RHEL)
A Linux distribution that is compatible with OpenStack.
reference architecture
A recommended architecture for an OpenStack cloud.
region
A discrete OpenStack environment with dedicated API endpoints that typically shares only the Identity service (keystone) with other regions.
registry
Alternative term for the Image service registry.
registry server
An Image service that provides VM image metadata information to clients.
Reliable, Autonomic Distributed Object Store (RADOS)
A collection of components that provides object storage within Ceph. Similar to OpenStack Object Storage.

Remote Procedure Call (RPC)
The method used by the Compute RabbitMQ for intra-service communications.
replica
Provides data redundancy and fault tolerance by creating copies of Object Storage objects, accounts, and containers so that they are not lost when the underlying storage fails.
replica count
The number of replicas of the data in an Object Storage ring.
replication
The process of copying data to a separate physical device for fault tolerance and performance.
replicator
The Object Storage back-end process that creates and manages object replicas.
request ID
Unique ID assigned to each request sent to Compute.
rescue image
A special type of VM image that is booted when an instance is placed into rescue mode. Allows an administrator to mount the file systems for an instance to correct the problem.
resize
Converts an existing server to a different flavor, which scales the server up or down. The original server is saved to enable rollback if a problem occurs. All resizes must be tested and explicitly confirmed, at which time the original server is removed.
RESTful
A kind of web service API that uses REST, or Representational State Transfer. REST is the style of architecture for hypermedia systems that is used for the World Wide Web.
ring
An entity that maps Object Storage data to partitions. A separate ring exists for each service, such as account, object, and container.
ring builder
Builds and manages rings within Object Storage, assigns partitions to devices, and pushes the configuration to other storage nodes.
Role Based Access Control (RBAC)
Provides a predefined list of actions that the user can perform, such as start or stop VMs, reset passwords, and so on. Supported in both Identity and Compute and can be configured using the horizon dashboard.
role
A personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges.
role ID
Alphanumeric ID assigned to each Identity service role.
rootwrap
A feature of Compute that allows the unprivileged “nova” user to run a specified list of commands as the Linux root user.
round-robin scheduler
Type of Compute scheduler that evenly distributes instances among available hosts.
router
A physical or virtual network device that passes network traffic between different networks.
routing key
The Compute direct exchanges, fanout exchanges, and topic exchanges use this key to determine how to process a message; processing varies depending on exchange type.
RPC driver
Modular system that allows the underlying message queue software of Compute to be changed. For example, from RabbitMQ to ZeroMQ or Qpid.
rsync
Used by Object Storage to push object replicas.
RXTX cap
Absolute limit on the amount of network traffic a Compute VM instance can send and receive.
RXTX quota
Soft limit on the amount of network traffic a Compute VM instance can send and receive.

S

sahara
Codename for the Data Processing service.
SAML assertion
Contains information about a user as provided by the identity provider. It is an indication that a user has been authenticated.
scheduler manager
A Compute component that determines where VM instances should start. Uses modular design to support a variety of scheduler types.
scoped token
An Identity service API access token that is associated with a specific project.
scrubber
Checks for and deletes unused VMs; the component of Image service that implements delayed delete.
secret key
String of text known only by the user; used along with an access key to make requests to the Compute API.
secure boot
Process whereby the system firmware validates the authenticity of the code involved in the boot process.
secure shell (SSH)
Open source tool used to access remote hosts through an encrypted communications channel. SSH key injection is supported by Compute.
security group
A set of network traffic filtering rules that are applied to a Compute instance.
segmented object
An Object Storage large object that has been broken up into pieces. The re-assembled object is called a concatenated object.
self-service
For IaaS, the ability for a regular (non-privileged) account to manage a virtual infrastructure component, such as networks, without involving an administrator.
SELinux
Linux kernel security module that provides the mechanism for supporting access control policies.
senlin
OpenStack project that provides a Clustering service.
server
Computer that provides explicit services to the client software running on that system, often managing a variety of computer operations. A server is a VM instance in the Compute system. Flavor and image are requisite elements when creating a server.
server image
Alternative term for a VM image.
server UUID
Unique ID assigned to each guest VM instance.
service
An OpenStack service, such as Compute, Object Storage, or Image service. Provides one or more endpoints through which users can access resources and perform operations.
service catalog
Alternative term for the Identity service catalog.
service ID
Unique ID assigned to each service that is available in the Identity service catalog.
service provider
A system that provides services to other system entities. In case of federated identity, OpenStack Identity is the service provider.
service registration
An Identity service feature that enables services, such as Compute, to automatically register with the catalog.
service project
Special project that contains all services that are listed in the catalog.
service token
An administrator-defined token used by Compute to communicate securely with the Identity service.
session back end
The method of storage used by horizon to track client sessions, such as local memory, cookies, a database, or memcached.
session persistence
A feature of the load-balancing service. It attempts to force subsequent connections to a service to be redirected to the same node as long as it is online.
session storage
A horizon component that stores and tracks client session information. Implemented through the Django sessions framework.
share
A remote, mountable file system in the context of the Shared File Systems service. You can mount a share to, and access a share from, several hosts by several users at a time.
share network
An entity in the context of the Shared File Systems service that encapsulates interaction with the Networking service. If the driver you selected runs in the mode requiring such kind of interaction, you need to specify the share network to create a share.
Shared File Systems API
A Shared File Systems service that provides a stable RESTful API. The service authenticates and routes requests throughout the Shared File Systems service. You can interact with the API through the python-manilaclient library.
Shared File Systems service (manila)
The service that provides management of shared file systems in a multi-tenant cloud environment, similar to how OpenStack provides block-based storage management through the OpenStack Block Storage service project. With the Shared File Systems service, you can create a remote file system and mount it on your instances. You can also read and write data from your instances to and from your file system.
shared IP address
An IP address that can be assigned to a VM instance within the shared IP group. Public IP addresses can be shared across multiple servers for use in various high-availability scenarios. When an IP address is shared to another server, the cloud network restrictions are modified to enable each server to listen to and respond on that IP address. You can optionally specify that the target server network configuration be modified. Shared IP addresses can be used with many standard heartbeat facilities, such as keepalive, that monitor for failure and manage IP failover.
shared IP group
A collection of servers that can share IPs with other members of the group. Any server in a group can share one or more public IPs with any other server in the group. With the exception of the first server in a shared IP group, servers must be launched into shared IP groups. A server may be a member of only one shared IP group.
shared storage
Block storage that is simultaneously accessible by multiple clients, for example, NFS.
Sheepdog
Distributed block storage system for QEMU, supported by OpenStack.
Simple Cloud Identity Management (SCIM)
Specification for managing identity in the cloud, currently unsupported by OpenStack.
Single-root I/O Virtualization (SR-IOV)
A specification that, when implemented by a physical PCIe device, enables it to appear as multiple separate PCIe devices. This enables multiple virtualized guests to share direct access to the physical device, offering improved performance over an equivalent virtual device. Currently supported in OpenStack Havana and later releases.
Service Level Agreement (SLA)
Contractual obligations that ensure the availability of a service.
SmokeStack
Runs automated tests against the core OpenStack API; written in Rails.
snapshot
A point-in-time copy of an OpenStack storage volume or image. Use storage volume snapshots to back up volumes. Use image snapshots to back up data, or as “gold” images for additional servers.
soft reboot
A controlled reboot where a VM instance is properly restarted through operating system commands.
Software Development Lifecycle Automation service
OpenStack project that aims to make cloud services easier to consume and integrate with the application development process by automating the source-to-image process and simplifying app-centric deployment. The project name is solum.
SolidFire Volume Driver
The Block Storage driver for the SolidFire iSCSI storage appliance.
solum
OpenStack project that provides a Software Development Lifecycle Automation service.
Simple Protocol for Independent Computing Environments (SPICE)
SPICE provides remote desktop access to guest virtual machines. It is an alternative to VNC. SPICE is supported by OpenStack.
spread-first scheduler
The Compute VM scheduling algorithm that attempts to start a new VM on the host with the least amount of load.
SQL-Alchemy
An open source SQL toolkit for Python, used in OpenStack.
SQLite
A lightweight SQL database, used as the default persistent storage method in many OpenStack services.
stack
A set of OpenStack resources created and managed by the Orchestration service according to a given template (either an AWS CloudFormation template or a Heat Orchestration Template (HOT)).
StackTach
Community project that captures Compute AMQP communications; useful for debugging.
static IP address
Alternative term for a fixed IP address.
StaticWeb
WSGI middleware component of Object Storage that serves container data as a static web page.
storage back end
The method that a service uses for persistent storage, such as iSCSI, NFS, or local disk.
storage node
An Object Storage node that provides container services, account services, and object services; controls the account databases, container databases, and object storage.
storage manager
A XenAPI component that provides a pluggable interface to support a wide variety of persistent storage back ends.
storage manager back end
A persistent storage method supported by XenAPI, such as iSCSI or NFS.
storage services
Collective name for the Object Storage object services, container services, and account services.
strategy
Specifies the authentication source used by Image service or Identity. In the Database service, it refers to the extensions implemented for a data store.
subdomain
A domain within a parent domain. Subdomains cannot be registered. Subdomains enable you to delegate domains. Subdomains can themselves have subdomains, so third-level, fourth-level, fifth-level, and deeper levels of nesting are possible.
subnet
Logical subdivision of an IP network.
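For example, Python's standard ipaddress module can enumerate the subnets of a network (illustrative only; not an OpenStack API):

```python
import ipaddress

# Split a /24 network into two /25 subnets.
net = ipaddress.ip_network("192.168.0.0/24")
halves = list(net.subnets(prefixlen_diff=1))
# halves → [IPv4Network('192.168.0.0/25'), IPv4Network('192.168.0.128/25')]
```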
SUSE Linux Enterprise Server (SLES)
A Linux distribution that is compatible with OpenStack.
suspend
Alternative term for a paused VM instance.
swap
Disk-based virtual memory used by operating systems to provide more memory than is actually available on the system.
swauth
An authentication and authorization service for Object Storage, implemented through WSGI middleware; uses Object Storage itself as the persistent backing store.
swift
An OpenStack core project that provides object storage services.
swift All in One (SAIO)
Creates a full Object Storage development environment within a single VM.
swift middleware
Collective term for Object Storage components that provide additional functionality.
swift proxy server
Acts as the gatekeeper to Object Storage and is responsible for authenticating the user.
swift storage node
A node that runs Object Storage account, container, and object services.
sync point
Point in time since the last container and accounts database sync among nodes within Object Storage.
sysadmin
One of the default roles in the Compute RBAC system. Enables a user to add other users to a project, interact with VM images that are associated with the project, and start and stop VM instances.
system usage
A Compute component that, along with the notification system, collects meters and usage information. This information can be used for billing.

T

Telemetry service (telemetry)
The OpenStack project which collects measurements of the utilization of the physical and virtual resources comprising deployed clouds, persists this data for subsequent retrieval and analysis, and triggers actions when defined criteria are met.
TempAuth
An authentication facility within Object Storage that enables Object Storage itself to perform authentication and authorization. Frequently used in testing and development.
Tempest
Automated software test suite designed to run against the trunk of the OpenStack core project.
TempURL
An Object Storage middleware component that enables creation of URLs for temporary object access.
tenant
A group of users; used to isolate access to Compute resources. An alternative term for a project.
Tenant API
An API that is accessible to projects.
tenant endpoint
An Identity service API endpoint that is associated with one or more projects.
tenant ID
An alternative term for project ID.
token
An alphanumeric string of text used to access OpenStack APIs and resources.
token services
An Identity service component that manages and validates tokens after a user or project has been authenticated.
tombstone
Used to mark Object Storage objects that have been deleted; ensures that the object is not updated on another node after it has been deleted.
topic publisher
A process that is created when an RPC call is executed; used to push the message to the topic exchange.
Torpedo
Community project used to run automated tests against the OpenStack API.
transaction ID
Unique ID assigned to each Object Storage request; used for debugging and tracing.
transient
Alternative term for non-durable.
transient exchange
Alternative term for a non-durable exchange.
transient message
A message that is stored in memory and is lost after the server is restarted.
transient queue
Alternative term for a non-durable queue.
TripleO
OpenStack-on-OpenStack program. The code name for the OpenStack Deployment program.
trove
Codename for OpenStack Database service.
trusted platform module (TPM)
Specialized microprocessor for incorporating cryptographic keys into devices for authenticating and securing a hardware platform.

U

Ubuntu
A Debian-based Linux distribution.
unscoped token
Alternative term for an Identity service default token.
updater
Collective term for a group of Object Storage components that processes queued and failed updates for containers and objects.
user
In OpenStack Identity, an entity that represents an individual API consumer and is owned by a specific domain. In OpenStack Compute, a user can be associated with roles, projects, or both.
user data
A blob of data that the user can specify when they launch an instance. The instance can access this data through the metadata service or config drive. Commonly used to pass a shell script that the instance runs on boot.
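As a sketch, a user-data payload is often a cloud-init configuration rather than a raw shell script; the package name and command below are illustrative assumptions, not taken from this guide:

```yaml
#cloud-config
# Illustrative cloud-init user data, processed on first boot.
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```

The instance retrieves this payload from the metadata service or config drive and cloud-init applies it during the first boot.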
User Mode Linux (UML)
An OpenStack-supported hypervisor.

V

VIF UUID
Unique ID assigned to each Networking VIF.
Virtual Central Processing Unit (vCPU)
Subdivides physical CPUs. Instances can then use those divisions.
Virtual Disk Image (VDI)
One of the VM image disk formats supported by Image service.
Virtual Extensible LAN (VXLAN)
A network virtualization technology that attempts to reduce the scalability problems associated with large cloud computing deployments. It uses a VLAN-like encapsulation technique to encapsulate Ethernet frames within UDP packets.
Virtual Hard Disk (VHD)
One of the VM image disk formats supported by Image service.
virtual IP address (VIP)
An Internet Protocol (IP) address configured on the load balancer for use by clients connecting to a service that is load balanced. Incoming connections are distributed to back-end nodes based on the configuration of the load balancer.
virtual machine (VM)
An operating system instance that runs on top of a hypervisor. Multiple VMs can run at the same time on the same physical host.
virtual network
An L2 network segment within Networking.
virtual networking
A generic term for virtualization of network functions such as switching, routing, load balancing, and security using a combination of VMs and overlays on physical network infrastructure.
Virtual Network Computing (VNC)
Open source GUI and CLI tools used for remote console access to VMs. Supported by Compute.
Virtual Network InterFace (VIF)
An interface that is plugged into a port in a Networking network. Typically a virtual network interface belonging to a VM.
virtual port
Attachment point where a virtual interface connects to a virtual network.
virtual private network (VPN)
Provided by Compute in the form of cloudpipes, specialized instances that are used to create VPNs on a per-project basis.
virtual server
Alternative term for a VM or guest.
virtual switch (vSwitch)
Software that runs on a host or node and provides the features and functions of a hardware-based network switch.
virtual VLAN
Alternative term for a virtual network.
VirtualBox
An OpenStack-supported hypervisor.
VLAN manager
A Compute component that provides dnsmasq and radvd and sets up forwarding to and from cloudpipe instances.
VLAN network
The Network Controller provides virtual networks that enable compute servers to interact with each other and with the public network. All machines must have a public and a private network interface. A VLAN network uses the private network interface and is controlled by the vlan_interface option when VLAN managers are used.
VM disk (VMDK)
One of the VM image disk formats supported by Image service.
VM image
Alternative term for an image.
VM Remote Control (VMRC)
Method to access VM instance consoles using a web browser. Supported by Compute.
VMware API
Supports interaction with VMware products in Compute.
VMware NSX Neutron plug-in
Provides support for VMware NSX in Neutron.
VNC proxy
A Compute component that provides users access to the consoles of their VM instances through VNC or VMRC.
volume
Disk-based data storage generally represented as an iSCSI target with a file system that supports extended attributes; can be persistent or ephemeral.
Volume API
Alternative name for the Block Storage API.
volume controller
A Block Storage component that oversees and coordinates storage volume actions.
volume driver
Alternative term for a volume plug-in.
volume ID
Unique ID applied to each storage volume under the Block Storage control.
volume manager
A Block Storage component that creates, attaches, and detaches persistent storage volumes.
volume node
A Block Storage node that runs the cinder-volume daemon.
volume plug-in
Provides support for new and specialized types of back-end storage for the Block Storage volume manager.
volume worker
A cinder component that interacts with back-end storage to manage the creation and deletion of volumes and the creation of compute volumes, provided by the cinder-volume daemon.
vSphere
An OpenStack-supported hypervisor.

W

weight
Used by Object Storage to determine which storage devices are suitable for a job. Devices are weighted by size.
weighted cost
The sum of each cost used when deciding where to start a new VM instance in Compute.
weighting
A Compute scheduling process that scores the suitability of each candidate host for a VM instance; for example, a host with insufficient RAM or heavily loaded CPUs scores poorly.
worker
A daemon that listens to a queue and carries out tasks in response to messages. For example, the cinder-volume worker manages volume creation and deletion on storage arrays.
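The worker pattern described above can be sketched minimally with an in-process queue; real OpenStack workers such as cinder-volume consume AMQP queues (for example RabbitMQ), so this is an illustrative reduction, not the actual implementation:

```python
import queue
import threading

def worker(q: "queue.Queue", results: list) -> None:
    # Block on the queue and carry out each task as a message arrives,
    # mirroring how a daemon consumes messages and performs work.
    while True:
        msg = q.get()
        if msg is None:  # sentinel value: shut the worker down
            break
        results.append(f"handled {msg}")
        q.task_done()

q: "queue.Queue" = queue.Queue()
results: list = []
t = threading.Thread(target=worker, args=(q, results))
t.start()
for task in ("create volume", "delete volume"):
    q.put(task)
q.put(None)  # signal the worker to exit
t.join()
```

The sentinel message is one simple shutdown convention; message-broker deployments instead stop consumers by closing the connection or cancelling the subscription.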
Workflow service (mistral)
The OpenStack service that provides a simple YAML-based language for writing workflows (tasks and transition rules), and a service for uploading, modifying, and running those workflows at scale and in a highly available manner, as well as managing and monitoring workflow execution state and the state of individual tasks.

X

Xen
Xen is a hypervisor using a microkernel design, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently.
Xen API
The Xen administrative API, which is supported by Compute.
Xen Cloud Platform (XCP)
An OpenStack-supported hypervisor.
Xen Storage Manager Volume Driver
A Block Storage volume plug-in that enables communication with the Xen Storage Manager API.
XenServer
An OpenStack-supported hypervisor.
XFS
High-performance 64-bit file system created by Silicon Graphics. Excels in parallel I/O operations and data consistency.

Z

zaqar
Codename for the Message service.
ZeroMQ
Message queue software supported by OpenStack. An alternative to RabbitMQ. Also spelled 0MQ.
Zuul
Tool used in OpenStack development to ensure correctly ordered testing of changes in parallel.


Creative Commons Attribution 3.0 License

Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.