This document is for system administrators who need to look up configuration options. It lists the configuration options available in OpenStack, with the options and their descriptions generated automatically from each project's code, and it includes sample configuration files.
The OpenStack documentation uses several typesetting conventions.
Notices take these forms:
Note
A comment with additional information that explains a part of the text.
Important
Something you must be aware of before proceeding.
Tip
An extra but helpful piece of practical advice.
Caution
Helpful information that prevents the user from making mistakes.
Warning
Critical information about the risk of data loss or security issues.
$ command
Any user, including the root user, can run commands that are prefixed with the $ prompt.
# command
The root user must run commands that are prefixed with the # prompt. You can also prefix these commands with the sudo command, if available, to run them.
OpenStack uses the INI file format for configuration files. An INI file is a simple text file that specifies options as key=value pairs, grouped into sections. The DEFAULT section contains most of the configuration options. Lines starting with a hash sign (#) are comment lines. For example:
[DEFAULT]
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
debug = true
[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
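These parsing rules can be seen in action with Python's standard configparser module, which handles the same section, key=value, and comment conventions. This is a sketch for illustration only; the services themselves load configuration through oslo.config.

```python
import configparser

# The sample configuration file shown above, as a string.
sample = """
[DEFAULT]
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
debug = true

[database]
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

# Options are addressed by section; comment lines are ignored.
print(parser.getboolean("DEFAULT", "debug"))   # True
print(parser.get("database", "connection"))
```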
Option values can have different types. The comments in the sample config files always mention the type, and the tables list the Opt type as the first item, for example (BoolOpt) Toggle....
The following types are used by OpenStack:
BoolOpt
Enables or disables an option. The allowed values are true and false.
# Enable the experimental use of database reconnect on
# connection lost (boolean value)
use_db_reconnect = false
FloatOpt
A floating point number, like 0.25 or 1000.
# Sleep time in seconds for polling an ongoing async task
# (floating point value)
task_poll_interval = 0.5
IntOpt
An integer is a number without fractional components, like 0 or 42.
# The port which the OpenStack Compute service listens on.
# (integer value)
compute_port = 8774
IPOpt
An IPv4 or IPv6 address.
# Address to bind the server. Useful when selecting a particular network
# interface. (ip address value)
bind_host = 0.0.0.0
DictOpt
A set of key-value pairs, also known as a dictionary. The pairs are separated by commas, and a colon separates each key from its value. Example: key1:value1,key2:value2.
# Parameter for l2_l3 workflow setup. (dict value)
l2_l3_setup_params = data_ip_address:192.168.200.99, \
data_ip_mask:255.255.255.0,data_port:1,gateway:192.168.200.1,ha_port:2
ListOpt
A list of values of another type, separated by commas. As an example, the following sets allowed_rpc_exception_modules to a list containing the four elements oslo.messaging.exceptions, nova.exception, cinder.exception, and exceptions:
# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call. (list value)
allowed_rpc_exception_modules = oslo.messaging.exceptions,nova.exception,cinder.exception,exceptions
MultiStrOpt
A multi-valued option is a string value that can be given more than once; all values are used.
# Driver or drivers to handle sending notifications. (multi valued)
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier
PortOpt
A TCP/IP port number. Ports can range from 1 to 65535.
# Port to which the UDP socket is bound. (port value)
# Minimum value: 1
# Maximum value: 65535
udp_port = 4952
StrOpt
Strings can be optionally enclosed with single or double quotes.
# The format for an instance that is passed with the log message.
# (string value)
instance_format = "[instance: %(uuid)s] "
Configuration options are grouped by section. Most configuration files support at least the following sections:
[DEFAULT]
Contains most of the configuration options.
[database]
Configuration options for the database that stores the state of the OpenStack service.
The configuration file supports variable substitution. After you set a configuration option, it can be referenced in later configuration values when you precede it with a $, like $OPTION.
The following example uses the values of rabbit_host and rabbit_port to define the value of the rabbit_hosts option, in this case as controller:5672.
# The RabbitMQ broker address where a single node is used.
# (string value)
rabbit_host = controller
# The RabbitMQ broker port where a single node is used.
# (integer value)
rabbit_port = 5672
# RabbitMQ HA cluster host:port pairs. (list value)
rabbit_hosts = $rabbit_host:$rabbit_port
To avoid substitution, use $$; it is replaced by a single $. For example, if your LDAP DNS password is $xkj432, specify it as follows:
ldap_dns_password = $$xkj432
The code uses the Python string.Template.safe_substitute() method to implement variable substitution. For more details on how variable substitution is resolved, see http://docs.python.org/2/library/string.html#template-strings and PEP 292.
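Both rules can be reproduced directly with the standard library. The option names below are taken from the rabbit_hosts example above, for illustration:

```python
from string import Template

# Values set earlier in the configuration file.
values = {"rabbit_host": "controller", "rabbit_port": "5672"}

# $OPTION references are replaced with the earlier values:
print(Template("$rabbit_host:$rabbit_port").safe_substitute(values))
# -> controller:5672

# $$ escapes to a literal $, so a password like $xkj432 survives:
print(Template("$$xkj432").safe_substitute(values))
# -> $xkj432
```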
To include whitespace in a configuration value, use a quoted string. For example:
ldap_dns_password='a password with spaces'
Most services and the *-manage command-line clients load the configuration file. To define an alternate location for the configuration file, pass the --config-file CONFIG_FILE parameter when you start a service or call a *-manage command.
OpenStack Newton introduces the ability to reload (or ‘mutate’) certain configuration options at runtime without a service restart. The following projects support this:
Check individual options to discover if they are mutable.
A common use case is to enable debug logging after a failure. Use the mutable config option called ‘debug’ to do this (provided log_config_append has not been set). An admin user may perform the following steps:
1. Edit the service configuration file (for example, nova.conf) and change ‘debug’ to True.
2. Send a SIGHUP signal to the service processes (for example, pkill -HUP nova).
A log message will be written out confirming that the option has been changed. If you use a CMS like Ansible, Chef, or Puppet, we recommend scripting these steps through your CMS.
OpenStack is a collection of open source project components that enable setting up cloud services. Each component uses similar configuration techniques and a common framework for INI file options.
This guide pulls together multiple references and configuration options for the following OpenStack components:
Also, OpenStack uses many shared services and libraries, such as database connections and RPC messaging, whose configuration options are described at Common configurations.
This chapter describes the common configurations for shared services and libraries.
All API requests must be performed by an authenticated agent.
The preferred authentication system is the Identity service.
To authenticate, an agent issues an authentication request to an Identity service endpoint. In response to valid credentials, the Identity service responds with an authentication token and a service catalog that contains a list of all services and endpoints available for the given token.
Multiple endpoints may be returned for each OpenStack service according to physical locations and performance/availability characteristics of different deployments.
Normally, Identity service middleware provides the X-Project-Id header based on the authentication token submitted by the service client. For this to work, clients must specify a valid authentication token in the X-Auth-Token header for each request to each OpenStack service API. The API validates authentication tokens against the Identity service before servicing each request.
If authentication is not enabled, clients must provide the X-Project-Id header themselves.
Configure the authentication and authorization strategy through these options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
auth_strategy = keystone |
(String) This determines the strategy to use for authentication: keystone or noauth2. ‘noauth2’ is designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username. |
Configuration option = Default value | Description |
---|---|
[keystone_authtoken] | |
admin_password = None |
(String) Service user password. |
admin_tenant_name = admin |
(String) Service tenant name. |
admin_token = None |
(String) This option is deprecated and may be removed in a future release. Single shared secret with the Keystone configuration used for bootstrapping a Keystone installation, or otherwise bypassing the normal authentication process. This option should not be used, use admin_user and admin_password instead. |
admin_user = None |
(String) Service username. |
auth_admin_prefix = |
(String) Prefix to prepend at the beginning of the path. Deprecated, use identity_uri. |
auth_host = 127.0.0.1 |
(String) Host providing the admin Identity API endpoint. Deprecated, use identity_uri. |
auth_port = 35357 |
(Integer) Port of the admin Identity API endpoint. Deprecated, use identity_uri. |
auth_protocol = https |
(String) Protocol of the admin Identity API endpoint. Deprecated, use identity_uri. |
auth_section = None |
(Unknown) Config Section from which to load plugin specific options |
auth_type = None |
(Unknown) Authentication type to load |
auth_uri = None |
(String) Complete “public” Identity API endpoint. This endpoint should not be an “admin” endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you’re using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. |
auth_version = None |
(String) API version of the admin Identity API endpoint. |
cache = None |
(String) Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. |
cafile = None |
(String) A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. |
certfile = None |
(String) Required if identity server requires client certificate |
check_revocations_for_cached = False |
(Boolean) If true, the revocation list will be checked for cached tokens. This requires that PKI tokens are configured on the identity server. |
delay_auth_decision = False |
(Boolean) Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. |
enforce_token_bind = permissive |
(String) Used to control the use and type of token binding. Can be set to: “disabled” to not check token binding. “permissive” (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. “strict” like “permissive” but if the bind type is unknown the token will be rejected. “required” any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. |
hash_algorithms = md5 |
(List) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance. |
http_connect_timeout = None |
(Integer) Request timeout value for communicating with Identity API server. |
http_request_max_retries = 3 |
(Integer) How many times to attempt reconnecting when communicating with the Identity API Server. |
identity_uri = None |
(String) Complete admin Identity API endpoint. This should specify the unversioned root endpoint e.g. https://localhost:35357/ |
include_service_catalog = True |
(Boolean) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. |
insecure = False |
(Boolean) Verify HTTPS connections. |
keyfile = None |
(String) Required if identity server requires client certificate |
memcache_pool_conn_get_timeout = 10 |
(Integer) (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. |
memcache_pool_dead_retry = 300 |
(Integer) (Optional) Number of seconds memcached server is considered dead before it is tried again. |
memcache_pool_maxsize = 10 |
(Integer) (Optional) Maximum total number of open connections to every memcached server. |
memcache_pool_socket_timeout = 3 |
(Integer) (Optional) Socket timeout in seconds for communicating with a memcached server. |
memcache_pool_unused_timeout = 60 |
(Integer) (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. |
memcache_secret_key = None |
(String) (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. |
memcache_security_strategy = None |
(String) (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. |
memcache_use_advanced_pool = False |
(Boolean) (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. |
memcached_servers = None |
(List) Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. |
region_name = None |
(String) The region in which the identity server can be found. |
revocation_cache_time = 10 |
(Integer) Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance. Only valid for PKI tokens. |
signing_dir = None |
(String) Directory used to cache files related to PKI tokens. |
token_cache_time = 300 |
(Integer) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. |
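Putting several of these options together, a [keystone_authtoken] section might look like the following. The host names and credentials are illustrative, not defaults:

```ini
[keystone_authtoken]
# Illustrative values only; substitute your own endpoints and credentials.
auth_uri = http://controller:5000
identity_uri = http://controller:35357/
admin_user = nova
admin_tenant_name = service
admin_password = NOVA_PASS
memcached_servers = controller:11211
token_cache_time = 300
```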
The cache configuration options allow the deployer to control how an application uses this library.
These options are supported by:
For a complete list of all available cache configuration options, see oslo.cache configuration options.
You can configure OpenStack services to use any SQLAlchemy-compatible database.
To ensure that the database schema is current, run the following command:
# SERVICE-manage db sync
To configure the connection string for the database, use the configuration option settings documented in the table Description of database configuration options.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
db_driver = SERVICE.db |
(String) DEPRECATED: The driver to use for database access |
[database] | |
backend = sqlalchemy |
(String) The back end to use for the database. |
connection = None |
(String) The SQLAlchemy connection string to use to connect to the database. |
connection_debug = 0 |
(Integer) Verbosity of SQL debugging information: 0=None, 100=Everything. |
connection_trace = False |
(Boolean) Add Python stack traces to SQL as comment strings. |
db_inc_retry_interval = True |
(Boolean) If True, increases the interval between retries of a database operation up to db_max_retry_interval. |
db_max_retries = 20 |
(Integer) Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. |
db_max_retry_interval = 10 |
(Integer) If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. |
db_retry_interval = 1 |
(Integer) Seconds between retries of a database transaction. |
idle_timeout = 3600 |
(Integer) Timeout before idle SQL connections are reaped. |
max_overflow = 50 |
(Integer) If set, use this value for max_overflow with SQLAlchemy. |
max_pool_size = None |
(Integer) Maximum number of SQL connections to keep open in a pool. |
max_retries = 10 |
(Integer) Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. |
min_pool_size = 1 |
(Integer) Minimum number of SQL connections to keep open in a pool. |
mysql_sql_mode = TRADITIONAL |
(String) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= |
pool_timeout = None |
(Integer) If set, use this value for pool_timeout with SQLAlchemy. |
retry_interval = 10 |
(Integer) Interval between retries of opening a SQL connection. |
slave_connection = None |
(String) The SQLAlchemy connection string to use to connect to the slave database. |
sqlite_db = oslo.sqlite |
(String) The file name to use with SQLite. |
sqlite_synchronous = True |
(Boolean) If True, SQLite uses synchronous mode. |
use_db_reconnect = False |
(Boolean) Enable the experimental use of database reconnect on connection lost. |
use_tpool = False |
(Boolean) Enable the experimental use of thread pooling for all DB API calls |
You can configure where the service logs events, the level of logging, and log formats.
To customize logging for the service, use the configuration option settings documented in the table Description of common logging configuration options.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
debug = False |
(Boolean) If set to true, the logging level will be set to DEBUG instead of the default INFO level. |
default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, requests.packages.urllib3.util.retry=WARN, urllib3.util.retry=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN, taskflow=WARN, keystoneauth=WARN, oslo.cache=INFO, dogpile.core.dogpile=INFO |
(List) List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. |
fatal_deprecations = False |
(Boolean) Enables or disables fatal status of deprecations. |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal |
instance_format = "[instance: %(uuid)s] " |
(String) The format for an instance that is passed with the log message. |
instance_uuid_format = "[instance: %(uuid)s] " |
(String) The format for an instance UUID that is passed with the log message. |
log_config_append = None |
(String) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, logging_context_format_string). |
log_date_format = %Y-%m-%d %H:%M:%S |
(String) Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. |
log_dir = None |
(String) (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. |
log_file = None |
(String) (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. |
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s |
(String) Format string to use for log messages with context. |
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d |
(String) Additional data to append to log message when logging level for the message is DEBUG. |
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s |
(String) Format string to use for log messages when context is undefined. |
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s |
(String) Prefix each line of exception output with this format. |
logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s |
(String) Defines the format string for %(user_identity)s that is used in logging_context_format_string. |
publish_errors = False |
(Boolean) Enables or disables publication of error events. |
syslog_log_facility = LOG_USER |
(String) Syslog facility to receive log lines. This option is ignored if log_config_append is set. |
use_stderr = True |
(Boolean) Log output to standard error. This option is ignored if log_config_append is set. |
use_syslog = False |
(Boolean) Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. |
verbose = True |
(Boolean) DEPRECATED: If set to false, the logging level will be set to WARNING instead of the default INFO level. |
watch_log_file = False |
(Boolean) Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. |
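These format strings are ordinary Python logging format strings. The following sketch shows how a string in the style of logging_default_format_string renders with the standard library (oslo.log wraps the same machinery); the logger name is hypothetical:

```python
import io
import logging

# A format string in the style of logging_default_format_string.
fmt = "%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(message)s"

# Capture log output in memory so the rendered line can be inspected.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(fmt, datefmt="%Y-%m-%d %H:%M:%S"))

log = logging.getLogger("nova.demo")
log.addHandler(handler)
log.setLevel(logging.DEBUG)
log.warning("instance launch failed")

print(stream.getvalue())
```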
The policy configuration options allow the deployer to control where the policy files are located and the default rule to apply when a requested rule is not found.
Configuration option = Default value | Description |
---|---|
[oslo_policy] | |
policy_default_rule = default |
(String) Default rule. Enforced when a requested rule is not found. |
policy_dirs = ['policy.d'] |
(Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. |
policy_file = policy.json |
(String) The JSON file that defines policies. |
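For example, a minimal policy.json matching the policy_default_rule option above might look like this. The rule names are illustrative:

```json
{
    "default": "rule:admin_or_owner",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s"
}
```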
OpenStack services use Advanced Message Queuing Protocol (AMQP), an open standard for messaging middleware. This messaging middleware enables the OpenStack services that run on multiple servers to talk to each other. OpenStack Oslo RPC supports two implementations of AMQP: RabbitMQ and ZeroMQ.
Use these options to configure the RPC messaging driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
control_exchange = openstack |
(String) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. |
default_publisher_id = None |
(String) Default publisher_id for outgoing notifications |
transport_url = None |
(String) A URL representing the messaging driver to use and its full configuration. If not set, we fall back to the rpc_backend option and driver specific configuration. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
notification_format = both |
(String) Specifies which notification format shall be used by nova. |
rpc_backend = rabbit |
(String) The messaging driver to use, defaults to rabbit. Other drivers include amqp and zmq. |
rpc_cast_timeout = -1 |
(Integer) Seconds to wait before a cast expires (TTL). The default value of -1 specifies an infinite linger period. The value of 0 specifies no linger period. Pending messages shall be discarded immediately when the socket is closed. Only supported by impl_zmq. |
rpc_conn_pool_size = 30 |
(Integer) Size of RPC connection pool. |
rpc_poll_timeout = 1 |
(Integer) The default number of seconds that poll should wait. Poll raises timeout exception when timeout expired. |
rpc_response_timeout = 60 |
(Integer) Seconds to wait for a response from a call. |
[cells] | |
rpc_driver_queue_base = cells.intercell |
(String) RPC driver queue base. When sending a message to another cell by JSON-ifying the message and making an RPC cast to ‘process_message’, a base queue is used. This option defines the base queue name to be used when communicating between cells; various topics by message type will be appended to this. Possible values: the base queue name to be used when communicating between cells. Services which consume this: nova-cells. Related options: None. |
[oslo_concurrency] | |
disable_process_locking = False |
(Boolean) Enables or disables inter-process locks. |
lock_path = None |
(String) Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. |
[oslo_messaging] | |
event_stream_topic = neutron_lbaas_event |
(String) topic name for receiving events from a queue |
[oslo_messaging_amqp] | |
allow_insecure_clients = False |
(Boolean) Accept clients using either SSL or plain TCP |
broadcast_prefix = broadcast |
(String) address prefix used when broadcasting to all servers |
container_name = None |
(String) Name for the AMQP container |
group_request_prefix = unicast |
(String) address prefix when sending to any server in group |
idle_timeout = 0 |
(Integer) Timeout for inactive connections (in seconds) |
password = |
(String) Password for message broker authentication |
sasl_config_dir = |
(String) Path to directory that contains the SASL configuration |
sasl_config_name = |
(String) Name of configuration file (without .conf suffix) |
sasl_mechanisms = |
(String) Space separated list of acceptable SASL mechanisms |
server_request_prefix = exclusive |
(String) address prefix used when sending to a specific server |
ssl_ca_file = |
(String) CA certificate PEM file to verify server certificate |
ssl_cert_file = |
(String) Identifying certificate PEM file to present to clients |
ssl_key_file = |
(String) Private key PEM file used to sign cert_file certificate |
ssl_key_password = None |
(String) Password for decrypting ssl_key_file (if encrypted) |
trace = False |
(Boolean) Debug: dump AMQP frames to stdout |
username = |
(String) User name for message broker authentication |
[oslo_messaging_notifications] | |
driver = [] |
(Multi-valued) The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop |
topics = notifications |
(List) AMQP topic used for OpenStack notifications. |
transport_url = None |
(String) A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. |
[upgrade_levels] | |
baseapi = None |
(String) Set a version cap for messages sent to the base api in any service |
OpenStack Oslo RPC uses RabbitMQ by default. The rpc_backend option is not required as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to rabbit:
rpc_backend = rabbit
You can configure messaging communication for different installation scenarios, tune retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to nova.openstack.common.notifier.rpc_notifier. The default value for sending usage data is sixty seconds plus a random number of seconds from zero to sixty.
Use the options described in the table below to configure the RabbitMQ message system.
Configuration option = Default value | Description |
---|---|
[oslo_messaging_rabbit] | |
amqp_auto_delete = False |
(Boolean) Auto-delete queues in AMQP. |
amqp_durable_queues = False |
(Boolean) Use durable queues in AMQP. |
channel_max = None |
(Integer) Maximum number of channels to allow |
default_notification_exchange = ${control_exchange}_notification |
(String) Exchange name for sending notifications |
default_notification_retry_attempts = -1 |
(Integer) Reconnecting retry count in case of connectivity problem during sending notification, -1 means infinite retry. |
default_rpc_exchange = ${control_exchange}_rpc |
(String) Exchange name for sending RPC messages |
default_rpc_retry_attempts = -1 |
(Integer) Reconnecting retry count in case of connectivity problem during sending RPC message, -1 means infinite retry. If the actual retry attempts are not 0, the RPC request could be processed more than one time |
fake_rabbit = False |
(Boolean) Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake |
frame_max = None |
(Integer) The maximum byte size for an AMQP frame |
heartbeat_interval = 1 |
(Integer) How often to send heartbeats for consumer’s connections |
heartbeat_rate = 2 |
(Integer) How many times during the heartbeat_timeout_threshold to check the heartbeat. |
heartbeat_timeout_threshold = 60 |
(Integer) Number of seconds after which the Rabbit broker is considered down if heartbeat’s keep-alive fails (0 disables the heartbeat). EXPERIMENTAL |
host_connection_reconnect_delay = 0.25 |
(Floating point) Delay before reconnecting to a host that has a connection error |
kombu_compression = None |
(String) EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions. |
kombu_failover_strategy = round-robin |
(String) Determines how the next RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. |
kombu_missing_consumer_retry_timeout = 60 |
(Integer) How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. |
kombu_reconnect_delay = 1.0 |
(Floating point) How long to wait before reconnecting in response to an AMQP consumer cancel notification. |
kombu_ssl_ca_certs = |
(String) SSL certification authority file (valid only if SSL enabled). |
kombu_ssl_certfile = |
(String) SSL cert file (valid only if SSL enabled). |
kombu_ssl_keyfile = |
(String) SSL key file (valid only if SSL enabled). |
kombu_ssl_version = |
(String) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. |
notification_listener_prefetch_count = 100 |
(Integer) Max number of not acknowledged message which RabbitMQ can send to notification listener. |
notification_persistence = False |
(Boolean) Persist notification messages. |
notification_retry_delay = 0.25 |
(Floating point) Reconnecting retry delay in case of connectivity problem during sending notification message |
pool_max_overflow = 0 |
(Integer) Maximum number of connections to create above pool_max_size. |
pool_max_size = 10 |
(Integer) Maximum number of connections to keep queued. |
pool_recycle = 600 |
(Integer) Lifetime of a connection (since creation) in seconds or None for no recycling. Expired connections are closed on acquire. |
pool_stale = 60 |
(Integer) Threshold, in seconds, at which inactive (since release) connections are considered stale, or None for no staleness check. Stale connections are closed on acquire. |
pool_timeout = 30 |
(Integer) Default number of seconds to wait for a connection to become available. |
rabbit_ha_queues = False |
(Boolean) Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}' |
rabbit_host = localhost |
(String) The RabbitMQ broker address where a single node is used. |
rabbit_hosts = $rabbit_host:$rabbit_port |
(List) RabbitMQ HA cluster host:port pairs. |
rabbit_interval_max = 30 |
(Integer) Maximum interval of RabbitMQ connection retries. Default is 30 seconds. |
rabbit_login_method = AMQPLAIN |
(String) The RabbitMQ login method. |
rabbit_max_retries = 0 |
(Integer) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count). |
rabbit_password = guest |
(String) The RabbitMQ password. |
rabbit_port = 5672 |
(Port number) The RabbitMQ broker port where a single node is used. |
rabbit_qos_prefetch_count = 0 |
(Integer) Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. |
rabbit_retry_backoff = 2 |
(Integer) How long to backoff for between retries when connecting to RabbitMQ. |
rabbit_retry_interval = 1 |
(Integer) How frequently to retry connecting with RabbitMQ. |
rabbit_transient_queues_ttl = 1800 |
(Integer) Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. |
rabbit_use_ssl = False |
(Boolean) Connect over SSL for RabbitMQ. |
rabbit_userid = guest |
(String) The RabbitMQ userid. |
rabbit_virtual_host = / |
(String) The RabbitMQ virtual host. |
rpc_listener_prefetch_count = 100 |
(Integer) Maximum number of unacknowledged messages that RabbitMQ can send to the RPC listener. |
rpc_queue_expiration = 60 |
(Integer) Time to live for rpc queues without consumers in seconds. |
rpc_reply_exchange = ${control_exchange}_rpc_reply |
(String) Exchange name for receiving RPC replies |
rpc_reply_listener_prefetch_count = 100 |
(Integer) Maximum number of unacknowledged messages that RabbitMQ can send to the RPC reply listener. |
rpc_reply_retry_attempts = -1 |
(Integer) Reconnecting retry count in case of connectivity problem during sending reply. -1 means infinite retry during rpc_timeout |
rpc_reply_retry_delay = 0.25 |
(Floating point) Reconnecting retry delay in case of connectivity problem during sending reply. |
rpc_retry_delay = 0.25 |
(Floating point) Retry delay (in seconds) for reconnecting in case of a connectivity problem while sending an RPC message. |
socket_timeout = 0.25 |
(Floating point) Socket timeout, in seconds, for the connection’s socket. |
ssl = None |
(Boolean) Enable SSL |
ssl_options = None |
(Dict) Arguments passed to ssl.wrap_socket |
tcp_user_timeout = 0.25 |
(Floating point) Set TCP_USER_TIMEOUT in seconds for connection’s socket |
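Taken together, a hardened RabbitMQ setup might look like the following sketch. The `[oslo_messaging_rabbit]` section name is an assumption (these options appear under `[DEFAULT]` or `[oslo_messaging_rabbit]` depending on the project and release), and all host names, credentials, and paths are illustrative:

```ini
[oslo_messaging_rabbit]
# Two-node HA cluster instead of the single rabbit_host
rabbit_hosts = rabbit1:5672,rabbit2:5672
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
rabbit_ha_queues = true
# Encrypt the broker connection
rabbit_use_ssl = true
kombu_ssl_ca_certs = /etc/ssl/certs/rabbit-ca.pem
# Back off between 1 and 30 seconds when reconnecting
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_interval_max = 30
```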
Use these options to configure the ZeroMQ messaging system for OpenStack Oslo RPC. ZeroMQ is not the default messaging system, so you must enable it by setting the rpc_backend option.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
rpc_zmq_bind_address = * |
(String) ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. The “host” option should point or resolve to this address. |
rpc_zmq_bind_port_retries = 100 |
(Integer) Number of retries to find free port number before fail with ZMQBindError. |
rpc_zmq_concurrency = eventlet |
(String) Type of concurrency used. Either “native” or “eventlet” |
rpc_zmq_contexts = 1 |
(Integer) Number of ZeroMQ contexts, defaults to 1. |
rpc_zmq_host = localhost |
(String) Name of this node. Must be a valid hostname, FQDN, or IP address. Must match “host” option, if running Nova. |
rpc_zmq_ipc_dir = /var/run/openstack |
(String) Directory for holding IPC sockets. |
rpc_zmq_matchmaker = redis |
(String) MatchMaker driver. |
rpc_zmq_max_port = 65536 |
(Integer) Maximum port number for the random ports range. |
rpc_zmq_min_port = 49152 |
(Port number) Minimum port number for the random ports range. |
rpc_zmq_topic_backlog = None |
(Integer) Maximum number of ingress messages to locally buffer per topic. Default is unlimited. |
use_pub_sub = True |
(Boolean) Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. |
zmq_target_expire = 120 |
(Integer) Expiration timeout, in seconds, of a name service record about an existing target (< 0 means no timeout). |
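Putting the options above together, a minimal sketch for switching Oslo RPC to ZeroMQ might look like the following; the host name and Redis address are illustrative:

```ini
[DEFAULT]
# Switch the RPC driver from the default to ZeroMQ
rpc_backend = zmq
rpc_zmq_host = controller
rpc_zmq_matchmaker = redis

[matchmaker_redis]
host = 192.0.2.10
port = 6379
```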
Cross-Origin Resource Sharing (CORS) is a mechanism that allows code running in a browser (JavaScript, for example) to make requests to a domain other than the one it originated from. OpenStack services support CORS requests.
For more information, see cross-project features in OpenStack Administrator Guide, CORS in Dashboard, and CORS in Object Storage service.
For a complete list of all available CORS configuration options, see CORS configuration options.
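The CORS options themselves are not reproduced here; as a sketch, a typical oslo.middleware `[cors]` configuration allows a single trusted browser origin (the origin value below is illustrative):

```ini
[cors]
# Allow the Dashboard origin to call this service's API from a browser
allowed_origin = https://dashboard.example.com
allow_credentials = true
```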
The Application Catalog service can be configured by changing the following options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
admin_role = admin |
(String) Role used to identify an authenticated user as administrator. |
max_header_line = 16384 |
(Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was removed by an SSL terminating proxy. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[oslo_policy] | |
policy_default_rule = default |
(String) Default rule. Enforced when a requested rule is not found. |
policy_dirs = ['policy.d'] |
(Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. |
policy_file = policy.json |
(String) The JSON file that defines policies. |
[paste_deploy] | |
config_file = None |
(String) Path to Paste config file |
flavor = None |
(String) Paste flavor |
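A short sketch combining the sections above for an Application Catalog API behind a TLS-terminating proxy; all values are illustrative, not recommendations:

```ini
[DEFAULT]
# Larger Keystone v3 tokens need a bigger header limit
max_header_line = 32768

[oslo_middleware]
# The service runs behind a proxy, so parse forwarded headers
enable_proxy_headers_parsing = true

[oslo_policy]
policy_file = policy.json
policy_dirs = policy.d
```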
Configuration option = Default value | Description |
---|---|
[cfapi] | |
auth_url = localhost:5000 |
(String) Authentication URL |
bind_host = localhost |
(String) Host for service broker |
bind_port = 8083 |
(String) Port for service broker |
packages_service = murano |
(String) Package service which should be used by service broker |
project_domain_name = default |
(String) Domain name of the project |
tenant = admin |
(String) Project for service broker |
user_domain_name = default |
(String) Domain name of the user |
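For example, a service broker listening on all interfaces and storing packages in Glare might be sketched as follows (the auth URL and project are illustrative):

```ini
[cfapi]
auth_url = http://controller:5000
bind_host = 0.0.0.0
bind_port = 8083
packages_service = glance
tenant = service
```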
These options can also be set in the murano.conf file.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
backlog = 4096 |
(Integer) Number of backlog requests to configure the socket with |
bind_host = 0.0.0.0 |
(String) Address to bind the Murano API server to. |
bind_port = 8082 |
(Port number) Port to bind the Murano API server to. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
file_server = |
(String) Set a file server. |
home_region = None |
(String) Default region name used to get services endpoints. |
metadata_dir = ./meta |
(String) Metadata dir |
publish_errors = False |
(Boolean) Enables or disables publication of error events. |
tcp_keepidle = 600 |
(Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. |
use_router_proxy = True |
(Boolean) Use ROUTER remote proxy. |
[murano] | |
api_limit_max = 100 |
(Integer) Maximum number of packages to be returned in a single pagination request |
api_workers = None |
(Integer) Number of API workers |
cacert = None |
(String) (SSL) Tells Murano to use the specified client certificate file when communicating with Murano API used by Murano engine. |
cert_file = None |
(String) (SSL) Tells Murano to use the specified client certificate file when communicating with Murano used by Murano engine. |
enabled_plugins = None |
(List) List of enabled Extension Plugins. Remove or leave commented to enable all installed plugins. |
endpoint_type = publicURL |
(String) Murano endpoint type used by Murano engine. |
insecure = False |
(Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers used by Murano engine. |
key_file = None |
(String) (SSL/SSH) Private key file name to communicate with Murano API used by Murano engine. |
limit_param_default = 20 |
(Integer) Default value for package pagination in API. |
package_size_limit = 5 |
(Integer) Maximum application package size, in MB. |
url = None |
(String) Optional Murano URL, in a format like http://0.0.0.0:8082, used by the Murano engine. |
[stats] | |
period = 5 |
(Integer) Statistics collection interval, in minutes. The default value is 5 minutes. |
Configuration option = Default value | Description |
---|---|
[engine] | |
agent_timeout = 3600 |
(Integer) Time for waiting for a response from murano agent during the deployment |
class_configs = /etc/murano/class-configs |
(String) Path to class configuration files |
disable_murano_agent = False |
(Boolean) Disallow the use of murano-agent |
enable_model_policy_enforcer = False |
(Boolean) Enable model policy enforcer using Congress |
enable_packages_cache = True |
(Boolean) Enables murano-engine to persist packages downloaded during deployments on disk. The packages are re-used for subsequent deployments. |
engine_workers = None |
(Integer) Number of engine workers |
load_packages_from = |
(List) List of directories to load local packages from. If not provided, packages will be loaded only from the API. |
packages_cache = None |
(String) Location (directory) for Murano package cache. |
packages_service = murano |
(String) The service to store murano packages: murano (stands for legacy behavior using murano-api) or glance (stands for glance-glare artifact service) |
use_trusts = True |
(Boolean) Create resources using trust token rather than user’s token |
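As an illustrative sketch of the engine options above, the following lowers the agent timeout and loads packages from a local directory (the path and worker count are examples only):

```ini
[engine]
# Wait ten minutes for the agent instead of the one-hour default
agent_timeout = 600
disable_murano_agent = false
# Also load packages from a local directory
load_packages_from = /opt/murano/packages
engine_workers = 4
```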
Configuration option = Default value | Description |
---|---|
[glare] | |
ca_file = None |
(String) (SSL) Tells Murano to use the specified certificate file to verify the peer running Glare API. |
cert_file = None |
(String) (SSL) Tells Murano to use the specified client certificate file when communicating with Glare. |
endpoint_type = publicURL |
(String) Glare endpoint type. |
insecure = False |
(Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Glare API. |
key_file = None |
(String) (SSL/SSH) Private key file name to communicate with Glare API. |
url = None |
(String) Optional Glare URL, in a format like http://0.0.0.0:9494, used to reach the Glare API. |
Configuration option = Default value | Description |
---|---|
[heat] | |
ca_file = None |
(String) (SSL) Tells Murano to use the specified certificate file to verify the peer running Heat API. |
cert_file = None |
(String) (SSL) Tells Murano to use the specified client certificate file when communicating with Heat. |
endpoint_type = publicURL |
(String) Heat endpoint type. |
insecure = False |
(Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Heat API. |
key_file = None |
(String) (SSL/SSH) Private key file name to communicate with Heat API. |
stack_tags = murano |
(List) List of tags to be assigned to heat stacks created during environment deployment. |
url = None |
(String) Optional heat endpoint override |
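For example, pointing Murano at the internal Heat endpoint with a custom CA and extra stack tags might be sketched as follows (the path and tag values are illustrative):

```ini
[heat]
endpoint_type = internalURL
ca_file = /etc/ssl/certs/heat-ca.pem
# Tag stacks created by Murano for easier identification
stack_tags = murano,app-catalog
```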
Configuration option = Default value | Description |
---|---|
[mistral] | |
ca_cert = None |
(String) (SSL) Tells Murano to use the specified client certificate file when communicating with Mistral. |
endpoint_type = publicURL |
(String) Mistral endpoint type. |
insecure = False |
(Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Mistral. |
service_type = workflowv2 |
(String) Mistral service type. |
url = None |
(String) Optional mistral endpoint override |
Configuration option = Default value | Description |
---|---|
[networking] | |
create_router = True |
(Boolean) This option will create a router when one with “router_name” does not exist |
default_dns = |
(List) List of default DNS nameservers to be assigned to created Networks |
driver = None |
(String) Network driver to use. Options are neutron or nova. If not provided, the driver will be detected. |
env_ip_template = 10.0.0.0 |
(String) Template IP address for generating environment subnet CIDRs. |
external_network = ext-net |
(String) ID or name of the external network for routers to connect to |
max_environments = 250 |
(Integer) Maximum number of environments that use a single router per tenant |
max_hosts = 250 |
(Integer) Maximum number of VMs per environment |
network_config_file = netconfig.yaml |
(String) If provided, networking configuration will be taken from this file. |
router_name = murano-default-router |
(String) Name of the router that will be used to join all networks created by Murano. |
[neutron] | |
ca_cert = None |
(String) (SSL) Tells Murano to use the specified client certificate file when communicating with Neutron. |
endpoint_type = publicURL |
(String) Neutron endpoint type. |
insecure = False |
(Boolean) This option explicitly allows Murano to perform “insecure” SSL connections and transfers with Neutron API. |
url = None |
(String) Optional neutron endpoint override |
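Combining the options above, a deployment whose external network is named differently from the default might use a sketch like this (network name and DNS servers are illustrative):

```ini
[networking]
external_network = public
router_name = murano-default-router
default_dns = 8.8.8.8,8.8.4.4
max_hosts = 100

[neutron]
endpoint_type = internalURL
```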
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
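Because the Redis location options are deprecated in favor of [DEFAULT]/transport_url, a current-style configuration touches only the remaining tuning knobs; the values below are illustrative:

```ini
[matchmaker_redis]
# Non-deprecated tuning options; the Redis location itself now
# belongs in [DEFAULT]/transport_url per the deprecation notes
sentinel_group_name = oslo-messaging-zeromq
check_timeout = 20000
socket_timeout = 10000
wait_timeout = 2000
```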
Option = default value | (Type) Help string |
---|---|
[cfapi] packages_service = murano |
(StrOpt) Package service which should be used by service broker |
[engine] engine_workers = None |
(IntOpt) Number of engine workers |
[murano] api_workers = None |
(IntOpt) Number of API workers |
[networking] driver = None |
(StrOpt) Network driver to use. Options are neutron or nova. If not provided, the driver will be detected. |
Deprecated option | New Option |
---|---|
[DEFAULT] use_syslog |
None |
[engine] workers |
[engine] engine_workers |
This chapter describes the Application Catalog service configuration options.
Note
The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.
The following options allow configuration of the APIs that Bare Metal service supports.
Configuration option = Default value | Description |
---|---|
[api] | |
api_workers = None |
(Integer) Number of workers for OpenStack Ironic API service. The default is equal to the number of CPUs available if that can be determined, else a default worker count of 1 is returned. |
enable_ssl_api = False |
(Boolean) Enable the integrated stand-alone API to service requests via HTTPS instead of HTTP. If there is a front-end service performing HTTPS offloading from the service, this option should be False; note, you will want to change public API endpoint to represent SSL termination URL with ‘public_endpoint’ option. |
host_ip = 0.0.0.0 |
(String) The IP address on which ironic-api listens. |
max_limit = 1000 |
(Integer) The maximum number of items returned in a single response from a collection resource. |
port = 6385 |
(Port number) The TCP port on which ironic-api listens. |
public_endpoint = None |
(String) Public URL to use when building the links to the API resources (for example, “https://ironic.rocks:6384”). If None, the links will be built using the request’s host URL. If the API is operating behind a proxy, you will want to change this to represent the proxy’s URL. Defaults to None. |
ramdisk_heartbeat_timeout = 300 |
(Integer) Maximum interval (in seconds) for agent heartbeats. |
restrict_lookup = True |
(Boolean) Whether to restrict the lookup API to only nodes in certain states. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[oslo_versionedobjects] | |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal |
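A sketch of the API options above for an ironic-api service behind an SSL-terminating proxy; the worker count and endpoint URL are illustrative:

```ini
[api]
host_ip = 0.0.0.0
port = 6385
api_workers = 4
# TLS is offloaded by the front-end proxy
enable_ssl_api = false
public_endpoint = https://ironic.example.com:6385

[oslo_middleware]
enable_proxy_headers_parsing = true
```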
The following tables provide a comprehensive list of the Bare Metal service configuration options.
Configuration option = Default value | Description |
---|---|
[agent] | |
agent_api_version = v1 |
(String) API version to use for communicating with the ramdisk agent. |
deploy_logs_collect = on_failure |
(String) Whether Ironic should collect the deployment logs on deployment failure (on_failure), always or never. |
deploy_logs_local_path = /var/log/ironic/deploy |
(String) The path to the directory where the logs should be stored, used when the deploy_logs_storage_backend is configured to “local”. |
deploy_logs_storage_backend = local |
(String) The name of the storage backend where the logs will be stored. |
deploy_logs_swift_container = ironic_deploy_logs_container |
(String) The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to “swift”. |
deploy_logs_swift_days_to_expire = 30 |
(Integer) Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to “swift”. |
manage_agent_boot = True |
(Boolean) Whether Ironic will manage booting of the agent ramdisk. If set to False, you will need to configure your mechanism to allow booting the agent ramdisk. |
memory_consumed_by_agent = 0 |
(Integer) The memory size in MiB consumed by agent when it is booted on a bare metal node. This is used for checking if the image can be downloaded and deployed on the bare metal node after booting agent ramdisk. This may be set according to the memory consumed by the agent ramdisk image. |
post_deploy_get_power_state_retries = 6 |
(Integer) Number of times to retry getting power state to check if bare metal node has been powered off after a soft power off. |
post_deploy_get_power_state_retry_interval = 5 |
(Integer) Amount of time (in seconds) to wait between polling power state after trigger soft poweroff. |
stream_raw_images = True |
(Boolean) Whether the agent ramdisk should stream raw images directly onto the disk or not. By streaming raw images directly onto the disk the agent ramdisk will not spend time copying the image to a tmpfs partition (therefore consuming less memory) prior to writing it to the disk. Unless the disk where the image will be copied to is really slow, this option should be set to True. Defaults to True. |
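For example, always collecting deployment logs and expiring them from Swift after a month could be sketched as follows (the container name matches the documented default; the rest is illustrative):

```ini
[agent]
deploy_logs_collect = always
deploy_logs_storage_backend = swift
deploy_logs_swift_container = ironic_deploy_logs_container
deploy_logs_swift_days_to_expire = 30
```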
Configuration option = Default value | Description |
---|---|
[amt] | |
action_wait = 10 |
(Integer) Amount of time (in seconds) to wait, before retrying an AMT operation |
awake_interval = 60 |
(Integer) Time interval (in seconds) for successive awake call to AMT interface, this depends on the IdleTimeout setting on AMT interface. AMT Interface will go to sleep after 60 seconds of inactivity by default. IdleTimeout=0 means AMT will not go to sleep at all. Setting awake_interval=0 will disable awake call. |
max_attempts = 3 |
(Integer) Maximum number of times to attempt an AMT operation, before failing |
protocol = http |
(String) Protocol used for AMT endpoint |
Configuration option = Default value | Description |
---|---|
[audit] | |
audit_map_file = /etc/ironic/ironic_api_audit_map.conf |
(String) Path to audit map file for ironic-api service. Used only when API audit is enabled. |
enabled = False |
(Boolean) Enable auditing of API requests (for ironic-api service). |
ignore_req_list = None |
(String) Comma separated list of Ironic REST API HTTP methods to be ignored during audit. For example: auditing will not be done on any GET or POST requests if this is set to “GET,POST”. It is used only when API audit is enabled. |
namespace = openstack |
(String) Namespace prefix for generated ID. |
[audit_middleware_notifications] | |
driver = None |
(String) The Driver to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop. If not specified, then value from oslo_messaging_notifications conf section is used. |
topics = None |
(List) List of AMQP topics used for OpenStack notifications. If not specified, then value from oslo_messaging_notifications conf section is used. |
transport_url = None |
(String) A URL representing messaging driver to use for notification. If not specified, we fall back to the same configuration used for RPC. |
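As an illustrative sketch, enabling API auditing while skipping read-only requests might look like this (the notification driver choice is an example, not a recommendation):

```ini
[audit]
enabled = true
audit_map_file = /etc/ironic/ironic_api_audit_map.conf
# Do not audit read-only requests
ignore_req_list = GET

[audit_middleware_notifications]
driver = messagingv2
```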
Configuration option = Default value | Description |
---|---|
[cimc] | |
action_interval = 10 |
(Integer) Amount of time in seconds to wait in between power operations |
max_retry = 6 |
(Integer) Number of times a power operation needs to be retried |
[cisco_ucs] | |
action_interval = 5 |
(Integer) Amount of time in seconds to wait in between power operations |
max_retry = 6 |
(Integer) Number of times a power operation needs to be retried |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
bindir = /usr/local/bin |
(String) Directory where ironic binaries are installed. |
debug_tracebacks_in_api = False |
(Boolean) Return server tracebacks in the API response for any error responses. WARNING: this is insecure and should not be used in a production environment. |
default_network_interface = None |
(String) Default network interface to be used for nodes that do not have network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint. |
enabled_drivers = pxe_ipmitool |
(List) Specify the list of drivers to load during service initialization. Missing drivers, or drivers which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented drivers. A complete list of drivers present on your system may be found by enumerating the “ironic.drivers” entrypoint. An example may be found in the developer documentation online. |
enabled_network_interfaces = flat, noop |
(List) Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the “ironic.hardware.interfaces.network” entrypoint. This value must be the same on all ironic-conductor and ironic-api services, because it is used by ironic-api service to validate a new or updated node’s network_interface value. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
fatal_exception_format_errors = False |
(Boolean) Used if there is a formatting error when generating an exception message (a programming error). If True, raise an exception; if False, use the unformatted message. |
force_raw_images = True |
(Boolean) If True, convert backing images to “raw” disk image format. |
grub_config_template = $pybasedir/common/grub_conf.template |
(String) Template file for grub configuration file. |
hash_distribution_replicas = 1 |
(Integer) [Experimental Feature] Number of hosts to map onto each hash partition. Setting this to more than one will cause additional conductor services to prepare deployment environments and potentially allow the Ironic cluster to recover more quickly if a conductor instance is terminated. |
hash_partition_exponent = 5 |
(Integer) Exponent to determine the number of hash partitions to use when distributing load across conductors. Larger values will result in more even distribution of load and less load when rebalancing the ring, but more memory usage. The number of partitions per conductor is (2^hash_partition_exponent). This determines the granularity of rebalancing: given 10 hosts and an exponent of 2, there are 40 partitions in the ring. A few thousand partitions should make rebalancing smooth in most cases. The default is suitable for up to a few hundred conductors. Too many partitions have a CPU impact. |
hash_ring_reset_interval = 180 |
(Integer) Interval (in seconds) between hash ring resets. |
host = localhost |
(String) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address. |
isolinux_bin = /usr/lib/syslinux/isolinux.bin |
(String) Path to isolinux binary file. |
isolinux_config_template = $pybasedir/common/isolinux_config.template |
(String) Template file for isolinux configuration file. |
my_ip = 127.0.0.1 |
(String) IP address of this host. If unset, will determine the IP programmatically. If unable to do so, will use “127.0.0.1”. |
notification_level = None |
(String) Specifies the minimum level for which to send notifications. If not set, no notifications will be sent. The default is for this option to be unset. |
parallel_image_downloads = False |
(Boolean) Run image downloads and raw format conversions in parallel. |
pybasedir = /usr/lib/python/site-packages/ironic/ironic |
(String) Directory where the ironic python module is installed. |
rootwrap_config = /etc/ironic/rootwrap.conf |
(String) Path to the rootwrap configuration file to use for running commands as root. |
state_path = $pybasedir |
(String) Top-level directory for maintaining ironic’s state. |
tempdir = /tmp |
(String) Temporary working directory, default is Python temp dir. |
[ironic_lib] | |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal. |
root_helper = sudo ironic-rootwrap /etc/ironic/rootwrap.conf |
(String) Command that is prefixed to commands that are run as root. If not specified, no commands are run as root. |
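Drawing on the common options above, a sketch of the `[DEFAULT]` section for a conductor host follows; the additional driver name, IP address, and paths are illustrative:

```ini
[DEFAULT]
# Load an extra driver alongside the default
enabled_drivers = pxe_ipmitool,agent_ipmitool
enabled_network_interfaces = flat,noop
default_network_interface = flat
my_ip = 192.0.2.21
tempdir = /var/lib/ironic/tmp
```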
Configuration option = Default value | Description |
---|---|
[conductor] | |
api_url = None |
(String) URL of Ironic API service. If not set ironic can get the current value from the keystone service catalog. |
automated_clean = True |
(Boolean) Enables or disables automated cleaning. Automated cleaning is a configurable set of steps, such as erasing disk drives, that are performed on the node to ensure it is in a baseline state and ready to be deployed to. This is done after instance deletion as well as during the transition from a “manageable” to “available” state. When enabled, the particular steps performed to clean a node depend on which driver that node is managed by; see the individual driver’s documentation for details. NOTE: The introduction of the cleaning operation causes instance deletion to take significantly longer. In an environment where all tenants are trusted (eg, because there is only one tenant), this option could be safely disabled. |
check_provision_state_interval = 60 |
(Integer) Interval between checks of provision timeouts, in seconds. |
clean_callback_timeout = 1800 |
(Integer) Timeout (seconds) to wait for a callback from the ramdisk doing the cleaning. If the timeout is reached the node will be put in the “clean failed” provision state. Set to 0 to disable timeout. |
configdrive_swift_container = ironic_configdrive_container |
(String) Name of the Swift container to store config drive data. Used when configdrive_use_swift is True. |
configdrive_use_swift = False |
(Boolean) Whether to upload the config drive to Swift. |
deploy_callback_timeout = 1800 |
(Integer) Timeout (seconds) to wait for a callback from a deploy ramdisk. Set to 0 to disable timeout. |
force_power_state_during_sync = True |
(Boolean) During sync_power_state, should the hardware power state be set to the state recorded in the database (True) or should the database be updated based on the hardware state (False). |
heartbeat_interval = 10 |
(Integer) Seconds between conductor heart beats. |
heartbeat_timeout = 60 |
(Integer) Maximum time (in seconds) since the last check-in of a conductor. A conductor is considered inactive when this time has been exceeded. |
inspect_timeout = 1800 |
(Integer) Timeout (seconds) for waiting for node inspection. 0 - unlimited. |
node_locked_retry_attempts = 3 |
(Integer) Number of attempts to grab a node lock. |
node_locked_retry_interval = 1 |
(Integer) Seconds to sleep between node lock attempts. |
periodic_max_workers = 8 |
(Integer) Maximum number of worker threads that can be started simultaneously by a periodic task. Should be less than RPC thread pool size. |
power_state_sync_max_retries = 3 |
(Integer) During sync_power_state failures, limit the number of times Ironic should try syncing the hardware node power state with the node power state in DB |
send_sensor_data = False |
(Boolean) Enable sending sensor data message via the notification bus |
send_sensor_data_interval = 600 |
(Integer) Seconds between conductor sending sensor data message to ceilometer via the notification bus. |
send_sensor_data_types = ALL |
(List) List of comma separated meter types which need to be sent to Ceilometer. The default value, “ALL”, is a special value meaning send all the sensor data. |
sync_local_state_interval = 180 |
(Integer) When conductors join or leave the cluster, existing conductors may need to update any persistent local state as nodes are moved around the cluster. This option controls how often, in seconds, each conductor will check for nodes that it should “take over”. Set it to a negative value to disable the check entirely. |
sync_power_state_interval = 60 |
(Integer) Interval between syncing the node power state to the database, in seconds. |
workers_pool_size = 100 |
(Integer) The size of the workers greenthread pool. Note that 2 threads will be reserved by the conductor itself for handling heart beats and periodic tasks. |
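For example, keeping automated cleaning enabled and sending sensor data over the notification bus might be sketched as follows (timeouts shown are the documented defaults, repeated for clarity):

```ini
[conductor]
automated_clean = true
clean_callback_timeout = 1800
deploy_callback_timeout = 1800
# Emit sensor data to the notification bus every 10 minutes
send_sensor_data = true
send_sensor_data_interval = 600
```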
Configuration option = Default value | Description |
---|---|
[console] | |
subprocess_checking_interval = 1 | (Integer) Time interval (in seconds) for checking the status of the console subprocess.
subprocess_timeout = 10 | (Integer) Time (in seconds) to wait for the console subprocess to start.
terminal = shellinaboxd | (String) Path to the serial console terminal program. Used only by the Shell In A Box console.
terminal_cert_dir = None | (String) Directory containing the terminal SSL cert (PEM) for serial console access. Used only by the Shell In A Box console.
terminal_pid_dir = None | (String) Directory for holding terminal pid files. If not specified, the temporary directory will be used.
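As a sketch, serving consoles through Shell In A Box combines the options above; the certificate and pid directories below are illustrative, not defaults:

```ini
[console]
# Path to the Shell In A Box binary on the conductor host.
terminal = shellinaboxd
# Illustrative paths; adjust for your deployment.
terminal_cert_dir = /etc/ironic/shellinabox-certs
terminal_pid_dir = /var/run/ironic/console
```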
Configuration option = Default value | Description |
---|---|
[drac] | |
query_raid_config_job_status_interval = 120 | (Integer) Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
pecan_debug = False | (Boolean) Enable pecan debug mode. WARNING: this is insecure and should not be used in a production environment.
Configuration option = Default value | Description |
---|---|
[deploy] | |
continue_if_disk_secure_erase_fails = False | (Boolean) Defines what to do if an ATA secure erase operation fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in the clean failed state. If True, shred will be invoked and cleaning will continue.
erase_devices_metadata_priority = None | (Integer) Priority of the in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, the priority set in the ramdisk is used (defaults to 99 for the GenericHardwareManager). If set to 0, the step will not run during cleaning.
erase_devices_priority = None | (Integer) Priority of the in-band erase devices clean step, run via the Ironic Python Agent ramdisk. If unset, the priority set in the ramdisk is used (defaults to 10 for the GenericHardwareManager). If set to 0, the step will not run during cleaning.
http_root = /httpboot | (String) The ironic-conductor node's HTTP root path.
http_url = None | (String) The ironic-conductor node's HTTP server URL. Example: http://192.1.2.3:8080
power_off_after_deploy_failure = True | (Boolean) Whether to power off a node after deploy failure. Defaults to True.
shred_final_overwrite_with_zeros = True | (Boolean) Whether to write zeros to a node's block devices after writing random data. Zeros are written even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True.
shred_random_overwrite_iterations = 1 | (Integer) During shred, overwrite all block devices N times with random data. This is only used if a device could not be ATA Secure Erased. Defaults to 1.
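For example, hosting deploy images from the conductor's own HTTP server uses `http_root` and `http_url` together; the URL below is illustrative:

```ini
[deploy]
http_root = /httpboot
# Illustrative URL; must point at the HTTP server that serves http_root.
http_url = http://192.0.2.10:8080
```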
Configuration option = Default value | Description |
---|---|
[dhcp] | |
dhcp_provider = neutron | (String) DHCP provider to use. "neutron" uses Neutron, and "none" uses a no-op provider.
Configuration option = Default value | Description |
---|---|
[disk_partitioner] | |
check_device_interval = 1 | (Integer) Interval (in seconds) at which Ironic checks for activity on the attached iSCSI device after it has completed creating the partition table and before it copies the image to the node.
check_device_max_retries = 20 | (Integer) The maximum number of times to check that the device is not accessed by another process. If the device is still busy after that, the disk partitioning will be treated as having failed.
[disk_utils] | |
bios_boot_partition_size = 1 | (Integer) Size of the BIOS Boot partition in MiB when configuring GPT-partitioned systems for local boot in BIOS mode.
dd_block_size = 1M | (String) Block size to use when writing to the node's disk.
efi_system_partition_size = 200 | (Integer) Size of the EFI system partition in MiB when configuring UEFI systems for local boot.
iscsi_verify_attempts = 3 | (Integer) Maximum number of attempts to verify that an iSCSI connection is active, sleeping 1 second between attempts.
Configuration option = Default value | Description |
---|---|
[glance] | |
allowed_direct_url_schemes = | (List) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
auth_section = None | (Unknown) Config section from which to load plugin-specific options.
auth_strategy = keystone | (String) Authentication strategy to use when connecting to glance.
auth_type = None | (Unknown) Authentication type to load.
cafile = None | (String) PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile = None | (String) PEM encoded client certificate cert file.
glance_api_insecure = False | (Boolean) Allow insecure SSL (HTTPS) requests to glance.
glance_api_servers = None | (List) A list of the glance API servers available to ironic. Prefix with https:// for SSL-based glance API servers. Format is [hostname\|IP]:port.
glance_cafile = None | (String) Optional path to a CA certificate bundle to be used to validate the SSL certificate served by glance. It is used when glance_api_insecure is set to False.
glance_host = $my_ip | (String) Default glance hostname or IP address.
glance_num_retries = 0 | (Integer) Number of retries when downloading an image from glance.
glance_port = 9292 | (Port number) Default glance port.
glance_protocol = http | (String) Default protocol to use when connecting to glance. Set to https for SSL.
insecure = False | (Boolean) If true, skip verification of server certificates on HTTPS connections.
keyfile = None | (String) PEM encoded client certificate key file.
swift_account = None | (String) The account that Glance uses to communicate with Swift. The format is "AUTH_uuid". "uuid" is the UUID for the account configured in glance-api.conf. Required for temporary URLs when the Glance backend is Swift. For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". Swift temporary URL format: "endpoint_url/api_version/[account/]container/object_id".
swift_api_version = v1 | (String) The Swift API version to create a temporary URL for. Defaults to "v1". Swift temporary URL format: "endpoint_url/api_version/[account/]container/object_id".
swift_container = glance | (String) The Swift container Glance is configured to store its images in. Defaults to "glance", which is the default in glance-api.conf. Swift temporary URL format: "endpoint_url/api_version/[account/]container/object_id".
swift_endpoint_url = None | (String) The "endpoint" (scheme, hostname, optional port) for the Swift URL of the form "endpoint_url/api_version/[account/]container/object_id". Do not include a trailing "/". For example, use "https://swift.example.com". If using RADOS Gateway, the endpoint may also contain the /swift path; if it does not, it will be appended. Required for temporary URLs.
swift_store_multiple_containers_seed = 0 | (Integer) This should match the config of the same name in the Glance configuration file. When set to 0, a single-tenant store will only use one container to store all images. When set to an integer value between 1 and 32, a single-tenant store will use multiple containers to store images, and this value will determine how many containers are created.
swift_temp_url_cache_enabled = False | (Boolean) Whether to cache generated Swift temporary URLs. Setting it to true is only useful when an image caching proxy is used. Defaults to False.
swift_temp_url_duration = 1200 | (Integer) The length of time in seconds that the temporary URL will be valid for. Defaults to 20 minutes. If some deploys get a 401 response code when trying to download from the temporary URL, try raising this duration. This value must be greater than or equal to the value of swift_temp_url_expected_download_start_delay.
swift_temp_url_expected_download_start_delay = 0 | (Integer) The delay (in seconds) from the time of the deploy request (when the Swift temporary URL is generated) to when the IPA ramdisk starts up and the URL is used for the image download. This value is used to check whether the Swift temporary URL duration is large enough to let the image download begin. Also, if temporary URL caching is enabled, this determines whether a cached entry will still be valid when the download starts. The swift_temp_url_duration value must be greater than or equal to this option's value. Defaults to 0.
swift_temp_url_key = None | (String) The secret token given to Swift to allow temporary URL downloads. Required for temporary URLs.
temp_url_endpoint_type = swift | (String) Type of endpoint to use for temporary URLs. If the Glance backend is Swift, use "swift"; if it is Ceph with RADOS Gateway, use "radosgw".
timeout = None | (Integer) Timeout value for HTTP requests.
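Putting the Swift temporary URL options together, a minimal sketch might look like the following; the endpoint and secret key are illustrative, and the account value reuses the example format from the option description above:

```ini
[glance]
temp_url_endpoint_type = swift
# Illustrative endpoint; no trailing "/".
swift_endpoint_url = https://swift.example.com
swift_api_version = v1
# Example account format from the swift_account description; SECRET_KEY is a placeholder.
swift_account = AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30
swift_temp_url_key = SECRET_KEY
swift_temp_url_duration = 1200
```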
Configuration option = Default value | Description |
---|---|
[iboot] | |
max_retry = 3 | (Integer) Maximum number of retries for iBoot operations.
reboot_delay = 5 | (Integer) Time (in seconds) to sleep between powering off and powering on again when rebooting.
retry_interval = 1 | (Integer) Time (in seconds) between retry attempts for iBoot operations.
Configuration option = Default value | Description |
---|---|
[ilo] | |
ca_file = None | (String) CA certificate file to validate iLO.
clean_priority_clear_secure_boot_keys = 0 | (Integer) Priority for the clear_secure_boot_keys clean step. This step is not enabled by default. It can be enabled to clear all secure boot keys enrolled with iLO.
clean_priority_erase_devices = None | (Integer) DEPRECATED: Priority for the erase devices clean step. If unset, it defaults to 10. If set to 0, the step will be disabled and will not run during cleaning. This configuration option is duplicated by [deploy] erase_devices_priority; please use that instead.
clean_priority_reset_bios_to_default = 10 | (Integer) Priority for the reset_bios_to_default clean step.
clean_priority_reset_ilo = 0 | (Integer) Priority for the reset_ilo clean step.
clean_priority_reset_ilo_credential = 30 | (Integer) Priority for the reset_ilo_credential clean step. This step requires the "ilo_change_password" parameter to be updated in the node's driver_info with the new password.
clean_priority_reset_secure_boot_keys_to_default = 20 | (Integer) Priority for the reset_secure_boot_keys clean step. This step will reset the secure boot keys to manufacturing defaults.
client_port = 443 | (Port number) Port to be used for iLO operations.
client_timeout = 60 | (Integer) Timeout (in seconds) for iLO operations.
default_boot_mode = auto | (String) Default boot mode to be used in provisioning when the "boot_mode" capability is not provided in the "properties/capabilities" of the node. The default is "auto" for backward compatibility. When "auto" is specified, the default boot mode will be selected based on the boot mode settings on the system.
power_retry = 6 | (Integer) Number of times a power operation needs to be retried.
power_wait = 2 | (Integer) Amount of time in seconds to wait between power operations.
swift_ilo_container = ironic_ilo_container | (String) The Swift iLO container to store data.
swift_object_expiry_timeout = 900 | (Integer) Amount of time in seconds for Swift objects to auto-expire.
use_web_server_for_images = False | (Boolean) Set this to True to use an HTTP web server to host floppy images and the generated boot ISO. This requires http_root and http_url to be configured in the [deploy] section of the config file. If this is set to False, Ironic will use Swift to host the floppy images and the generated boot ISO.
Configuration option = Default value | Description |
---|---|
[inspector] | |
auth_section = None | (Unknown) Config section from which to load plugin-specific options.
auth_type = None | (Unknown) Authentication type to load.
cafile = None | (String) PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile = None | (String) PEM encoded client certificate cert file.
enabled = False | (Boolean) Whether to enable inspection using ironic-inspector.
insecure = False | (Boolean) If true, skip verification of server certificates on HTTPS connections.
keyfile = None | (String) PEM encoded client certificate key file.
service_url = None | (String) ironic-inspector HTTP endpoint. If this is not set, the service catalog will be used.
status_check_period = 60 | (Integer) Period (in seconds) to check the status of nodes on inspection.
timeout = None | (Integer) Timeout value for HTTP requests.
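For instance, a minimal section enabling in-band inspection via ironic-inspector; the endpoint URL is illustrative and can be omitted to fall back to the service catalog:

```ini
[inspector]
enabled = True
# Illustrative endpoint; if unset, ironic-inspector is looked up in the service catalog.
service_url = http://192.0.2.20:5050
status_check_period = 60
```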
Configuration option = Default value | Description |
---|---|
[ipmi] | |
min_command_interval = 5 | (Integer) Minimum time, in seconds, between IPMI operations sent to a server. With some hardware there is a risk that setting this too low may cause the BMC to crash. The recommended setting is 5 seconds.
retry_timeout = 60 | (Integer) Maximum time in seconds to retry IPMI operations. There is a tradeoff when setting this value: setting it too low may cause older BMCs to crash and require a hard reset, while setting it too high can cause the sync power state periodic task to hang when there are slow or unresponsive BMCs.
Configuration option = Default value | Description |
---|---|
[irmc] | |
auth_method = basic | (String) Authentication method to be used for iRMC operations.
client_timeout = 60 | (Integer) Timeout (in seconds) for iRMC operations.
port = 443 | (Port number) Port to be used for iRMC operations.
remote_image_server = None | (String) IP address of the remote image server.
remote_image_share_name = share | (String) Share name of remote_image_server.
remote_image_share_root = /remote_image_share_root | (String) The ironic-conductor node's "NFS" or "CIFS" root path.
remote_image_share_type = CIFS | (String) Share type of the virtual media.
remote_image_user_domain = | (String) Domain name of remote_image_user_name.
remote_image_user_name = None | (String) User name for remote_image_server.
remote_image_user_password = None | (String) Password of remote_image_user_name.
sensor_method = ipmitool | (String) Sensor data retrieval method.
snmp_community = public | (String) SNMP community. Required for versions "v1" and "v2c".
snmp_port = 161 | (Port number) SNMP port.
snmp_security = None | (String) SNMP security name. Required for version "v3".
snmp_version = v2c | (String) SNMP protocol version.
Configuration option = Default value | Description |
---|---|
[iscsi] | |
portal_port = 3260 | (Port number) The port number on which the iSCSI portal listens for incoming connections.
Configuration option = Default value | Description |
---|---|
[keystone] | |
region_name = None | (String) The region used for getting endpoints of OpenStack services.
Configuration option = Default value | Description |
---|---|
[metrics] | |
agent_backend = noop | (String) Backend for the agent ramdisk to use for metrics. Default possible backends are "noop" and "statsd".
agent_global_prefix = None | (String) Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
agent_prepend_host = False | (Boolean) Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
agent_prepend_host_reverse = True | (Boolean) Split the prepended host value by "." and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names).
agent_prepend_uuid = False | (Boolean) Prepend the node's Ironic UUID to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name.
backend = noop | (String) Backend to use for the metrics system.
global_prefix = None | (String) Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
prepend_host = False | (Boolean) Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name.
prepend_host_reverse = True | (Boolean) Split the prepended host value by "." and reverse it (to better match the reverse hierarchical form of domain names).
Configuration option = Default value | Description |
---|---|
[metrics_statsd] | |
agent_statsd_host = localhost | (String) Host for the agent ramdisk to use with the statsd backend. This must be accessible from the networks the agent is booted on.
agent_statsd_port = 8125 | (Port number) Port for the agent ramdisk to use with the statsd backend.
statsd_host = localhost | (String) Host to use with the statsd backend.
statsd_port = 8125 | (Port number) Port to use with the statsd backend.
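For example, to emit both conductor and agent-ramdisk metrics to a statsd daemon, the two sections combine as below; the hostname is illustrative and must be reachable from the conductor and, for the agent options, from the networks the ramdisk boots on:

```ini
[metrics]
backend = statsd
agent_backend = statsd

[metrics_statsd]
# Illustrative host; 8125 is the default statsd port.
statsd_host = statsd.example.com
statsd_port = 8125
agent_statsd_host = statsd.example.com
agent_statsd_port = 8125
```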
Configuration option = Default value | Description |
---|---|
[neutron] | |
auth_section = None | (Unknown) Config section from which to load plugin-specific options.
auth_strategy = keystone | (String) Authentication strategy to use when connecting to neutron. Running neutron in noauth mode (related to but not affected by this setting) is insecure and should only be used for testing.
auth_type = None | (Unknown) Authentication type to load.
cafile = None | (String) PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile = None | (String) PEM encoded client certificate cert file.
cleaning_network_uuid = None | (String) Neutron network UUID for the ramdisk to be booted into for cleaning nodes. Required for the "neutron" network interface. It is also required when cleaning nodes with the "flat" network interface or the "neutron" DHCP provider.
insecure = False | (Boolean) If true, skip verification of server certificates on HTTPS connections.
keyfile = None | (String) PEM encoded client certificate key file.
port_setup_delay = 0 | (Integer) Delay value (in seconds) to wait for Neutron agents to set up sufficient DHCP configuration for the port.
provisioning_network_uuid = None | (String) Neutron network UUID for the ramdisk to be booted into for provisioning nodes. Required for the "neutron" network interface.
retries = 3 | (Integer) Client retries in the case of a failed request.
timeout = None | (Integer) Timeout value for HTTP requests.
url = None | (String) URL for connecting to neutron. The default value translates to 'http://$my_ip:9696' when auth_strategy is 'noauth', and to discovery from the Keystone catalog when auth_strategy is 'keystone'.
url_timeout = 30 | (Integer) Timeout value for connecting to neutron, in seconds.
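When using the "neutron" network interface, the two network UUIDs above must both be set. A sketch, with placeholder values in the same style as the sample files:

```ini
[neutron]
# Placeholders; replace with the UUIDs of your Neutron networks.
provisioning_network_uuid = PROVISIONING_NET_UUID
cleaning_network_uuid = CLEANING_NET_UUID
# Optional: give slow DHCP agents extra time (seconds) to finish port setup.
port_setup_delay = 15
```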
Configuration option = Default value | Description |
---|---|
[pxe] | |
default_ephemeral_format = ext4 | (String) Default file system format for the ephemeral partition, if one is created.
image_cache_size = 20480 | (Integer) Maximum size (in MiB) of the cache for master images, including those in use.
image_cache_ttl = 10080 | (Integer) Maximum TTL (in minutes) for old master images in the cache.
images_path = /var/lib/ironic/images/ | (String) On the ironic-conductor node, the directory where images are stored on disk.
instance_master_path = /var/lib/ironic/master_images | (String) On the ironic-conductor node, the directory where master instance images are stored on disk. Setting to <None> disables image caching.
ip_version = 4 | (String) The IP version that will be used for PXE booting. Defaults to 4. EXPERIMENTAL
ipxe_boot_script = $pybasedir/drivers/modules/boot.ipxe | (String) On the ironic-conductor node, the path to the main iPXE script file.
ipxe_enabled = False | (Boolean) Enable iPXE boot.
ipxe_timeout = 0 | (Integer) Timeout value (in seconds) for downloading an image via iPXE. Defaults to 0 (no timeout).
ipxe_use_swift = False | (Boolean) Download deploy images directly from swift using temporary URLs. If set to false (the default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when the 'ipxe_enabled' option is set to true.
pxe_append_params = nofb nomodeset vga=normal | (String) Additional append parameters for baremetal PXE boot.
pxe_bootfile_name = pxelinux.0 | (String) Bootfile DHCP parameter.
pxe_config_template = $pybasedir/drivers/modules/pxe_config.template | (String) On the ironic-conductor node, the template file for PXE configuration.
tftp_master_path = /tftpboot/master_images | (String) On the ironic-conductor node, the directory where master TFTP images are stored on disk. Setting to <None> disables image caching.
tftp_root = /tftpboot | (String) The ironic-conductor node's TFTP root path. The ironic-conductor must have read/write access to this path.
tftp_server = $my_ip | (String) IP address of the ironic-conductor node's TFTP server.
uefi_pxe_bootfile_name = bootx64.efi | (String) Bootfile DHCP parameter for UEFI boot mode.
uefi_pxe_config_template = $pybasedir/drivers/modules/pxe_grub_config.template | (String) On the ironic-conductor node, the template file for PXE configuration for the UEFI boot loader.
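Enabling iPXE combines the `[pxe]` options above with the conductor HTTP server options from the `[deploy]` section. A sketch, with an illustrative URL:

```ini
[pxe]
ipxe_enabled = True
# Fail iPXE image downloads after 2 minutes instead of waiting forever.
ipxe_timeout = 120
tftp_root = /tftpboot
tftp_server = $my_ip

[deploy]
http_root = /httpboot
# Illustrative URL of the HTTP server that serves http_root.
http_url = http://192.0.2.10:8080
```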
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 | (Integer) Time in ms to wait before the transaction is killed.
host = 127.0.0.1 | (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url.
password = | (String) DEPRECATED: Password for the Redis server (optional). Replaced by [DEFAULT]/transport_url.
port = 6379 | (Port number) DEPRECATED: Use this port to connect to the redis host. Replaced by [DEFAULT]/transport_url.
sentinel_group_name = oslo-messaging-zeromq | (String) Redis replica set name.
sentinel_hosts = | (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g. [host:port, host1:port ... ]. Replaced by [DEFAULT]/transport_url.
socket_timeout = 10000 | (Integer) Timeout in ms on blocking socket operations.
wait_timeout = 2000 | (Integer) Time in ms to wait between connection attempts.
Configuration option = Default value | Description |
---|---|
[seamicro] | |
action_timeout = 10 | (Integer) Seconds to wait for a power action to be completed.
max_retry = 3 | (Integer) Maximum number of retries for SeaMicro operations.
Configuration option = Default value | Description |
---|---|
[service_catalog] | |
auth_section = None | (Unknown) Config section from which to load plugin-specific options.
auth_type = None | (Unknown) Authentication type to load.
cafile = None | (String) PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile = None | (String) PEM encoded client certificate cert file.
insecure = False | (Boolean) If true, skip verification of server certificates on HTTPS connections.
keyfile = None | (String) PEM encoded client certificate key file.
timeout = None | (Integer) Timeout value for HTTP requests.
Configuration option = Default value | Description |
---|---|
[snmp] | |
power_timeout = 10 | (Integer) Seconds to wait for a power action to be completed.
reboot_delay = 0 | (Integer) Time (in seconds) to sleep between powering off and powering on again when rebooting.
Configuration option = Default value | Description |
---|---|
[ssh] | |
get_vm_name_attempts = 3 | (Integer) Number of attempts to get the VM name used by the host that corresponds to a node's MAC address.
get_vm_name_retry_interval = 3 | (Integer) Number of seconds to wait between attempts to get the VM name used by the host that corresponds to a node's MAC address.
libvirt_uri = qemu:///system | (String) libvirt URI.
Configuration option = Default value | Description |
---|---|
[swift] | |
auth_section = None | (Unknown) Config section from which to load plugin-specific options.
auth_type = None | (Unknown) Authentication type to load.
cafile = None | (String) PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile = None | (String) PEM encoded client certificate cert file.
insecure = False | (Boolean) If true, skip verification of server certificates on HTTPS connections.
keyfile = None | (String) PEM encoded client certificate key file.
swift_max_retries = 2 | (Integer) Maximum number of times to retry a Swift request before failing.
timeout = None | (Integer) Timeout value for HTTP requests.
Configuration option = Default value | Description |
---|---|
[virtualbox] | |
port = 18083 | (Port number) Port on which the VirtualBox web service is listening.
Option = default value | (Type) Help string |
---|---|
[DEFAULT] default_network_interface = None | (StrOpt) Default network interface to be used for nodes that do not have the network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.network" entrypoint.
[DEFAULT] enabled_network_interfaces = flat, noop | (ListOpt) Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the conductor service from starting. The option default is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.network" entrypoint. This value must be the same on all ironic-conductor and ironic-api services, because it is used by the ironic-api service to validate a new or updated node's network_interface value.
[DEFAULT] notification_level = None | (StrOpt) Specifies the minimum level for which to send notifications. If not set (the default), no notifications will be sent.
[agent] deploy_logs_collect = on_failure | (StrOpt) Whether Ironic should collect deployment logs on deployment failure (on_failure), always, or never.
[agent] deploy_logs_local_path = /var/log/ironic/deploy | (StrOpt) The path to the directory where the logs should be stored, used when deploy_logs_storage_backend is set to "local".
[agent] deploy_logs_storage_backend = local | (StrOpt) The name of the storage backend where the logs will be stored.
[agent] deploy_logs_swift_container = ironic_deploy_logs_container | (StrOpt) The name of the Swift container to store the logs in, used when deploy_logs_storage_backend is set to "swift".
[agent] deploy_logs_swift_days_to_expire = 30 | (IntOpt) Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when deploy_logs_storage_backend is set to "swift".
[api] ramdisk_heartbeat_timeout = 300 | (IntOpt) Maximum interval (in seconds) for agent heartbeats.
[api] restrict_lookup = True | (BoolOpt) Whether to restrict the lookup API to only nodes in certain states.
[audit] audit_map_file = /etc/ironic/ironic_api_audit_map.conf | (StrOpt) Path to the audit map file for the ironic-api service. Used only when API audit is enabled.
[audit] enabled = False | (BoolOpt) Enable auditing of API requests (for the ironic-api service).
[audit] ignore_req_list = None | (StrOpt) Comma-separated list of Ironic REST API HTTP methods to be ignored during audit. For example, auditing will not be done on any GET or POST requests if this is set to "GET,POST". It is used only when API audit is enabled.
[audit] namespace = openstack | (StrOpt) Namespace prefix for the generated ID.
[audit_middleware_notifications] driver = None | (StrOpt) The driver to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop. If not specified, the value from the oslo_messaging_notifications conf section is used.
[audit_middleware_notifications] topics = None | (ListOpt) List of AMQP topics used for OpenStack notifications. If not specified, the value from the oslo_messaging_notifications conf section is used.
[audit_middleware_notifications] transport_url = None | (StrOpt) A URL representing the messaging driver to use for notifications. If not specified, we fall back to the same configuration used for RPC.
[deploy] continue_if_disk_secure_erase_fails = False | (BoolOpt) Defines what to do if an ATA secure erase operation fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in the clean failed state. If True, shred will be invoked and cleaning will continue.
[deploy] erase_devices_metadata_priority = None | (IntOpt) Priority of the in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, the priority set in the ramdisk is used (defaults to 99 for the GenericHardwareManager). If set to 0, the step will not run during cleaning.
[deploy] power_off_after_deploy_failure = True | (BoolOpt) Whether to power off a node after deploy failure. Defaults to True.
[deploy] shred_final_overwrite_with_zeros = True | (BoolOpt) Whether to write zeros to a node's block devices after writing random data. Zeros are written even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True.
[deploy] shred_random_overwrite_iterations = 1 | (IntOpt) During shred, overwrite all block devices N times with random data. This is only used if a device could not be ATA Secure Erased. Defaults to 1.
[drac] query_raid_config_job_status_interval = 120 | (IntOpt) Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not.
[glance] auth_section = None | (Opt) Config section from which to load plugin-specific options.
[glance] auth_type = None | (Opt) Authentication type to load.
[glance] cafile = None | (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections.
[glance] certfile = None | (StrOpt) PEM encoded client certificate cert file.
[glance] insecure = False | (BoolOpt) If true, skip verification of server certificates on HTTPS connections.
[glance] keyfile = None | (StrOpt) PEM encoded client certificate key file.
[glance] timeout = None | (IntOpt) Timeout value for HTTP requests.
[ilo] ca_file = None | (StrOpt) CA certificate file to validate iLO.
[ilo] default_boot_mode = auto | (StrOpt) Default boot mode to be used in provisioning when the "boot_mode" capability is not provided in the "properties/capabilities" of the node. The default is "auto" for backward compatibility. When "auto" is specified, the default boot mode will be selected based on the boot mode settings on the system.
[inspector] auth_section = None | (Opt) Config section from which to load plugin-specific options.
[inspector] auth_type = None | (Opt) Authentication type to load.
[inspector] cafile = None | (StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections.
[inspector] certfile = None | (StrOpt) PEM encoded client certificate cert file.
[inspector] insecure = False | (BoolOpt) If true, skip verification of server certificates on HTTPS connections.
[inspector] keyfile = None | (StrOpt) PEM encoded client certificate key file.
[inspector] timeout = None | (IntOpt) Timeout value for HTTP requests.
[iscsi] portal_port = 3260 | (PortOpt) The port number on which the iSCSI portal listens for incoming connections.
[metrics] agent_backend = noop |
(StrOpt) Backend for the agent ramdisk to use for metrics. Default possible backends are “noop” and “statsd”. |
[metrics] agent_global_prefix = None |
(StrOpt) Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. |
[metrics] agent_prepend_host = False |
(BoolOpt) Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. |
[metrics] agent_prepend_host_reverse = True |
(BoolOpt) Split the prepended host value by ”.” and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names). |
[metrics] agent_prepend_uuid = False |
(BoolOpt) Prepend the node’s Ironic uuid to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. |
[metrics] backend = noop |
(StrOpt) Backend to use for the metrics system. |
[metrics] global_prefix = None |
(StrOpt) Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name. |
[metrics] prepend_host = False |
(BoolOpt) Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name. |
[metrics] prepend_host_reverse = True |
(BoolOpt) Split the prepended host value by "." and reverse it (to better match the reverse hierarchical form of domain names). |
[metrics_statsd] agent_statsd_host = localhost |
(StrOpt) Host for the agent ramdisk to use with the statsd backend. This must be accessible from networks the agent is booted on. |
[metrics_statsd] agent_statsd_port = 8125 |
(PortOpt) Port for the agent ramdisk to use with the statsd backend. |
[metrics_statsd] statsd_host = localhost |
(StrOpt) Host for use with the statsd backend. |
[metrics_statsd] statsd_port = 8125 |
(PortOpt) Port to use with the statsd backend. |
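For example, the metrics and statsd options above could be combined in ironic.conf to send both conductor and agent metrics to a statsd daemon. This is an illustrative sketch: the collector address 10.0.0.5 and the ironic prefix are assumed values, not defaults.

```ini
[metrics]
# Send conductor metrics to statsd instead of the default no-op backend.
backend = statsd
agent_backend = statsd
# Assumed prefix; metric names take the form
# [global_prefix.][host_name.]prefix.metric_name.
global_prefix = ironic
prepend_host = True

[metrics_statsd]
# Assumed statsd collector address; the agent_* values must be
# reachable from the networks the agent ramdisk is booted on.
statsd_host = 10.0.0.5
statsd_port = 8125
agent_statsd_host = 10.0.0.5
agent_statsd_port = 8125
```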
[neutron] auth_section = None |
(Opt) Config Section from which to load plugin specific options |
[neutron] auth_type = None |
(Opt) Authentication type to load |
[neutron] cafile = None |
(StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections. |
[neutron] certfile = None |
(StrOpt) PEM encoded client certificate cert file |
[neutron] insecure = False |
(BoolOpt) Verify HTTPS connections. |
[neutron] keyfile = None |
(StrOpt) PEM encoded client certificate key file |
[neutron] port_setup_delay = 0 |
(IntOpt) Delay value to wait for Neutron agents to set up sufficient DHCP configuration for port. |
[neutron] provisioning_network_uuid = None |
(StrOpt) Neutron network UUID for the ramdisk to be booted into for provisioning nodes. Required for “neutron” network interface. |
[neutron] timeout = None |
(IntOpt) Timeout value for http requests |
[oneview] enable_periodic_tasks = True |
(BoolOpt) Whether to enable periodic tasks for the OneView driver that track when OneView hardware resources are taken and released by Ironic or OneView users, and proactively manage nodes in the clean failed state according to the Dynamic Allocation model of hardware resource allocation in OneView. |
[oneview] periodic_check_interval = 300 |
(IntOpt) Period (in seconds) for periodic tasks to be executed when enable_periodic_tasks=True. |
[pxe] ipxe_use_swift = False |
(BoolOpt) Download deploy images directly from swift using temporary URLs. If set to false (default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when ‘ipxe_enabled’ option is set to true. |
[service_catalog] auth_section = None |
(Opt) Config Section from which to load plugin specific options |
[service_catalog] auth_type = None |
(Opt) Authentication type to load |
[service_catalog] cafile = None |
(StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections. |
[service_catalog] certfile = None |
(StrOpt) PEM encoded client certificate cert file |
[service_catalog] insecure = False |
(BoolOpt) Verify HTTPS connections. |
[service_catalog] keyfile = None |
(StrOpt) PEM encoded client certificate key file |
[service_catalog] timeout = None |
(IntOpt) Timeout value for http requests |
[swift] auth_section = None |
(Opt) Config Section from which to load plugin specific options |
[swift] auth_type = None |
(Opt) Authentication type to load |
[swift] cafile = None |
(StrOpt) PEM encoded Certificate Authority to use when verifying HTTPS connections. |
[swift] certfile = None |
(StrOpt) PEM encoded client certificate cert file |
[swift] insecure = False |
(BoolOpt) Verify HTTPS connections. |
[swift] keyfile = None |
(StrOpt) PEM encoded client certificate key file |
[swift] timeout = None |
(IntOpt) Timeout value for http requests |
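The service sections above ([glance], [inspector], [neutron], [service_catalog], [swift]) share the same authentication plugin options. As a hedged illustration, a section can point at a dedicated credentials block via auth_section; the section name, endpoint URL, and credentials below are placeholders you would replace with your own:

```ini
[swift]
# Load authentication options from the named section
# instead of repeating them here.
auth_section = swift_auth
cafile = /etc/ssl/certs/ca-bundle.crt
timeout = 60

[swift_auth]
# 'password' is a standard keystoneauth plugin type; the
# remaining values are illustrative placeholders.
auth_type = password
auth_url = http://controller:5000/v3
username = ironic
password = IRONIC_PASS
project_name = service
```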
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] my_ip | 10.0.0.1 | 127.0.0.1 |
[neutron] url | http://$my_ip:9696 | None |
[pxe] uefi_pxe_bootfile_name | elilo.efi | bootx64.efi |
[pxe] uefi_pxe_config_template | $pybasedir/drivers/modules/elilo_efi_pxe_config.template | $pybasedir/drivers/modules/pxe_grub_config.template |
Deprecated option | New Option |
---|---|
[DEFAULT] use_syslog | None |
[agent] heartbeat_timeout | [api] ramdisk_heartbeat_timeout |
[deploy] erase_devices_iterations | [deploy] shred_random_overwrite_iterations |
[keystone_authtoken] cafile | [glance] cafile |
[keystone_authtoken] cafile | [neutron] cafile |
[keystone_authtoken] cafile | [service_catalog] cafile |
[keystone_authtoken] cafile | [swift] cafile |
[keystone_authtoken] cafile | [inspector] cafile |
[keystone_authtoken] certfile | [service_catalog] certfile |
[keystone_authtoken] certfile | [neutron] certfile |
[keystone_authtoken] certfile | [glance] certfile |
[keystone_authtoken] certfile | [inspector] certfile |
[keystone_authtoken] certfile | [swift] certfile |
[keystone_authtoken] insecure | [glance] insecure |
[keystone_authtoken] insecure | [inspector] insecure |
[keystone_authtoken] insecure | [swift] insecure |
[keystone_authtoken] insecure | [service_catalog] insecure |
[keystone_authtoken] insecure | [neutron] insecure |
[keystone_authtoken] keyfile | [inspector] keyfile |
[keystone_authtoken] keyfile | [swift] keyfile |
[keystone_authtoken] keyfile | [neutron] keyfile |
[keystone_authtoken] keyfile | [glance] keyfile |
[keystone_authtoken] keyfile | [service_catalog] keyfile |
The Bare Metal service is capable of managing and provisioning physical
machines. The configuration file of this module is
/etc/ironic/ironic.conf
.
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
The Block Storage service provides persistent block storage resources that Compute instances can consume. This includes secondary attached storage similar to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a Block Storage device for Compute to use as a bootable persistent instance.
The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage service does not provide a shared storage solution like NFS. With the Block Storage service, you can attach a device to only one instance.
The Block Storage service provides:
- cinder-api - a WSGI app that authenticates and routes requests throughout the Block Storage service. It supports the OpenStack APIs only, although there is a translation that can be done through Compute's EC2 interface, which calls in to the Block Storage client.
- cinder-scheduler - schedules and routes requests to the appropriate volume service. Depending upon your configuration, this may be simple round-robin scheduling to the running volume services, or it can be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the default and enables filters on things like Capacity, Availability Zone, Volume Types, and Capabilities as well as custom filters.
- cinder-volume - manages Block Storage devices, specifically the back-end devices themselves.
- cinder-backup - provides a means to back up a Block Storage volume to OpenStack Object Storage (swift).

The Block Storage service contains the following components:
Back-end Storage Devices - the Block Storage service requires some form of back-end storage that the service is built on. The default implementation is to use LVM on a local volume group named "cinder-volumes." In addition to the base driver implementation, the Block Storage service also provides the means to add support for other storage devices, such as external RAID arrays or other storage appliances. These back-end storage devices may have custom block sizes when using KVM or QEMU as the hypervisor.
Users and Tenants (Projects) - the Block Storage service can be
used by many different cloud computing consumers or customers
(tenants on a shared system), using role-based access assignments.
Roles control the actions that a user is allowed to perform. In the
default configuration, most actions do not require a particular role,
but this can be configured by the system administrator in the
appropriate policy.json
file that maintains the rules. A user’s
access to particular volumes is limited by tenant, but the user name
and password are assigned per user. Key pairs granting access to a
volume are enabled per user, but quotas to control resource
consumption across available hardware resources are per tenant.
For tenants, quota controls are available to limit:
You can revise the default quota values with the Block Storage CLI, so the limits placed by quotas are editable by admin users.
Volumes, Snapshots, and Backups - the basic resources offered by the Block Storage service are volumes and snapshots which are derived from volumes and volume backups:
--force True
) or in an available state.
The snapshot can then be used to create a new volume through
create from snapshot.

If you use KVM or QEMU as your hypervisor, you can configure the Compute service to use Ceph RADOS block devices (RBD) for volumes.
Ceph is a massively scalable, open source, distributed storage system. It is comprised of an object store, block store, and a POSIX-compliant distributed file system. The platform can auto-scale to the exabyte level and beyond. It runs on commodity hardware, is self-healing and self-managing, and has no single point of failure. Ceph is in the Linux kernel and is integrated with the OpenStack cloud operating system. Due to its open-source nature, you can install and use this portable storage platform in public or private clouds.
Ceph is based on Reliable Autonomic Distributed Object Store (RADOS). RADOS distributes objects across the storage cluster and replicates objects for fault tolerance. RADOS contains the following major components:
ceph-mon
daemons on separate servers.

Ceph developers recommend XFS for production deployments, and Btrfs for testing, development, and any non-critical deployments. Btrfs has the correct feature set and roadmap to serve Ceph in the long term, but XFS and ext4 provide the necessary stability for today's deployments.
Note
If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).
For more information about usable file systems, see ceph.com/ceph-storage/file-system/.
To store and access your data, you can use the following storage systems:
Ceph exposes RADOS; you can access it through the following interfaces:
The following table contains the configuration options supported by the Ceph RADOS Block Device driver.
Note
The volume_tmp_dir
option has been deprecated and replaced by
image_conversion_dir
.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
rados_connect_timeout = -1 |
(Integer) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used. |
rados_connection_interval = 5 |
(Integer) Interval value (in seconds) between connection retries to ceph cluster. |
rados_connection_retries = 3 |
(Integer) Number of retries if connection to ceph cluster failed. |
rbd_ceph_conf = |
(String) Path to the ceph configuration file |
rbd_cluster_name = ceph |
(String) The name of ceph cluster |
rbd_flatten_volume_from_snapshot = False |
(Boolean) Flatten volumes created from snapshots to remove dependency from volume to snapshot |
rbd_max_clone_depth = 5 |
(Integer) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning. |
rbd_pool = rbd |
(String) The RADOS pool where rbd volumes are stored |
rbd_secret_uuid = None |
(String) The libvirt uuid of the secret for the rbd_user volumes |
rbd_store_chunk_size = 4 |
(Integer) Volumes will be chunked into objects of this size (in megabytes). |
rbd_user = None |
(String) The RADOS client name for accessing rbd volumes - only set when using cephx authentication |
volume_tmp_dir = None |
(String) Directory where temporary image files are stored when the volume driver does not write them directly to the volume. Warning: this option is now deprecated, please use image_conversion_dir instead. |
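Putting these options together, a minimal RBD back-end stanza in cinder.conf might look like the following sketch. The back-end name, pool name, client name, and secret UUID are illustrative values for a typical Ceph deployment, not defaults:

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
# Assumed pool created for cinder volumes (default is 'rbd').
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
# Only needed with cephx authentication; the UUID refers to the
# libvirt secret holding the key for the 'cinder' client.
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```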
GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes and beyond in size. More information can be found on Gluster’s homepage.
This driver enables the use of GlusterFS in a similar fashion as NFS. It supports basic volume operations, including snapshot and clone.
To use Block Storage with GlusterFS, first set the volume_driver
in
the cinder.conf
file:
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
The following table contains the configuration options supported by the GlusterFS driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
glusterfs_mount_point_base = $state_path/mnt |
(String) Base dir containing mount points for gluster shares. |
glusterfs_shares_config = /etc/cinder/glusterfs_shares |
(String) File with the list of available gluster shares |
nas_volume_prov_type = thin |
(String) Provisioning type that will be used when creating volumes. |
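As a sketch, a GlusterFS configuration combines the driver setting with a shares file. The share address below is an assumed example:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = $state_path/mnt
nas_volume_prov_type = thin
```

The file named by glusterfs_shares_config would then list one share per line, for example 192.168.1.200:/gv0.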
The default volume back end uses local volumes managed by LVM.
This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.
Set the following in your cinder.conf
configuration file, and use
the following options to configure for iSCSI transport:
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi
Use the following options to configure for the iSER transport:
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iser
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
lvm_conf_file = /etc/cinder/lvm.conf |
(String) LVM conf file to use for the LVM driver in Cinder; this setting is ignored if the specified file does not exist (You can also specify ‘None’ to not use a conf file even if one exists). |
lvm_max_over_subscription_ratio = 1.0 |
(Floating point) max_over_subscription_ratio setting for the LVM driver. If set, this takes precedence over the general max_over_subscription_ratio option. If None, the general option is used. |
lvm_mirrors = 0 |
(Integer) If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors + 2 PVs with available space |
lvm_suppress_fd_warnings = False |
(Boolean) Suppress leaked file descriptor warnings in LVM commands. |
lvm_type = default |
(String) Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to thin if thin is supported. |
volume_group = cinder-volumes |
(String) Name for the VG that will contain exported volumes |
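For example, a complete LVM back-end stanza using the iSCSI transport might look like this; the back-end section name is illustrative:

```ini
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm
iscsi_protocol = iscsi
# Volume group created ahead of time for cinder to manage.
volume_group = cinder-volumes
# 'auto' falls back to 'default' if thin provisioning is unsupported.
lvm_type = auto
```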
Caution
When extending an existing volume which has a linked snapshot, the related
logical volume is deactivated. This logical volume is automatically
reactivated unless auto_activation_volume_list
is defined in LVM
configuration file lvm.conf
. See the lvm.conf
file for more
information.
If auto-activated volumes are restricted, then add the cinder volume group to this list:
auto_activation_volume_list = [ "existingVG", "cinder-volumes" ]
This note does not apply for thinly provisioned volumes because they do not need to be deactivated.
The Network File System (NFS) is a distributed file system protocol
originally developed by Sun Microsystems in 1984. An NFS server
exports
one or more of its file systems, known as shares
.
An NFS client can mount these exported shares on its own file system.
You can perform file actions on this mounted remote file system as
if the file system were local.
The NFS driver, and other drivers based on it, work quite differently than a traditional block storage driver.
The NFS driver does not actually allow an instance to access a storage
device at the block level. Instead, files are created on an NFS share
and mapped to instances, which emulates a block device.
This works in a similar way to QEMU, which stores instances in the
/var/lib/nova/instances
directory.
Creating an NFS server is outside the scope of this document.
This example assumes access to the following NFS server and mount point:
This example demonstrates the usage of this driver with one NFS server.
Set the nas_host
option to the IP address or host name of your NFS
server, and the nas_share_path
option to the NFS export path:
nas_host = 192.168.1.200
nas_share_path = /storage
Note
You can use multiple NFS servers with the cinder multi back-end feature.
Configure the enabled_backends option with
multiple values, and use the nas_host
and nas_share_path
options
for each back end as described above.
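A minimal multi back-end sketch, assuming two NFS servers exporting /storage (addresses and section names are illustrative):

```ini
[DEFAULT]
enabled_backends = nfs1 nfs2

[nfs1]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfs1
nas_host = 192.168.1.200
nas_share_path = /storage

[nfs2]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
volume_backend_name = nfs2
nas_host = 192.168.1.201
nas_share_path = /storage
```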
The example below demonstrates another way to use this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.
This example assumes access to the following NFS servers and mount points:
Add your list of NFS servers to the file you specified with the
nfs_shares_config
option. For example, if the value of this option
was set to /etc/cinder/shares.txt
file, then:
# cat /etc/cinder/shares.txt
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
Comments are allowed in this file. They begin with a #
.
Configure the nfs_mount_point_base
option. This is a directory
where cinder-volume
mounts all NFS shares stored in the shares.txt
file. For this example, /var/lib/cinder/nfs
is used. You can,
of course, use the default value of $state_path/mnt
.
Start the cinder-volume
service. /var/lib/cinder/nfs
should
now contain a directory for each NFS share specified in the shares.txt
file. The name of each directory is a hashed name:
# ls /var/lib/cinder/nfs/
...
46c5db75dc3a3a50a10bfd1a456a9f3f
...
You can now create volumes as you normally would:
$ nova volume-create --display-name myvol 5
# ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
volume-a8862558-e6d6-4648-b5df-bb84f31c8935
This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.
cinder-volume
manages the mounting of the NFS shares as well as
volume creation on the shares. Keep this in mind when planning your
OpenStack architecture. If you have one master NFS server, it might
make sense to only have one cinder-volume
service to handle all
requests to that NFS server. However, if that single server is unable
to handle all requests, more than one cinder-volume
service is
needed as well as potentially more than one NFS server.

Note

Regular I/O flushing and syncing still applies.
Sheepdog is an open-source distributed storage system that provides a virtual storage pool utilizing internal disk of commodity servers.
Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshotting, cloning, rollback, and thin provisioning.
More information can be found on Sheepdog Project.
This driver enables the use of Sheepdog through QEMU/KVM.
Sheepdog driver supports these operations:
Set the following option in the cinder.conf
file:
volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
The following table contains the configuration options supported by the Sheepdog driver:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
sheepdog_store_address = 127.0.0.1 |
(String) IP address of sheep daemon. |
sheepdog_store_port = 7000 |
(Port number) Port of sheep daemon. |
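Combining the driver setting with the options above, a Sheepdog back-end stanza might look like this sketch (the section name is illustrative; the address and port shown are the defaults):

```ini
[DEFAULT]
enabled_backends = sheepdog

[sheepdog]
volume_driver = cinder.volume.drivers.sheepdog.SheepdogDriver
volume_backend_name = sheepdog
# Address and port of the local sheep daemon (defaults shown).
sheepdog_store_address = 127.0.0.1
sheepdog_store_port = 7000
```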
There is a volume back-end for Samba filesystems. Set the following in
your cinder.conf
file, and use the following options to configure it.
Note
The SambaFS driver requires qemu-img
version 1.7 or higher on Linux
nodes, and qemu-img
version 1.6 or higher on Windows nodes.
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
smbfs_allocation_info_file_path = $state_path/allocation_data |
(String) The path of the automatically generated file containing information about volume disk space allocation. |
smbfs_default_volume_format = qcow2 |
(String) Default format that will be used when creating volumes if no volume format is specified. |
smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775 |
(String) Mount options passed to the smbfs client. See mount.cifs man page for details. |
smbfs_mount_point_base = $state_path/mnt |
(String) Base dir containing mount points for smbfs shares. |
smbfs_oversub_ratio = 1.0 |
(Floating point) This will compare the allocated to available space on the volume destination. If the ratio exceeds this number, the destination will no longer be valid. |
smbfs_shares_config = /etc/cinder/smbfs_shares |
(String) File with the list of available smbfs shares. |
smbfs_sparsed_volumes = True |
(Boolean) Create volumes as sparsed files which take no space rather than regular files when using raw format; in the latter case, volume creation takes a lot of time. |
smbfs_used_ratio = 0.95 |
(Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. |
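As an illustration, an SMBFS configuration combines the driver setting with a shares file, similar to the GlusterFS and NFS drivers:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.smbfs.SmbfsDriver
smbfs_shares_config = /etc/cinder/smbfs_shares
smbfs_mount_point_base = $state_path/mnt
smbfs_default_volume_format = qcow2
# Stop allocating new volumes once 95% of a share is used.
smbfs_used_ratio = 0.95
```

The file named by smbfs_shares_config would list one share per line, for example //192.168.1.210/cinder (an assumed address).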
Blockbridge is software that transforms commodity infrastructure into secure multi-tenant storage that operates as a programmable service. It provides automatic encryption, secure deletion, quality of service (QoS), replication, and programmable security capabilities on your choice of hardware. Blockbridge uses micro-segmentation to provide isolation that allows you to concurrently operate OpenStack, Docker, and bare-metal workflows on shared resources. When used with OpenStack, isolated management domains are dynamically created on a per-project basis. All volumes and clones, within and between projects, are automatically cryptographically isolated and implement secure deletion.
Blockbridge architecture
The Blockbridge driver is packaged with the core distribution of OpenStack. Operationally, it executes in the context of the Block Storage service. The driver communicates with an OpenStack-specific API provided by the Blockbridge EPS platform. Blockbridge optionally communicates with Identity, Compute, and Block Storage services.
Blockbridge is API driven software-defined storage. The system implements a native HTTP API that is tailored to the specific needs of OpenStack. Each Block Storage service operation maps to a single back-end API request that provides ACID semantics. The API is specifically designed to reduce, if not eliminate, the possibility of inconsistencies between the Block Storage service and external storage infrastructure in the event of hardware, software or data center failure.
OpenStack users may utilize Blockbridge interfaces to manage replication, auditing, statistics, and performance information on a per-project and per-volume basis. In addition, they can manage low-level data security functions including verification of data authenticity and encryption key delegation. Native integration with the Identity Service allows tenants to use a single set of credentials. Integration with Block storage and Compute services provides dynamic metadata mapping when using Blockbridge management APIs and tools.
Blockbridge organizes resources using descriptive identifiers called attributes. Attributes are assigned by administrators of the infrastructure. They are used to describe the characteristics of storage in an application-friendly way. Applications construct queries that describe storage provisioning constraints and the Blockbridge storage stack assembles the resources as described.
Any given instance of a Blockbridge volume driver specifies a query
for resources. For example, a query could specify
'+ssd +10.0.0.0 +6nines -production iops.reserve=1000
capacity.reserve=30%'
. This query is satisfied by selecting SSD
resources, accessible on the 10.0.0.0 network, with high resiliency, for
non-production workloads, with guaranteed IOPS of 1000 and a storage
reservation for 30% of the volume capacity specified at create time.
Queries and parameters are completely administrator defined: they
reflect the layout, resource, and organizational goals of a specific
deployment.
Blockbridge provides iSCSI access to storage. A unique iSCSI data fabric is programmatically assembled when a volume is attached to an instance. A fabric is disassembled when a volume is detached from an instance. Each volume is an isolated SCSI device that supports persistent reservations.
Whenever possible, avoid using password-based authentication. Even if you have created a role-restricted administrative user via Blockbridge, token-based authentication is preferred. You can generate persistent authentication tokens using the Blockbridge command-line tool as follows:
$ bb -H bb-mn authorization create --notes "OpenStack" --restrict none
Authenticating to https://bb-mn/api
Enter user or access token: system
Password for system:
Authenticated; token expires in 3599 seconds.
== Authorization: ATH4762894C40626410
notes OpenStack
serial ATH4762894C40626410
account system (ACT0762594C40626440)
user system (USR1B62094C40626440)
enabled yes
created at 2015-10-24 22:08:48 +0000
access type online
token suffix xaKUy3gw
restrict none
== Access Token
access token 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
*** Remember to record your access token!
Before configuring and enabling the Blockbridge volume driver, register
an OpenStack volume type and associate it with a
volume_backend_name
. In this example, a volume type, ‘Production’,
is associated with the volume_backend_name
‘blockbridge_prod’:
$ cinder type-create Production
$ cinder type-key Production volume_backend_name=blockbridge_prod
Configure the Blockbridge volume driver in /etc/cinder/cinder.conf
.
Your volume_backend_name
must match the value specified in the
cinder type-key command in the previous step.
volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_prod
Configure the API endpoint and authentication. The following example uses an authentication token. You must create your own as described in Create an authentication token.
blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
By default, a single pool is configured (implied) with a default
resource query of '+openstack'
. Within Blockbridge, datastore
resources that advertise the ‘openstack’ attribute will be selected to
fulfill OpenStack provisioning requests. If you prefer a more specific
query, define a custom pool configuration.
blockbridge_pools = Production: +production +qos iops.reserve=5000
Pools support storage systems that offer multiple classes of service. You may wish to configure multiple pools to implement more sophisticated scheduling capabilities.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
blockbridge_api_host = None |
(String) IP address/hostname of Blockbridge API. |
blockbridge_api_port = None |
(Integer) Override HTTPS port to connect to Blockbridge API server. |
blockbridge_auth_password = None |
(String) Blockbridge API password (for auth scheme ‘password’) |
blockbridge_auth_scheme = token |
(String) Blockbridge API authentication scheme (token or password) |
blockbridge_auth_token = None |
(String) Blockbridge API token (for auth scheme ‘token’) |
blockbridge_auth_user = None |
(String) Blockbridge API user (for auth scheme ‘password’) |
blockbridge_default_pool = None |
(String) Default pool name if unspecified. |
blockbridge_pools = {'OpenStack': '+openstack'} |
(Dict) Defines the set of exposed pools and their associated backend query strings |
cinder.conf
example file
[DEFAULT]
enabled_backends = bb_devel bb_prod

[bb_prod]
volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_prod
blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
blockbridge_pools = Production: +production +qos iops.reserve=5000

[bb_devel]
volume_driver = cinder.volume.drivers.blockbridge.BlockbridgeISCSIDriver
volume_backend_name = blockbridge_devel
blockbridge_api_host = [ip or dns of management cluster]
blockbridge_auth_token = 1/elvMWilMvcLAajl...3ms3U1u2KzfaMw6W8xaKUy3gw
blockbridge_pools = Development: +development
Volume types are exposed to tenants, pools are not. To offer
multiple classes of storage to OpenStack tenants, you should define
multiple volume types. Simply repeat the process above for each desired
type. Be sure to specify a unique volume_backend_name
and pool
configuration for each type. The
cinder.conf example included with
this documentation illustrates configuration of multiple types.
Blockbridge is freely available for testing purposes and deploys in seconds as a Docker container. This is the same container used to run continuous integration for OpenStack. For more information visit www.blockbridge.io.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cb_account_name = None |
(String) CloudByte storage specific account name. This maps to a project name in OpenStack. |
cb_add_qosgroup = {'latency': '15', 'iops': '10', 'graceallowed': 'false', 'iopscontrol': 'true', 'memlimit': '0', 'throughput': '0', 'tpcontrol': 'false', 'networkspeed': '0'} |
(Dict) These values will be used for CloudByte storage’s addQos API call. |
cb_apikey = None |
(String) Driver will use this API key to authenticate against the CloudByte storage’s management interface. |
cb_auth_group = None |
(String) This corresponds to the discovery authentication group in CloudByte storage. Chap users are added to this group. Driver uses the first user found for this group. Default value is None. |
cb_confirm_volume_create_retries = 3 |
(Integer) Will confirm a successful volume creation in CloudByte storage by making this many number of attempts. |
cb_confirm_volume_create_retry_interval = 5 |
(Integer) A retry value in seconds. Will be used by the driver to check if volume creation was successful in CloudByte storage. |
cb_confirm_volume_delete_retries = 3 |
(Integer) Will confirm a successful volume deletion in CloudByte storage by making this many number of attempts. |
cb_confirm_volume_delete_retry_interval = 5 |
(Integer) A retry value in seconds. Will be used by the driver to check if volume deletion was successful in CloudByte storage. |
cb_create_volume = {'compression': 'off', 'deduplication': 'off', 'blocklength': '512B', 'sync': 'always', 'protocoltype': 'ISCSI', 'recordsize': '16k'} |
(Dict) These values will be used for CloudByte storage’s createVolume API call. |
cb_tsm_name = None |
(String) This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte storage. A volume will be created in this TSM. |
cb_update_file_system = compression, sync, noofcopies, readonly |
(List) These values will be used for CloudByte storage’s updateFileSystem API call. |
cb_update_qos_group = iops, latency, graceallowed |
(List) These values will be used for CloudByte storage’s updateQosGroup API call. |
The Coho DataStream Scale-Out Storage allows your Block Storage service to scale seamlessly. The architecture consists of commodity storage servers with SDN ToR switches. Leveraging an SDN OpenFlow controller allows you to scale storage horizontally, while avoiding storage and network bottlenecks by intelligent load-balancing and parallelized workloads. High-performance PCIe NVMe flash, paired with traditional hard disk drives (HDD) or solid-state drives (SSD), delivers low-latency performance even with highly mixed workloads in large scale environment.
Coho Data’s storage features include real-time instance level granularity performance and capacity reporting via API or UI, and single-IP storage endpoint access.
QoS support for the Coho Data driver includes the ability to set the
following capabilities in the OpenStack Block Storage API
cinder.api.contrib.qos_specs_manage
QoS specs extension module:
The QoS keys above must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:
$ cinder help qos-create
$ cinder help qos-key
$ cinder help qos-associate
Note
If you change a volume type with QoS to a new volume type without QoS, the QoS configuration settings will be removed.
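As a sketch of the end-to-end workflow, a QoS spec can be created and associated with a volume type as shown below. The key name maxIOPS is an illustrative assumption; consult the driver documentation for the exact QoS keys supported by your Coho Data driver version.

```shell
# Create a QoS spec (key name maxIOPS is assumed for illustration)
$ cinder qos-create coho-qos maxIOPS=5000

# Associate the QoS spec with an existing volume type by ID
$ cinder qos-associate <qos-spec-id> <volume-type-id>
```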
Create a cinder volume type.
$ cinder type-create coho-1
Edit the OpenStack Block Storage service configuration file.
The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Coho Data storage back end:
[DEFAULT]
enabled_backends = coho-1
default_volume_type = coho-1
[coho-1]
volume_driver = cinder.volume.drivers.coho.CohoDriver
volume_backend_name = coho-1
nfs_shares_config = /etc/cinder/coho_shares
nas_secure_file_operations = 'false'
Add your list of Coho Datastream NFS addresses to the file you specified
with the nfs_shares_config
option. For example, if the value of this
option was set to /etc/cinder/coho_shares
, then:
$ cat /etc/cinder/coho_shares
<coho-nfs-ip>:/<export-path>
Restart the cinder-volume service to enable the Coho Data driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
coho_rpc_port = 2049 |
(Integer) RPC port to connect to Coho Data MicroArray |
CoprHD is an open source software-defined storage controller and API platform. It enables policy-based management and cloud automation of storage resources for block, object and file storage providers. For more details, see CoprHD.
EMC ViPR Controller is the commercial offering of CoprHD. These same volume drivers can also be considered as EMC ViPR Controller Block Storage drivers.
CoprHD version 3.0 is required. Refer to the CoprHD documentation for installation and configuration instructions.
If you are using these drivers to integrate with EMC ViPR Controller, use EMC ViPR Controller 3.0.
The following operations are supported:
The following table contains the configuration options specific to the CoprHD volume driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
coprhd_emulate_snapshot = False |
(Boolean) True | False to indicate if the storage array in CoprHD is VMAX or VPLEX |
coprhd_hostname = None |
(String) Hostname for the CoprHD Instance |
coprhd_password = None |
(String) Password for accessing the CoprHD Instance |
coprhd_port = 4443 |
(Port number) Port for the CoprHD Instance |
coprhd_project = None |
(String) Project to utilize within the CoprHD Instance |
coprhd_scaleio_rest_gateway_host = None |
(String) Rest Gateway IP or FQDN for Scaleio |
coprhd_scaleio_rest_gateway_port = 4984 |
(Port number) Rest Gateway Port for Scaleio |
coprhd_scaleio_rest_server_password = None |
(String) Rest Gateway Password |
coprhd_scaleio_rest_server_username = None |
(String) Username for Rest Gateway |
coprhd_tenant = None |
(String) Tenant to utilize within the CoprHD Instance |
coprhd_username = None |
(String) Username for accessing the CoprHD Instance |
coprhd_varray = None |
(String) Virtual Array to utilize within the CoprHD Instance |
scaleio_server_certificate_path = None |
(String) Server certificate path |
scaleio_verify_server_certificate = False |
(Boolean) verify server certificate |
This involves setting up the CoprHD environment first and then configuring the CoprHD Block Storage driver.
The CoprHD environment must meet specific configuration requirements to support the OpenStack Block Storage driver.
Note
Use each back end to manage one virtual array and one virtual storage pool. However, you can have multiple instances of the CoprHD Block Storage driver sharing the same virtual array and virtual storage pool.
cinder.conf
Modify /etc/cinder/cinder.conf
by adding the following lines,
substituting values for your environment:
[coprhd-iscsi]
volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
volume_backend_name = coprhd-iscsi
coprhd_hostname = <CoprHD-Host-Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
coprhd_emulate_snapshot = True or False, True if the CoprHD vpool has VMAX or VPLEX as the backing storage
If you use the ScaleIO back end, add the following lines:
coprhd_scaleio_rest_gateway_host = <IP or FQDN>
coprhd_scaleio_rest_gateway_port = 443
coprhd_scaleio_rest_server_username = <username>
coprhd_scaleio_rest_server_password = <password>
scaleio_verify_server_certificate = True or False
scaleio_server_certificate_path = <path-of-certificate-for-validation>
Specify the driver using the enabled_backends
parameter:
enabled_backends = coprhd-iscsi
Note
To utilize the Fibre Channel driver, replace the
volume_driver
line above with:
volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
Note
To utilize the ScaleIO driver, replace the volume_driver
line
above with:
volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
Note
Set coprhd_emulate_snapshot
to True if the CoprHD vpool has
VMAX or VPLEX as the back-end storage. For these types of back-end
storage, when a user tries to create a snapshot, an actual volume
gets created in the back end.
Modify the rpc_response_timeout
value in /etc/cinder/cinder.conf
to
at least 5 minutes. If this entry does not already exist within the
cinder.conf
file, add it in the [DEFAULT]
section:
[DEFAULT]
...
rpc_response_timeout = 300
Now, restart the cinder-volume
service.
Volume type creation and extra specs
Create OpenStack volume types:
$ openstack volume type create <typename>
Map the OpenStack volume type to the CoprHD virtual pool:
$ openstack volume type set <typename> --property CoprHD:VPOOL=<CoprHD-PoolName>
Map the volume type created to appropriate back-end driver:
$ openstack volume type set <typename> --property volume_backend_name=<VOLUME_BACKEND_DRIVER>
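For example, assuming a CoprHD virtual pool named "CoprHD-Gold" (a hypothetical name) and the coprhd-iscsi back end configured earlier, the full sequence would look like this:

```shell
# Create the volume type (type name is illustrative)
$ openstack volume type create "coprhd-gold"

# Map the type to the CoprHD virtual pool
$ openstack volume type set "coprhd-gold" --property CoprHD:VPOOL="CoprHD-Gold"

# Map the type to the back-end driver by its volume_backend_name
$ openstack volume type set "coprhd-gold" --property volume_backend_name=coprhd-iscsi
```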
cinder.conf
Add or modify the following entries if you are planning to use multiple back-end drivers:
enabled_backends = coprhddriver-iscsi,coprhddriver-fc,coprhddriver-scaleio
Add the following at the end of the file:
[coprhddriver-iscsi]
volume_driver = cinder.volume.drivers.coprhd.iscsi.EMCCoprHDISCSIDriver
volume_backend_name = EMCCoprHDISCSIDriver
coprhd_hostname = <CoprHD Host Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
[coprhddriver-fc]
volume_driver = cinder.volume.drivers.coprhd.fc.EMCCoprHDFCDriver
volume_backend_name = EMCCoprHDFCDriver
coprhd_hostname = <CoprHD Host Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
[coprhddriver-scaleio]
volume_driver = cinder.volume.drivers.coprhd.scaleio.EMCCoprHDScaleIODriver
volume_backend_name = EMCCoprHDScaleIODriver
coprhd_hostname = <CoprHD Host Name>
coprhd_port = 4443
coprhd_username = <username>
coprhd_password = <password>
coprhd_tenant = <CoprHD-Tenant-Name>
coprhd_project = <CoprHD-Project-Name>
coprhd_varray = <CoprHD-Virtual-Array-Name>
coprhd_scaleio_rest_gateway_host = <ScaleIO Rest Gateway>
coprhd_scaleio_rest_gateway_port = 443
coprhd_scaleio_rest_server_username = <rest gateway username>
coprhd_scaleio_rest_server_password = <rest gateway password>
scaleio_verify_server_certificate = True or False
scaleio_server_certificate_path = <certificate path>
Restart the cinder-volume
service.
Volume type creation and extra specs
Set up the volume types and the volume-type to volume-backend association:
$ openstack volume type create "CoprHD High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property CoprHD:VPOOL="High Performance ISCSI"
$ openstack volume type set "CoprHD High Performance ISCSI" --property volume_backend_name= EMCCoprHDISCSIDriver
$ openstack volume type create "CoprHD High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property CoprHD:VPOOL="High Performance FC"
$ openstack volume type set "CoprHD High Performance FC" --property volume_backend_name= EMCCoprHDFCDriver
$ openstack volume type create "CoprHD performance SIO"
$ openstack volume type set "CoprHD performance SIO" --property CoprHD:VPOOL="Scaled Perf"
$ openstack volume type set "CoprHD performance SIO" --property volume_backend_name= EMCCoprHDScaleIODriver
Install the ScaleIO SDC on the compute host.
The compute host must be added as an SDC to the ScaleIO MDM using the following command, supplying the list of MDM IPs starting with the primary MDM and separated by commas:
/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip <list-of-MDM-IPs>
Example:
/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm --ip 10.247.78.45,10.247.78.46,10.247.78.47
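To confirm which MDM IPs the SDC is currently configured with, the same drv_cfg utility can be queried. The --query_mdms flag is an assumption based on the ScaleIO SDC tooling; consult the ScaleIO documentation for your version.

```shell
# List the MDM IPs known to this SDC (flag assumed; verify against your ScaleIO version)
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
```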
This step has to be repeated whenever the SDC (compute host in this case) is rebooted.
To enable support for consistency group and consistency group snapshot
operations, use a text editor to edit the file /etc/cinder/policy.json
and
change the values of the following fields as specified. After editing the file,
restart the c-api
service:
"consistencygroup:create" : "",
"consistencygroup:delete": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:update": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",
All resources, such as volumes, consistency groups, snapshots, and consistency group snapshots, use their OpenStack display names for naming in the back-end storage.
The Datera Elastic Data Fabric (EDF) is a scale-out storage software that turns standard, commodity hardware into a RESTful API-driven, intent-based policy controlled storage fabric for large-scale clouds. The Datera EDF integrates seamlessly with the Block Storage service and provides storage through the iSCSI block protocol. Datera supports all of the Block Storage services.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
datera_503_interval = 5 |
(Integer) Interval between 503 retries |
datera_503_timeout = 120 |
(Integer) Timeout for HTTP 503 retry messages |
datera_acl_allow_all = False |
(Boolean) DEPRECATED: True to set acl ‘allow_all’ on volumes created |
datera_api_port = 7717 |
(String) Datera API port. |
datera_api_version = 2 |
(String) Datera API version. |
datera_debug = False |
(Boolean) True to set function arg and return logging |
datera_debug_replica_count_override = False |
(Boolean) ONLY FOR DEBUG/TESTING PURPOSES True to set replica_count to 1 |
datera_num_replicas = 3 |
(Integer) DEPRECATED: Number of replicas to create of an inode. |
Modify the /etc/cinder/cinder.conf file for the Block Storage service.
[DEFAULT]
# ...
enabled_backends = datera
# ...
default_volume_type = datera
Create a back-end section for the Datera driver. The san_ip can be either the Datera Management Network VIP or one of the Datera iSCSI Access Network VIPs, depending on the network segregation requirements:
volume_driver = cinder.volume.drivers.datera.DateraDriver
san_ip = <IP_ADDR> # The OOB Management IP of the cluster
san_login = admin # Your cluster admin login
san_password = password # Your cluster admin password
san_is_local = true
datera_num_replicas = 3 # Number of replicas to use for volume
Verify IP connectivity to the san_ip:
$ ping -c 4 <san_IP>
Restart the cinder-volume services:
$ service cinder-volume restart
QoS support for the Datera drivers includes the ability to set the following capabilities in QoS Specs
# Create qos spec
$ cinder qos-create DateraBronze total_iops_max=1000 \
total_bandwidth_max=2000
# Associate qos-spec with volume type
$ cinder qos-associate <qos-spec-id> <volume-type-id>
# Add additional qos values or update existing ones
$ cinder qos-key <qos-spec-id> set read_bandwidth_max=500
The following configuration is for 3.x Linux kernels; some parameters may differ between Linux distributions. Make the following changes in the multipath.conf file:
defaults {
checker_timer 5
}
devices {
device {
vendor "DATERA"
product "IBLOCK"
getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --page=0x80 --device=/dev/%n"
path_grouping_policy group_by_prio
path_checker tur
prio alua
path_selector "queue-length 0"
hardware_handler "1 alua"
failback 5
}
}
blacklist {
device {
vendor ".*"
product ".*"
}
}
blacklist_exceptions {
device {
vendor "DATERA.*"
product "IBLOCK.*"
}
}
The Dell EqualLogic volume driver interacts with configured EqualLogic arrays and supports various operations.
The OpenStack Block Storage service supports:
The Dell EqualLogic volume driver’s ability to access the EqualLogic Group depends on the generic block storage driver’s SSH settings in the /etc/cinder/cinder.conf file (see Block Storage service sample configuration files for reference).
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
eqlx_chap_login = admin |
(String) Existing CHAP account name. Note that this option is deprecated in favour of “chap_username” as specified in cinder/volume/driver.py and will be removed in next release. |
eqlx_chap_password = password |
(String) Password for specified CHAP account name. Note that this option is deprecated in favour of “chap_password” as specified in cinder/volume/driver.py and will be removed in the next release |
eqlx_cli_max_retries = 5 |
(Integer) Maximum retry count for reconnection. Default is 5. |
eqlx_cli_timeout = 30 |
(Integer) Timeout for the Group Manager cli command execution. Default is 30. Note that this option is deprecated in favour of “ssh_conn_timeout” as specified in cinder/volume/drivers/san/san.py and will be removed in M release. |
eqlx_group_name = group-0 |
(String) Group name to use for creating volumes. Defaults to “group-0”. |
eqlx_pool = default |
(String) Pool in which volumes will be created. Defaults to “default”. |
eqlx_use_chap = False |
(Boolean) Use CHAP authentication for targets. Note that this option is deprecated in favour of “use_chap_auth” as specified in cinder/volume/driver.py and will be removed in next release. |
The following sample /etc/cinder/cinder.conf
configuration lists the
relevant settings for a typical Block Storage service using a single
Dell EqualLogic Group:
[DEFAULT]
# Required settings
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
# Optional settings
san_thin_provision = true|false
eqlx_use_chap = true|false
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
eqlx_cli_max_retries = 5
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
In this example, replace the following variables accordingly:
IP_EQLX: The IP address used to reach the Dell EqualLogic Group through SSH.
SAN_UNAME: The user name to log in to the Group manager via SSH at san_ip. The default user name is grpadmin.
SAN_PW: The corresponding password of SAN_UNAME. Not used when san_private_key is set. The default password is password.
EQLX_GROUP: The group to be used for a pool where the Block Storage service will create volumes and snapshots. The default group is group-0.
EQLX_POOL: The pool where the Block Storage service will create volumes and snapshots. The default pool is default. This option cannot be used for multiple pools utilized by the Block Storage service on a single Dell EqualLogic Group.
EQLX_UNAME: The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true. The default account name is chapadmin.
EQLX_PW: The corresponding password of EQLX_UNAME. There is no default value.
SAN_KEY_PATH: (Optional) The file name of the private key used for SSH authentication. This provides password-less login to the EqualLogic Group. Not used when san_password is set.
In addition, enable thin provisioning for SAN volumes using the default
san_thin_provision = true
setting.
The following example shows the typical configuration for a Block Storage service that uses two Dell EqualLogic back ends:
enabled_backends = backend1,backend2
san_ssh_port = 22
ssh_conn_timeout = 30
san_thin_provision = true
[backend1]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend1
san_ip = IP_EQLX1
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
[backend2]
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
volume_backend_name = backend2
san_ip = IP_EQLX2
san_login = SAN_UNAME
san_password = SAN_PW
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
In this example:
Thin provisioning for SAN volumes is enabled (san_thin_provision = true). This is recommended when setting up Dell EqualLogic back ends.
Each back end configured under its own section ([backend1] and [backend2]) has the same required settings as a single back-end configuration, with the addition of volume_backend_name.
The san_ssh_port option is set to its default value, 22. This option sets the port used for SSH.
The ssh_conn_timeout option is also set to its default value, 30. This option sets the timeout in seconds for CLI commands over SSH.
IP_EQLX1 and IP_EQLX2 refer to the IP addresses used to reach the Dell EqualLogic Group of backend1 and backend2 through SSH, respectively.
For information on configuring multiple back ends, see Configure a multiple-storage back end.
The Dell Storage Center volume driver interacts with configured Storage Center arrays.
The Dell Storage Center driver manages Storage Center arrays through
the Dell Storage Manager (DSM). DSM connection settings and Storage
Center options are defined in the cinder.conf
file.
Prerequisite: Dell Storage Manager 2015 R1 or later must be used.
The Dell Storage Center volume driver provides the following Cinder volume operations:
Volume type extra specs can be used to enable a variety of Dell Storage Center options, including selecting Storage Profiles and Replay Profiles, enabling replication, and setting replication options such as Live Volume and Active Replay replication.
Storage Profiles control how Storage Center manages volume data. For a given volume, the selected Storage Profile dictates which disk tier accepts initial writes, as well as how data progression moves data between tiers to balance performance and cost. Predefined Storage Profiles are the most effective way to manage data in Storage Center.
By default, if no Storage Profile is specified in the volume extra
specs, the default Storage Profile for the user account configured for
the Block Storage driver is used. The extra spec key
storagetype:storageprofile
, with the value set to the name of a Storage
Profile on the Storage Center, can be used to select Storage
Profiles other than the default.
For ease of use from the command line, spaces in Storage Profile names
are ignored. As an example, here is how to define two volume types using
the High Priority
and Low Priority
Storage Profiles:
$ cinder type-create "GoldVolumeType"
$ cinder type-key "GoldVolumeType" set storagetype:storageprofile=highpriority
$ cinder type-create "BronzeVolumeType"
$ cinder type-key "BronzeVolumeType" set storagetype:storageprofile=lowpriority
Replay Profiles control how often the Storage Center takes a replay of a
given volume and how long those replays are kept. The default profile is
the daily
profile that sets the replay to occur once a day and to
persist for one week.
The extra spec key storagetype:replayprofiles
, with the value set to the
name of a Replay Profile or profiles on the Storage Center, can be used
to select Replay Profiles other than the default daily
profile.
As an example, here is how to define a volume type using the hourly
Replay Profile and another specifying both hourly
and the default
daily
profile:
$ cinder type-create "HourlyType"
$ cinder type-key "HourlyType" set storagetype:replayprofile=hourly
$ cinder type-create "HourlyAndDailyType"
$ cinder type-key "HourlyAndDailyType" set storagetype:replayprofiles=hourly,daily
Note the comma separated string for the HourlyAndDailyType
.
Replication for a given volume type is enabled via the extra spec
replication_enabled
.
To create a volume type that specifies only replication enabled back ends:
$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
Extra specs can be used to configure replication. In addition to the Replay
Profiles above, replication:activereplay
can be set to enable replication
of the volume’s active replay, and the replication type can be changed to
synchronous via the replication_type
extra spec.
To create a volume type that enables replication of the active replay:
$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication:activereplay='<is> True'
To create a volume type that enables synchronous replication:
$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication_type='<in> sync'
To create a volume type that enables replication using Live Volume:
$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
$ cinder type-key "ReplicationType" set replication:livevolume='<is> True'
Use the following instructions to update the configuration file for iSCSI:
default_volume_type = delliscsi
enabled_backends = delliscsi
[delliscsi]
# Name to give this storage back-end
volume_backend_name = delliscsi
# The iSCSI driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
# IP address of DSM
san_ip = 172.23.8.101
# DSM user name
san_login = Admin
# DSM password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702
# ==Optional settings==
# The DSM API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
Use the following instructions to update the configuration file for fibre channel:
default_volume_type = dellfc
enabled_backends = dellfc
[dellfc]
# Name to give this storage back-end
volume_backend_name = dellfc
# The FC driver to load
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_fc.DellStorageCenterFCDriver
# IP address of the DSM
san_ip = 172.23.8.101
# DSM user name
san_login = Admin
# DSM password
san_password = secret
# The Storage Center serial number to use
dell_sc_ssn = 64702
# ==Optional settings==
# The DSM API port
dell_sc_api_port = 3033
# Server folder to place new server definitions
dell_sc_server_folder = devstacksrv
# Volume folder to place created volumes
dell_sc_volume_folder = devstackvol/Cinder
It is possible to specify a secondary DSM to use in case the primary DSM fails.
Configuration is done through the cinder.conf file. Both DSMs have to be configured to manage the same set of Storage Centers for this back end, that is, the Storage Center identified by dell_sc_ssn and any Storage Centers used for replication or Live Volume.
Add network and credential information to the backend to enable Dual DSM.
[dell]
# The IP address and port of the secondary DSM.
secondary_san_ip = 192.168.0.102
secondary_sc_api_port = 3033
# Specify credentials for the secondary DSM.
secondary_san_login = Admin
secondary_san_password = secret
The driver will use the primary DSM until a failure, at which point it will attempt to use the secondary. It will continue to use the secondary until the volume service is restarted or the secondary fails, at which point it will attempt to use the primary again.
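For example, a complete back-end section for the iSCSI driver with Dual DSM enabled might look like the following sketch, which merges the secondary DSM options into the [delliscsi] back end shown earlier (all addresses and credentials are illustrative):

```ini
[delliscsi]
volume_driver = cinder.volume.drivers.dell.dell_storagecenter_iscsi.DellStorageCenterISCSIDriver
volume_backend_name = delliscsi
# Primary DSM
san_ip = 172.23.8.101
san_login = Admin
san_password = secret
dell_sc_ssn = 64702
# Secondary DSM, used if the primary fails
secondary_san_ip = 192.168.0.102
secondary_sc_api_port = 3033
secondary_san_login = Admin
secondary_san_password = secret
```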
Add the following to the back-end specification to specify another Storage Center to replicate to.
[dell]
replication_device = target_device_id: 65495, qosnode: cinderqos
The target_device_id
is the SSN of the remote Storage Center and the
qosnode
is the QoS Node setup between the two Storage Centers.
Note that more than one replication_device
line can be added. This will
slow things down, however.
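For example, to replicate to two remote Storage Centers, two replication_device lines could be added as follows (the SSNs and QoS node names are illustrative):

```ini
[dell]
# One replication_device line per remote Storage Center
replication_device = target_device_id: 65495, qosnode: cinderqos
replication_device = target_device_id: 65496, qosnode: cinderqos2
```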
A volume is only replicated if the volume is of a volume-type that has
the extra spec replication_enabled
set to <is> True
.
This driver supports both standard replication and Live Volume (if supported and licensed). The main difference is that a VM attached to a Live Volume is mapped to both Storage Centers. In the case of a failure of the primary, Live Volume still requires a failover-host command to move control of the volume to the secondary controller.
Existing mappings should continue to work without the instance being remapped, but the instance might need to be rebooted.
Live Volume is more resource intensive than replication, so plan accordingly.
The failover-host command is designed for the case where the primary system is not coming back. If it has been executed and the primary has since been restored, it is possible to attempt a failback.
Simply specify default as the backend_id.
$ cinder failover-host cinder@delliscsi --backend_id default
Non-trivial heavy lifting is done by this command. It attempts to recover as best it can, but if things have diverged too far it can only do so much. It is also a one-time command, so do not reboot or restart the service in the middle of it.
Failover and failback are significant operations under OpenStack Block Storage. Be sure to consult with support before attempting them.
This option allows you to set a default Server OS type to use when creating a server definition on the Dell Storage Center.
When attaching a volume to a node, the Dell Storage Center driver creates a server definition on the storage array. This definition includes a Server OS type. The type used by the Dell Storage Center cinder driver is "Red Hat Linux 6.x". This is a modern operating system definition that supports all the features of an OpenStack node.
Add the following to the back-end specification to specify the Server OS to use when creating a server definition. The server type used must come from the drop down list in the DSM.
[dell]
default_server_os = 'Red Hat Linux 7.x'
Note that this server definition is created once. Changing this setting after the fact will not change an existing definition. The selected Server OS does not have to match the actual OS used on the node.
The following table contains the configuration options specific to the Dell Storage Center volume driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dell_sc_api_port = 3033 |
(Port number) Dell API port |
dell_sc_server_folder = openstack |
(String) Name of the server folder to use on the Storage Center |
dell_sc_ssn = 64702 |
(Integer) Storage Center System Serial Number |
dell_sc_verify_cert = False |
(Boolean) Enable HTTPS SC certificate verification |
dell_sc_volume_folder = openstack |
(String) Name of the volume folder to use on the Storage Center |
dell_server_os = Red Hat Linux 6.x |
(String) Server OS type to use when creating a new server on the Storage Center. |
excluded_domain_ip = None |
(Unknown) Domain IP to be excluded from iSCSI returns. |
secondary_san_ip = |
(String) IP address of secondary DSM controller |
secondary_san_login = Admin |
(String) Secondary DSM user name |
secondary_san_password = |
(String) Secondary DSM user password name |
secondary_sc_api_port = 3033 |
(Port number) Secondary Dell API port |
The DotHillFCDriver
and DotHillISCSIDriver
volume drivers allow
Dot Hill arrays to be used for block storage in OpenStack deployments.
To use the Dot Hill drivers, the following are required:
Verify that the array can be managed via an HTTPS connection. HTTP can
also be used if dothill_api_protocol=http
is placed into the
appropriate sections of the cinder.conf
file.
Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.
If you plan to use vdisks instead of virtual pools, create or identify one or more vdisks to be used for OpenStack storage; typically this will mean creating or setting aside one disk group for each of the A and B controllers.
Edit the cinder.conf file to define a storage back-end entry for
each storage pool on the array that will be managed by OpenStack. Each
entry consists of a unique section name, surrounded by square brackets,
followed by options specified in key=value
format.
The dothill_backend_name value specifies the name of the storage pool or vdisk on the array.
The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
Each entry also specifies the volume_driver to load; the san_ip address or host name of the array management interface; san_login and san_password credentials for an array user account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.
In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.
iSCSI example back-end entries
[pool-a]
dothill_backend_name = A
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
dothill_iscsi_ips = 10.2.3.4,10.2.3.5
[pool-b]
dothill_backend_name = B
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_iscsi.DotHillISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
dothill_iscsi_ips = 10.2.3.4,10.2.3.5
Fibre Channel example back-end entries
[pool-a]
dothill_backend_name = A
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
[pool-b]
dothill_backend_name = B
volume_backend_name = dothill-array
volume_driver = cinder.volume.drivers.dothill.dothill_fc.DotHillFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
If any dothill_backend_name value refers to a vdisk rather than a
virtual pool, add an additional statement
dothill_backend_type = linear
to that back-end entry.
If HTTPS is not enabled in the array, include
dothill_api_protocol = http
in each of the back-end definitions.
If HTTPS is enabled, you can enable certificate verification with the
option dothill_verify_certificate=True
. You may also use the
dothill_verify_certificate_path
parameter to specify the path to a
CA_BUNDLE file containing CAs other than those in the default list.
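A back-end entry with certificate verification enabled might therefore include the following lines; the CA bundle path is an assumption for illustration:

```ini
[pool-a]
# Verify the array's SSL certificate against a custom CA bundle
dothill_verify_certificate = True
dothill_verify_certificate_path = /etc/ssl/certs/array-ca.pem
```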
Modify the [DEFAULT]
section of the cinder.conf
file to add an
enabled_backends
parameter specifying the back-end entries you added,
and a default_volume_type
parameter specifying the name of a volume
type that you will create in the next step.
Example of [DEFAULT] section changes
[DEFAULT]
...
enabled_backends = pool-a,pool-b
default_volume_type = dothill
...
Create a new volume type for each distinct volume_backend_name
value
that you added to cinder.conf. The example below assumes that the same
volume_backend_name=dothill-array
option was specified in all of the
entries, and specifies that the volume type dothill
can be used to
allocate volumes from any of them.
Example of creating a volume type
$ cinder type-create dothill
$ cinder type-key dothill set volume_backend_name=dothill-array
After modifying cinder.conf
, restart the cinder-volume service.
The following table contains the configuration options that are specific to the Dot Hill drivers.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dothill_api_protocol = https |
(String) DotHill API interface protocol. |
dothill_backend_name = A |
(String) Pool or Vdisk name to use for volume creation. |
dothill_backend_type = virtual |
(String) linear (for Vdisk) or virtual (for Pool). |
dothill_iscsi_ips = |
(List) List of comma-separated target iSCSI IP addresses. |
dothill_verify_certificate = False |
(Boolean) Whether to verify DotHill array SSL certificate. |
dothill_verify_certificate_path = None |
(String) DotHill array SSL certificate path. |
ScaleIO is a software-only solution that uses existing servers’ local disks and LAN to create a virtual SAN that has all of the benefits of external storage, but at a fraction of the cost and complexity. Using the driver, Block Storage hosts can connect to a ScaleIO Storage cluster.
This section explains how to configure and connect the block storage nodes to a ScaleIO storage cluster.
ScaleIO version | Supported Linux operating systems |
---|---|
1.32 | CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12 |
2.0 | CentOS 6.x, CentOS 7.x, SLES 11 SP3, SLES 12, Ubuntu 14.04 |
Note
Ubuntu users must follow the specific instructions in the ScaleIO deployment guide for Ubuntu environments. See the Deploying on Ubuntu servers section in the ScaleIO Deployment Guide, available from the official ScaleIO documentation.
QoS support for the ScaleIO driver includes the ability to set the following capabilities in the Block Storage API cinder.api.contrib.qos_specs_manage QoS specs extension module:
maxIOPS
maxIOPSperGB
maxBWS
maxBWSperGB
The QoS keys above must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:
$ cinder help qos-create
$ cinder help qos-key
$ cinder help qos-associate
maxIOPS - an upper limit on IOPS.
maxIOPSperGB - an IOPS limit scaled by the volume size.
maxBWS - an upper limit on bandwidth.
maxBWSperGB - a bandwidth limit scaled by the volume size.
The driver always chooses the minimum between the QoS key value and the relevant calculated value of maxIOPSperGB or maxBWSperGB.
Since the limits are per SDC, they will be applied after the volume is attached to an instance, and thus to a compute node/SDC.
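The minimum rule above can be sketched as follows; the function and its names are illustrative, not part of the driver's API:

```python
def effective_limit(absolute_limit, per_gb_limit, volume_size_gb):
    """Illustrative sketch: the driver takes the minimum of the absolute
    QoS key (e.g. maxIOPS) and the per-GB key (e.g. maxIOPSperGB) scaled
    by the volume size."""
    return min(absolute_limit, per_gb_limit * volume_size_gb)

# A 10 GB volume with maxIOPS=5000 and maxIOPSperGB=300:
print(effective_limit(5000, 300, 10))  # 3000 -- the scaled value is lower
```

For a large volume the absolute key dominates instead: with the same keys, a 100 GB volume would be limited to 5000 IOPS.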
The Block Storage driver supports creation of thin-provisioned and thick-provisioned volumes. The provisioning type settings can be added as an extra specification of the volume type, as follows:
provisioning:type = thin/thick
The old specification sio:provisioning_type is deprecated.
Configure the oversubscription ratio by adding the following parameter under the separate section for ScaleIO:
sio_max_over_subscription_ratio = OVER_SUBSCRIPTION_RATIO
Note
The default value for sio_max_over_subscription_ratio is 10.0.
Oversubscription is calculated correctly by the Block Storage service only if the extra specification provisioning:type appears in the volume type, regardless of the default provisioning type. The maximum oversubscription value supported for ScaleIO is 10.0.
If provisioning type settings are not specified in the volume type, the default value is set according to the san_thin_provision option in the configuration file. The default provisioning type is thin if the option is not specified. To set the default provisioning type to thick, set the san_thin_provision option to false in the configuration file, as follows:
san_thin_provision = false
Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in the case of a single back end, or under a separate section in the case of multiple back ends (for example, [ScaleIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
For a configuration example, refer to the example cinder.conf.
Configure the driver name by adding the following parameter:
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
The ScaleIO Meta Data Manager monitors and maintains the available resources and permissions.
To retrieve the MDM server IP address, use the drv_cfg --query_mdms command.
Configure the MDM server IP address by adding the following parameter:
san_ip = ScaleIO GATEWAY IP
ScaleIO allows multiple Protection Domains (groups of SDSs that provide backup for each other).
To retrieve the available Protection Domains, use the command scli --query_all and search for the Protection Domains section.
Configure the Protection Domain for newly created volumes by adding the following parameter:
sio_protection_domain_name = ScaleIO Protection Domain
A ScaleIO Storage Pool is a set of physical devices in a Protection Domain.
To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.
Configure the Storage Pool for newly created volumes by adding the following parameter:
sio_storage_pool_name = ScaleIO Storage Pool
Multiple Storage Pools and Protection Domains can be listed for use by the virtual machines.
To retrieve the available Storage Pools, use the command scli --query_all and search for available Storage Pools.
Configure the available Storage Pools by adding the following parameter:
sio_storage_pools = Comma-separated list of protection domain:storage pool name
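The sio_storage_pools value is a comma-separated list of protection domain:storage pool pairs. The expected shape of the value can be illustrated with a small sketch; the helper below is illustrative only, not part of the driver:

```python
def parse_storage_pools(value):
    """Split a sio_storage_pools-style string such as
    "Domain1:Pool1,Domain2:Pool2" into (domain, pool) pairs."""
    pairs = []
    for entry in value.split(","):
        domain, pool = entry.strip().split(":")
        pairs.append((domain, pool))
    return pairs

print(parse_storage_pools("Domain1:Pool1,Domain2:Pool2"))
# [('Domain1', 'Pool1'), ('Domain2', 'Pool2')]
```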
Block Storage requires a ScaleIO user with administrative privileges. ScaleIO recommends creating a dedicated OpenStack user account that has an administrative user role.
Refer to the ScaleIO User Guide for details on user account management.
Configure the user credentials by adding the following parameters:
san_login = ScaleIO username
san_password = ScaleIO password
Configuring multiple storage back ends allows you to create several back-end storage solutions that serve the same Compute resources.
When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.
cinder.conf example file
You can update the cinder.conf file by editing the necessary parameters as follows:
[DEFAULT]
enabled_backends = scaleio
[scaleio]
volume_driver = cinder.volume.drivers.emc.scaleio.ScaleIODriver
volume_backend_name = scaleio
san_ip = GATEWAY_IP
sio_protection_domain_name = Default_domain
sio_storage_pool_name = Default_pool
sio_storage_pools = Domain1:Pool1,Domain2:Pool2
san_login = SIO_USER
san_password = SIO_PASSWD
san_thin_provision = false
The ScaleIO driver supports these configuration options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
sio_max_over_subscription_ratio = 10.0 |
(Floating point) max_over_subscription_ratio setting for the ScaleIO driver. This replaces the general max_over_subscription_ratio, which has no effect in this driver. The maximum value allowed for ScaleIO is 10.0. |
sio_protection_domain_id = None |
(String) Protection Domain ID. |
sio_protection_domain_name = None |
(String) Protection Domain name. |
sio_rest_server_port = 443 |
(String) REST server port. |
sio_round_volume_capacity = True |
(Boolean) Round up volume capacity. |
sio_server_certificate_path = None |
(String) Server certificate path. |
sio_storage_pool_id = None |
(String) Storage Pool ID. |
sio_storage_pool_name = None |
(String) Storage Pool name. |
sio_storage_pools = None |
(String) Storage Pools. |
sio_unmap_volume_before_deletion = False |
(Boolean) Unmap volume before deletion. |
sio_verify_server_certificate = False |
(Boolean) Verify server certificate. |
The EMC VMAX drivers, EMCVMAXISCSIDriver and EMCVMAXFCDriver, support the use of EMC VMAX storage arrays with Block Storage. They both provide equivalent functions and differ only in support for their respective host attachment methods.
The drivers perform volume operations by communicating with the back-end VMAX storage. They use a CIM client in Python called PyWBEM to perform CIM operations over HTTP.
The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back end for VMAX storage operations.
The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports the VMAX storage system.
The Cinder driver supports both VMAX-2 and VMAX-3 series.
For the VMAX-2 series, SMI-S version V4.6.2.29 (Solutions Enabler 7.6.2.67) or Solutions Enabler 8.1.2 is required.
For the VMAX-3 series, Solutions Enabler 8.3 is required. This is SSL only; refer to the SSL support section below.
When installing Solutions Enabler, make sure you explicitly add the SMI-S component.
You can download SMI-S from the EMC’s support web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.
Ensure that there is only one SMI-S (ECOM) server active on the same VMAX array.
There are five Software Suites available for the VMAX All Flash and Hybrid.
OpenStack requires the Advanced Suite and the Local Replication Suite, or the Total Productivity Pack (which includes the Advanced Suite and the Local Replication Suite), for the VMAX All Flash and Hybrid.
There are four bundled Software Suites for the VMAX2. OpenStack requires the Advanced Software Bundle for the VMAX2, or, from the VMAX2 Optional Software, TimeFinder for VMAX10K.
Each is licensed separately. For further details on how to obtain the relevant license(s), see eLicensing Support below.
To activate your entitlements and obtain your VMAX license files, visit the Service Center on https://support.emc.com, as directed on your License Authorization Code (LAC) letter emailed to you.
For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC account representative or authorized reseller.
For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.
If you are missing a LAC letter or require further instructions on activating your licenses through the Online Support site, contact EMC's worldwide Licensing team at licensing@emc.com or call:
North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
EMEA: +353 (0) 21 4879862 and follow the voice prompts.
VMAX drivers support these operations:
VMAX drivers also support the following features:
VMAX2:
VMAX All Flash and Hybrid:
Note
VMAX All Flash arrays with Solutions Enabler 8.3 have compression enabled by default when associated with the Diamond Service Level. This means volumes added to any newly created storage groups will be compressed.
The following table shows pywbem support on Ubuntu 14.04 (LTS), Ubuntu 16.04 (LTS), Red Hat Enterprise Linux, CentOS, and Fedora:
Pywbem version | Python 2 (pip) | Python 2 (native) | Python 3 (pip) | Python 3 (native) |
---|---|---|---|---|
0.9.0 | No | N/A | Yes | N/A |
0.8.4 | No | N/A | Yes | N/A |
0.7.0 | No | Yes | No | Yes |
Note
On Python2, use the updated distro version, for example:
# apt-get install python-pywbem
Note
On Python3, use the official pywbem version (V0.9.0 or v0.8.4).
Install the python-pywbem package for your distribution.
On Ubuntu:
# apt-get install python-pywbem
On openSUSE:
# zypper install python-pywbem
On Red Hat Enterprise Linux, CentOS, and Fedora:
# yum install pywbem
Install iSCSI Utilities (for iSCSI drivers only).
Download and configure the Cinder node as an iSCSI initiator.
Install the open-iscsi package.
On Ubuntu:
# apt-get install open-iscsi
On openSUSE:
# zypper install open-iscsi
On Red Hat Enterprise Linux, CentOS, and Fedora:
# yum install iscsi-initiator-utils
Enable the iSCSI driver to start automatically.
Download SMI-S from support.emc.com
and install it. Add your VMAX arrays
to SMI-S.
You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S is ESX only. See the EMC SMI-S Provider release notes for more information on supported platforms and installation instructions.
Note
You must discover storage arrays on the SMI-S server before you can use the VMAX drivers. Follow instructions in the SMI-S release notes.
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe on Windows or ./TestSmiProvider on Linux.
Use addsys in TestSmiProvider to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.
Configure Block Storage
Add the following entries to /etc/cinder/cinder.conf:
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC
[CONF_GROUP_ISCSI]
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name = ISCSI_backend
[CONF_GROUP_FC]
volume_driver = cinder.volume.drivers.emc.emc_vmax_fc.EMCVMAXFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name = FC_backend
In this example, two back-end configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.
Once the cinder.conf and EMC-specific configuration files have been created, cinder commands need to be issued in order to create and associate OpenStack volume types with the declared volume_backend_names:
$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
Create the /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml file. You do not need to restart the service for this change.
Add the following lines to the XML file. For a VMAX2 array:
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
<EcomServerIp>1.1.1.1</EcomServerIp>
<EcomServerPort>00</EcomServerPort>
<EcomUserName>user1</EcomUserName>
<EcomPassword>password1</EcomPassword>
<PortGroups>
<PortGroup>OS-PORTGROUP1-PG</PortGroup>
<PortGroup>OS-PORTGROUP2-PG</PortGroup>
</PortGroups>
<Array>111111111111</Array>
<Pool>FC_GOLD1</Pool>
<FastPolicy>GOLD1</FastPolicy>
</EMC>
For VMAX All Flash and Hybrid arrays, use the SLO and Workload tags in place of Pool and FastPolicy:
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
<EcomServerIp>1.1.1.1</EcomServerIp>
<EcomServerPort>00</EcomServerPort>
<EcomUserName>user1</EcomUserName>
<EcomPassword>password1</EcomPassword>
<PortGroups>
<PortGroup>OS-PORTGROUP1-PG</PortGroup>
<PortGroup>OS-PORTGROUP2-PG</PortGroup>
</PortGroups>
<Array>111111111111</Array>
<Pool>SRP_1</Pool>
<SLO>Gold</SLO>
<Workload>OLTP</Workload>
</EMC>
Where:
EcomServerIp - IP address of the ECOM server.
EcomServerPort - port number of the ECOM server.
EcomUserName and EcomPassword - credentials for the ECOM server.
PortGroups - names of VMAX port groups that have been pre-configured to expose volumes managed by this back end.
Array - unique VMAX array serial number.
Pool - unique pool name within the given array.
FastPolicy - name of the FAST Policy to be associated with volumes. Omitting the FastPolicy tag means FAST is not enabled on the provided storage pool.
SLO - the Service Level Objective (VMAX All Flash and Hybrid only). Omitting the SLO tag means that non-FAST storage groups will be created instead (storage groups not associated with any service level).
Workload - the workload associated with the SLO (VMAX All Flash and Hybrid only). Omitting the Workload tag means the latency range will be the widest for its SLO type.
Zone Manager is required when there is a fabric between the host and array. This is necessary for larger configurations where pre-zoning would be too complex and open-zoning would raise security concerns.
Make sure that the iscsi-initiator-utils package is installed on all Compute nodes.
Note
You can only ping the VMAX iSCSI target ports when there is a valid masking view. An attach operation creates this masking view.
Masking views are dynamically created by the VMAX FC and iSCSI drivers using the following naming conventions. [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.
VMAX2
OS-[shortHostName]-[poolName]-[protocol]-MV
VMAX2 (where FAST policy is used)
OS-[shortHostName]-[fastPolicy]-[protocol]-MV
VMAX All Flash and Hybrid
OS-[shortHostName]-[SRP]-[SLO]-[workload]-[protocol]-MV
For each host that is attached to VMAX volumes using the drivers, an initiator group is created or re-used (per attachment type). All initiators of the appropriate type known for that host are included in the group. At each new attach volume operation, the VMAX driver retrieves the initiators (either WWNNs or IQNs) from OpenStack and adds or updates the contents of the Initiator Group as required. Names are of the following format. [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.
OS-[shortHostName]-[protocol]-IG
Note
Hosts attaching to OpenStack-managed VMAX storage cannot also attach to storage on the same VMAX that is not managed by OpenStack.
VMAX array FA ports to be used in a new masking view are chosen from the list provided in the EMC configuration file.
As volumes are attached to a host, they are either added to an existing storage group (if it exists) or a new storage group is created and the volume is then added. Storage groups contain volumes created from a pool (either single-pool or FAST-controlled), attached to a single host, over a single connection type (iSCSI or FC). [protocol] is either I for volumes attached over iSCSI or F for volumes attached over Fibre Channel.
VMAX2
OS-[shortHostName]-[poolName]-[protocol]-SG
VMAX2 (where FAST policy is used)
OS-[shortHostName]-[fastPolicy]-[protocol]-SG
VMAX All Flash and Hybrid
OS-[shortHostName]-[SRP]-[SLO]-[Workload]-[protocol]-SG
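The naming conventions above can be sketched as a small helper; this function is illustrative only, not part of the driver:

```python
def vmax_names(short_host, protocol, srp, slo, workload):
    """Build the masking view (MV), initiator group (IG), and storage
    group (SG) names for a VMAX All Flash/Hybrid back end, following the
    OS-... patterns described above. protocol is "I" (iSCSI) or "F"
    (Fibre Channel)."""
    base = f"OS-{short_host}-{srp}-{slo}-{workload}-{protocol}"
    return {
        "masking_view": f"{base}-MV",
        "initiator_group": f"OS-{short_host}-{protocol}-IG",
        "storage_group": f"{base}-SG",
    }

names = vmax_names("compute1", "I", "SRP_1", "Gold", "OLTP")
print(names["masking_view"])  # OS-compute1-SRP_1-Gold-OLTP-I-MV
```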
In order to support later expansion of created volumes, the VMAX Block Storage drivers create concatenated volumes as the default layout. If later expansion is not required, users can opt to create striped volumes in order to optimize I/O performance.
Below is an example of how to create striped volumes. First, create a volume type. Then define the extra spec for the volume type storagetype:stripecount, representing the number of meta members in the striped volume. The example below means that each volume created under the GoldStriped volume type will be striped and made up of 4 meta members.
$ cinder type-create GoldStriped
$ cinder type-key GoldStriped set volume_backend_name=GOLD_BACKEND
$ cinder type-key GoldStriped set storagetype:stripecount=4
Note
The ECOM component in Solutions Enabler enforces SSL in 8.3. By default, this port is 5989.
Get the CA certificate of the ECOM server:
# openssl s_client -showcerts -connect <ecom_hostname>.lss.emc.com:5989 </dev/null
Copy the pem file to the system certificate directory:
# cp <ecom_hostname>.lss.emc.com.pem /usr/share/ca-certificates/<ecom_hostname>.lss.emc.com.crt
Update CA certificate database with the following commands (accept defaults):
# dpkg-reconfigure ca-certificates
Update /etc/cinder/cinder.conf to reflect SSL functionality by adding the following to the back-end block:
driver_ssl_cert_verify = False
driver_use_ssl = True
driver_ssl_cert_path = /opt/stack/<ecom_hostname>.lss.emc.com.pem (Optional if Step 3 and 4 are skipped)
Update EcomServerIp to the ECOM host name and EcomServerPort to the secure port (5989 by default) in /etc/cinder/cinder_emc_config_<conf_group>.xml.
Oversubscription support requires /etc/cinder/cinder.conf to be updated with two additional tags, max_over_subscription_ratio and reserved_percentage. In the sample below, the value of 2.0 for max_over_subscription_ratio means that the pools are oversubscribed by a factor of 2, or 200% oversubscribed. The reserved_percentage is the high water mark whereby the remaining physical space cannot be exceeded. For example, if there is only 4% of physical space left and the reserve percentage is 5, the free space will equate to zero. This is a safety mechanism to prevent a scenario where a provisioning request fails due to insufficient raw space.
The parameters max_over_subscription_ratio and reserved_percentage are optional.
To set these parameters, go to the configuration group of the volume type in /etc/cinder/cinder.conf.
[VMAX_ISCSI_SILVER]
cinder_emc_config_file = /etc/cinder/cinder_emc_config_VMAX_ISCSI_SILVER.xml
volume_driver = cinder.volume.drivers.emc.emc_vmax_iscsi.EMCVMAXISCSIDriver
volume_backend_name = VMAX_ISCSI_SILVER
max_over_subscription_ratio = 2.0
reserved_percentage = 10
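The reserved_percentage behaviour described above can be illustrated with a small sketch; the function and its names are illustrative, not the scheduler's actual code:

```python
def reported_free_gb(total_gb, free_gb, reserved_percentage):
    """Free capacity as the scheduler would see it: the reserved slice of
    the pool is held back, so the reported free space bottoms out at zero."""
    reserved_gb = total_gb * reserved_percentage / 100.0
    return max(0.0, free_gb - reserved_gb)

# 4% of a 1000 GB pool left free, with a 5% reserve: nothing is reported free.
print(reported_free_gb(1000, 40, 5))  # 0.0
```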
For the second iteration of oversubscription, the driver takes into account the EMCMaxSubscriptionPercent property on the pool. This value is the highest that a pool can be oversubscribed. Some examples:
If EMCMaxSubscriptionPercent is 200 and the user-defined max_over_subscription_ratio is 2.5, the latter is ignored. Oversubscription is 200%.
If EMCMaxSubscriptionPercent is 200 and the user-defined max_over_subscription_ratio is 1.5, 1.5 equates to 150% and is less than the value set on the pool. Oversubscription is 150%.
If EMCMaxSubscriptionPercent is 0, there is no upper limit on the pool. If the user-defined max_over_subscription_ratio is 1.5, oversubscription is 150%.
If EMCMaxSubscriptionPercent is 0 and max_over_subscription_ratio is not set by the user, the recommended default upper limit of 150% is used.
Note
If FAST is set and multiple pools are associated with a FAST policy, then the same rules apply. The difference is, the TotalManagedSpace and EMCSubscribedCapacity for each pool associated with the FAST policy are aggregated.
If EMCMaxSubscriptionPercent is 200 on one pool and 300 on another pool, and the user-defined max_over_subscription_ratio is 2.5, oversubscription is 200% on the first pool and 250% on the other.
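The rules above can be sketched as a short helper applied per pool; this is an illustrative sketch, not the driver's code:

```python
def effective_ratio(pool_max_percent, user_ratio=None):
    """Combine the pool's EMCMaxSubscriptionPercent cap with the
    user-defined max_over_subscription_ratio, per the rules above.
    A pool cap of 0 means the pool imposes no upper limit."""
    if user_ratio is None:
        user_ratio = 1.5  # recommended default upper limit (150%)
    if pool_max_percent == 0:
        return user_ratio
    return min(user_ratio, pool_max_percent / 100.0)

print(effective_ratio(200, 2.5))  # 2.0 -> oversubscription capped at 200%
print(effective_ratio(200, 1.5))  # 1.5 -> 150%
print(effective_ratio(0))         # 1.5 -> default 150%
```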
Quality of service (QoS) has traditionally been associated with network bandwidth usage. Network administrators set limitations on certain networks in terms of bandwidth usage for clients. This enables them to provide a tiered level of service based on cost. The cinder QoS offers similar functionality based on volume type: it sets limits on host storage bandwidth per service offering. Each volume type is tied to specific QoS attributes that are unique to each storage vendor. The VMAX plugin offers limits via the following attributes:
Prerequisites - VMAX
Key | Value |
---|---|
maxIOPS | 4000 |
maxMBPS | 4000 |
DistributionType | Always |
Create QoS Specs with the prerequisite values above:
cinder qos-create <name> <key=value> [<key=value> ...]
$ cinder qos-create silver maxIOPS=4000 maxMBPS=4000 DistributionType=Always
Associate QoS specs with specified volume type:
cinder qos-associate <qos_specs id> <volume_type_id>
$ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
Create volume with the volume type indicated above:
cinder create [--name <name>] [--volume-type <volume-type>] size
$ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
Outcome - VMAX (storage group)
Outcome - Block Storage (cinder)
Volume is created against volume type and QoS is enforced with the parameters above.
Prerequisites - VMAX
Key | Value |
---|---|
maxIOPS | 4000 |
maxMBPS | 4000 |
DistributionType | Always |
Create QoS specifications with the prerequisite values above:
cinder qos-create <name> <key=value> [<key=value> ...]
$ cinder qos-create silver maxIOPS=4000 maxMBPS=4000 DistributionType=Always
Associate QoS specifications with specified volume type:
cinder qos-associate <qos_specs id> <volume_type_id>
$ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
Create volume with the volume type indicated above:
cinder create [--name <name>] [--volume-type <volume-type>] size
$ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
Outcome - VMAX (storage group)
Outcome - Block Storage (cinder)
Volume is created against volume type and QoS is enforced with the parameters above.
Prerequisites - VMAX
Key | Value |
---|---|
DistributionType | Always |
Create QoS specifications with the prerequisite values above:
cinder qos-create <name> <key=value> [<key=value> ...]
$ cinder qos-create silver DistributionType=Always
Associate QoS specifications with specified volume type:
cinder qos-associate <qos_specs id> <volume_type_id>
$ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
Create volume with the volume type indicated above:
cinder create [--name <name>] [--volume-type <volume-type>] size
$ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
Outcome - VMAX (storage group)
Outcome - Block Storage (cinder)
Volume is created against volume type and there is no QoS change.
Prerequisites - VMAX
Key | Value |
---|---|
DistributionType | OnFailure |
Create QoS specifications with the prerequisite values above:
cinder qos-create <name> <key=value> [<key=value> ...]
$ cinder qos-create silver DistributionType=OnFailure
Associate QoS specifications with specified volume type:
cinder qos-associate <qos_specs id> <volume_type_id>
$ cinder qos-associate 07767ad8-6170-4c71-abce-99e68702f051 224b1517-4a23-44b5-9035-8d9e2c18fb70
Create volume with the volume type indicated above:
cinder create [--name <name>] [--volume-type <volume-type>] size
$ cinder create --name test_volume --volume-type 224b1517-4a23-44b5-9035-8d9e2c18fb70 1
Outcome - VMAX (storage group)
Outcome - Block Storage (cinder)
Volume is created against volume type and there is no QoS change.
On Ubuntu:
# apt-get install open-iscsi #ensure iSCSI is installed
# apt-get install multipath-tools #multipath modules
# apt-get install sysfsutils sg3-utils #file system utilities
# apt-get install scsitools #SCSI tools
On openSUSE and SUSE Linux Enterprise Server:
# zypper install open-iscsi #ensure iSCSI is installed
# zypper install multipath-tools #multipath modules
# zypper install sysfsutils sg3-utils #file system utilities
# zypper install scsitools #SCSI tools
On Red Hat Enterprise Linux and CentOS:
# yum install iscsi-initiator-utils #ensure iSCSI is installed
# yum install device-mapper-multipath #multipath modules
# yum install sysfsutils sg3-utils #file system utilities
# yum install scsitools #SCSI tools
The multipath configuration file may be edited for better management and
performance. Log in as a privileged user and make the following changes to
/etc/multipath.conf
on the Compute (nova) node(s).
devices {
# Device attributed for EMC VMAX
device {
vendor "EMC"
product "SYMMETRIX"
path_grouping_policy multibus
getuid_callout "/lib/udev/scsi_id --page=pre-spc3-83 --whitelisted --device=/dev/%n"
path_selector "round-robin 0"
path_checker tur
features "0"
hardware_handler "0"
prio const
rr_weight uniform
no_path_retry 6
rr_min_io 1000
rr_min_io_rq 1
}
}
You may need to reboot the host after installing the MPIO tools or restart iSCSI and multipath services.
On Ubuntu:
# service open-iscsi restart
# service multipath-tools restart
On openSUSE, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and CentOS:
# systemctl restart open-iscsi
# systemctl restart multipath-tools
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1G 0 disk
..360000970000196701868533030303235 (dm-6) 252:6 0 1G 0 mpath
sdb 8:16 0 1G 0 disk
..360000970000196701868533030303235 (dm-6) 252:6 0 1G 0 mpath
vda 253:0 0 1T 0 disk
On the Compute (nova) node, add the following flag in the [libvirt] section of /etc/nova/nova.conf:
iscsi_use_multipath = True
On the cinder controller node, set the multipath flag to true in /etc/cinder/cinder.conf:
use_multipath_for_image_xfer = True
Restart the nova-compute and cinder-volume services after the change.
Create a 3GB VMAX volume.
Create an instance from an image out of native LVM storage or from VMAX storage, for example, from a bootable volume.
Attach the 3GB volume to the new instance:
$ multipath -ll
mpath102 (360000970000196700531533030383039) dm-3 EMC,SYMMETRIX
size=3G features='1 queue_if_no_path' hwhandler='0' wp=rw
'-+- policy='round-robin 0' prio=1 status=active
33:0:0:1 sdb 8:16 active ready running
'- 34:0:0:1 sdc 8:32 active ready running
Use the lsblk command to see the multipath device:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:0 0 3G 0 disk
..360000970000196700531533030383039 (dm-6) 252:6 0 3G 0 mpath
sdc 8:16 0 3G 0 disk
..360000970000196700531533030383039 (dm-6) 252:6 0 3G 0 mpath
vda
Consistency Group operations are performed through the CLI using v2 of the cinder API.
/etc/cinder/policy.json may need to be updated to enable the new API calls for Consistency Groups.
Note
Even though the terminology is ‘Consistency Group’ in OpenStack, a Storage Group is created on the VMAX, and should not be confused with a VMAX Consistency Group which is an SRDF construct. The Storage Group is not associated with any FAST policy.
Create a Consistency Group:
cinder --os-volume-api-version 2 consisgroup-create [--name <name>]
[--description <description>] [--availability-zone <availability-zone>]
<volume-types>
$ cinder --os-volume-api-version 2 consisgroup-create --name bronzeCG2 volume_type_1
List Consistency Groups:
cinder consisgroup-list [--all-tenants [<0|1>]]
$ cinder consisgroup-list
Show a Consistency Group:
cinder consisgroup-show <consistencygroup>
$ cinder consisgroup-show 38a604b7-06eb-4202-8651-dbf2610a0827
Update a consistency Group:
cinder consisgroup-update [--name <name>] [--description <description>]
[--add-volumes <uuid1,uuid2,......>] [--remove-volumes <uuid3,uuid4,......>]
<consistencygroup>
Change name:
$ cinder consisgroup-update --name updated_name 38a604b7-06eb-4202-8651-dbf2610a0827
Add volume(s) to a Consistency Group:
$ cinder consisgroup-update --add-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827
Delete volume(s) from a Consistency Group:
$ cinder consisgroup-update --remove-volumes af1ae89b-564b-4c7f-92d9-c54a2243a5fe 38a604b7-06eb-4202-8651-dbf2610a0827
Create a snapshot of a Consistency Group:
cinder cgsnapshot-create [--name <name>] [--description <description>]
<consistencygroup>
$ cinder cgsnapshot-create 618d962d-2917-4cca-a3ee-9699373e6625
Delete a snapshot of a Consistency Group:
cinder cgsnapshot-delete <cgsnapshot> [<cgsnapshot> ...]
$ cinder cgsnapshot-delete 618d962d-2917-4cca-a3ee-9699373e6625
Delete a Consistency Group:
cinder consisgroup-delete [--force] <consistencygroup> [<consistencygroup> ...]
$ cinder consisgroup-delete --force 618d962d-2917-4cca-a3ee-9699373e6625
Create a Consistency group from source (the source can only be a CG snapshot):
cinder consisgroup-create-from-src [--cgsnapshot <cgsnapshot>]
[--source-cg <source-cg>] [--name <name>] [--description <description>]
$ cinder consisgroup-create-from-src --source-cg 25dae184-1f25-412b-b8d7-9a25698fdb6d
You can also create a volume in a consistency group in one step:
cinder create [--consisgroup-id <consistencygroup-id>] [--name <name>]
[--description <description>] [--volume-type <volume-type>]
[--availability-zone <availability-zone>] <size>
$ cinder create --volume-type volume_type_1 --name cgBronzeVol --consisgroup-id 1de80c27-3b2f-47a6-91a7-e867cbe36462 1
VMAX Hybrid allows you to manage application storage by using Service Level Objectives (SLO) with policy-based automation rather than the tiering used in the VMAX2. The VMAX Hybrid comes with up to six SLO policies defined. Each has a set of workload characteristics that determine the drive types and mixes that will be used for the SLO. All storage in the VMAX array is virtually provisioned, and all of the pools are created in containers called Storage Resource Pools (SRP). Typically there is only one SRP, although there can be more. Therefore, it is the same pool that we provision to, but we can provide different SLO/Workload combinations.
The SLO capacity is retrieved by interfacing with Unisphere Workload Planner (WLP). If you do not set up this relationship then the capacity retrieved is that of the entire SRP. This can cause issues as it can never be an accurate representation of what storage is available for any given SLO and Workload combination.
Note
This should be set up ahead of time (allowing for several hours of data collection), so that the Unisphere for VMAX Performance Analyzer can collect rated metrics for each of the supported element types.
After enabling WLP you must then enable SMI-S to gain access to the WLP data:
Connect to the SMI-S Provider using TestSmiProvider.
Navigate to the Active menu.
Type reg and enter the noted responses to the questions:
(EMCProvider:5989) ? reg
Current list of statistics Access Points: ?
Note: The current list will be empty if there are no existing Access Points.
Add Statistics Access Point {y|n} [n]: y
HostID [l2se0060.lss.emc.com]: ?
Note: Enter the Unisphere for VMAX location using a fully qualified Host ID.
Port [8443]: ?
Note: The Port default is the Unisphere for VMAX default secure port. If the secure port
is different for your Unisphere for VMAX setup, adjust this value accordingly.
User [smc]: ?
Note: Enter the Unisphere for VMAX username.
Password [smc]: ?
Note: Enter the Unisphere for VMAX password.
Type reg again to view the current list:
(EMCProvider:5988) ? reg
Current list of statistics Access Points:
HostIDs:
l2se0060.lss.emc.com
PortNumbers:
8443
Users:
smc
Add Statistics Access Point {y|n} [n]: n
The EMC VNX driver interacts with the configured VNX array. It supports both the iSCSI and FC protocols.
The VNX cinder driver performs volume operations by executing Navisphere CLI (NaviSecCLI), a command-line interface used for management, diagnostics, and reporting functions for VNX. The driver uses the storops Python library to interact with the VNX array through Navisphere CLI.
This section contains instructions to prepare the Block Storage nodes to use the EMC VNX driver. You should install the Navisphere CLI and ensure you have correct zoning configurations.
Use the following command to install the storops library:
$ pip install storops
Make sure you have the following software installed for certain features:
Feature | Software Required |
---|---|
All | ThinProvisioning |
All | VNXSnapshots |
FAST cache support | FASTCache |
Create volume with type compressed | Compression |
Create volume with type deduplicated | Deduplication |
Required software
You can check the status of your array software on the Software page of Storage System Properties.
For the FC Driver, make sure FC zoning is properly configured between the hosts and the VNX. Check Register FC port with VNX for reference.
For the iSCSI Driver, make sure your VNX iSCSI port is accessible by your hosts. Check Register iSCSI port with VNX for reference.
You can use the initiator_auto_registration = True configuration to avoid registering the ports manually. Check the details of the configuration in Back-end configuration for reference.
If you are trying to set up multipath, refer to Multipath setup.
Make the following changes in the /etc/cinder/cinder.conf file.
Here is a sample of a minimum back-end configuration. See the following sections for the details of each option. Set storage_protocol = iscsi if the iSCSI protocol is used.
[DEFAULT]
enabled_backends = vnx_array1
[vnx_array1]
san_ip = 10.10.72.41
san_login = sysadmin
san_password = sysadmin
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc
Here is a sample of a minimum back-end configuration with multiple back ends enabled. See the following sections for the details of each option. Set storage_protocol = iscsi if the iSCSI protocol is used.
[DEFAULT]
enabled_backends = backendA, backendB
[backendA]
storage_vnx_pool_names = Pool_01_SAS, Pool_02_FLASH
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc
[backendB]
storage_vnx_pool_names = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
initiator_auto_registration = True
storage_protocol = fc
The value of the option storage_protocol can be either fc or iscsi, and is case insensitive.
For more details on multiple back ends, see Configure multiple-storage back ends.
IP of the VNX Storage Processors
Specify SP A or SP B IP to connect:
san_ip = <IP of VNX Storage Processor>
VNX login credentials
There are two ways to specify the credentials.
Use plain text username and password.
Supply the plain text username and password:
san_login = <VNX account with administrator role>
san_password = <password for VNX account>
storage_vnx_authentication_type = global
Valid values for storage_vnx_authentication_type are: global (default), local, and ldap.
Use a security file.
This approach avoids the plain text password in your cinder configuration file. Supply a security file as below:
storage_vnx_security_file_dir = <path to security file>
Check the Unisphere CLI user guide or Authenticate by security file for how to create a security file.
Path to your Unisphere CLI
Specify the absolute path to your naviseccli:
naviseccli_path = /opt/Navisphere/bin/naviseccli
Driver’s storage protocol
For the FC Driver, add the following options:
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
storage_protocol = fc
For the iSCSI Driver, add the following options:
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
storage_protocol = iscsi
Specify the list of pools to be managed, separated by commas. They should already exist in VNX.
storage_vnx_pool_names = pool 1, pool 2
If this value is not specified, all pools of the array will be used.
Initiator auto registration
When initiator_auto_registration is set to True, the driver will automatically register initiators to all working target ports of the VNX array during volume attaching (the driver will skip those initiators that have already been registered) if the option io_port_list is not specified in the cinder.conf file.
If the user wants to register the initiators only with some specific ports and not with the other ports, this functionality should be disabled.
When a comma-separated list is given to io_port_list, the driver will only register the initiator to the ports specified in the list, and will only return target port(s) which belong to the target ports in the io_port_list instead of all target ports.
Example for FC ports:
io_port_list = a-1,B-3
a or B is the Storage Processor, and the numbers 1 and 3 are Port IDs.
Example for iSCSI ports:
io_port_list = a-1-0,B-3-0
a or B is the Storage Processor, the first numbers 1 and 3 are Port IDs, and the second number 0 is the Virtual Port ID.
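As an illustrative sketch, io_port_list sits in the back-end section alongside initiator_auto_registration; the section name below reuses the [vnx_array1] name from the earlier sample, and the port IDs are assumptions for illustration:

```ini
[vnx_array1]
initiator_auto_registration = True
# Register initiators only to SP A port 1 (virtual port 0)
# and SP B port 3 (virtual port 0)
io_port_list = a-1-0,B-3-0
```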
Note
Rather than being de-registered, the registered ports will simply be bypassed whether they are in io_port_list or not.
The driver will raise an exception if ports in io_port_list do not exist in VNX during startup.
Some available volumes may remain in a storage group on the VNX array due to OpenStack timeout issues. But the VNX array does not allow the user to delete volumes which are in a storage group. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete the available volumes in this tricky situation.
When force_delete_lun_in_storagegroup is set to True in the back-end section, the driver will move the volumes out of the storage groups and then delete them if the user tries to delete volumes that remain in the storage group on the VNX array.
The default value of force_delete_lun_in_storagegroup is False.
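A minimal sketch of enabling this option, reusing the [vnx_array1] section name from the earlier sample:

```ini
[vnx_array1]
# Allow deleting "available" volumes left behind in a VNX storage group
force_delete_lun_in_storagegroup = True
```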
Over subscription allows the sum of all volumes' capacities (provisioned capacity) to be larger than the pool's total capacity.
max_over_subscription_ratio in the back-end section is the ratio of provisioned capacity over total capacity.
The default value of max_over_subscription_ratio is 20.0, which means the provisioned capacity can be 20 times the total capacity. If the value of this ratio is set larger than 1.0, the provisioned capacity can exceed the total capacity.
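For example, with a 10 TB pool, a ratio of 5.0 would allow up to 50 TB of provisioned capacity. A sketch reusing the [vnx_array1] section name from the earlier sample:

```ini
[vnx_array1]
# Provisioned capacity may grow to 5x the pool's total capacity
max_over_subscription_ratio = 5.0
```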
For volume attaching, the driver has a storage group on VNX for each compute node hosting the VM instances which are going to consume VNX Block Storage (using the compute node's host name as the storage group's name). All the volumes attached to the VM instances on a compute node will be put into the storage group. If destroy_empty_storage_group is set to True, the driver will remove the empty storage group after its last volume is detached. For data safety, it is not recommended to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required for operation synchronization for this behavior.
Enabling storage group automatic deletion is the precondition of this function.
If initiator_auto_deregistration is set to True, the driver will deregister all FC and iSCSI initiators of the host after its storage group is deleted.
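The two options above work together; a sketch reusing the [vnx_array1] section name from the earlier sample (only advisable when the VNX is exclusively managed by one Block Storage node, as noted above):

```ini
[vnx_array1]
# Remove a compute node's storage group after its last volume is detached
destroy_empty_storage_group = True
# Then deregister that host's FC and iSCSI initiators as well
initiator_auto_deregistration = True
```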
The EMC VNX driver supports FC SAN auto zoning when ZoneManager is configured and zoning_mode is set to fabric in cinder.conf.
For ZoneManager configuration, refer to Fibre Channel Zone Manager.
In VNX, there is a limitation on the number of pool volumes that can be created in the system. When the limitation is reached, no more pool volumes can be created even if there is remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but reaches the volume limitation, the creation fails.
The default value of check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end will check the limit and report 0 free capacity to the scheduler if the limit is reached, so the scheduler will be able to skip this kind of pool-based back end that has run out of pool volume numbers.
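A sketch of enabling the check, reusing the [vnx_array1] section name from the earlier sample:

```ini
[vnx_array1]
# Report 0 free capacity to the scheduler once the pool's LUN count limit is hit,
# so the scheduler skips this back end
check_max_pool_luns_threshold = True
```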
iscsi_initiators is a dictionary of IP addresses of the iSCSI initiator ports on the OpenStack Compute and Block Storage nodes which want to connect to VNX via iSCSI. If this option is configured, the driver will leverage this information to find an accessible iSCSI target portal for the initiator when attaching a volume. Otherwise, the iSCSI target portal will be chosen in a relatively random way.
Note
This option is only valid for iSCSI driver.
Here is an example. VNX will connect host1 with 10.0.0.1 and 10.0.0.2, and will connect host2 with 10.0.0.3. The key name (host1 in the example) should be the output of the hostname command.
iscsi_initiators = {"host1":["10.0.0.1", "10.0.0.2"],"host2":["10.0.0.3"]}
Specify the timeout in minutes for operations such as LUN migration and LUN creation. For example, LUN migration is a typical long-running operation, which depends on the LUN size and the load of the array. An upper bound suited to the specific deployment can be set to avoid unnecessarily long waits.
The default value for this option is infinite.
default_timeout = 60
max_luns_per_storage_group specifies the maximum number of LUNs in a storage group. The default value is 255, which is also the maximum value supported by VNX.
If ignore_pool_full_threshold is set to True, the driver will force LUN creation even if the full threshold of the pool is reached. Defaults to False.
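A sketch showing both options with their default values, reusing the [vnx_array1] section name from the earlier sample:

```ini
[vnx_array1]
# 255 is both the default and the maximum supported by VNX
max_luns_per_storage_group = 255
# Keep the default: fail LUN creation once the pool-full threshold is reached
ignore_pool_full_threshold = False
```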
Extra specs are used in volume types created in Block Storage to define the preferred properties of the volume.
The Block Storage scheduler will use extra specs to find the suitable back end for the volume and the Block Storage driver will create the volume based on the properties specified by the extra spec.
Use the following command to create a volume type:
$ cinder type-create "demoVolumeType"
Use the following command to update the extra spec of a volume type:
$ cinder type-key "demoVolumeType" set provisioning:type=thin thick_provisioning_support='<is> True'
The following sections describe the VNX extra keys.
Key: provisioning:type
Possible Values:
thick
Volume is fully provisioned.
Run the following commands to create a thick volume type:
$ cinder type-create "ThickVolumeType"
$ cinder type-key "ThickVolumeType" set provisioning:type=thick thick_provisioning_support='<is> True'
thin
Volume is virtually provisioned.
Run the following commands to create a thin volume type:
$ cinder type-create "ThinVolumeType"
$ cinder type-key "ThinVolumeType" set provisioning:type=thin thin_provisioning_support='<is> True'
deduplicated
Volume is thin and deduplication is enabled. The administrator shall go to VNX to configure the system-level deduplication settings. To create a deduplicated volume, the VNX Deduplication license must be activated on VNX, and specify deduplication_support=True to let the Block Storage scheduler find the proper volume back end.
Run the following commands to create a deduplicated volume type:
$ cinder type-create "DeduplicatedVolumeType"
$ cinder type-key "DeduplicatedVolumeType" set provisioning:type=deduplicated deduplication_support='<is> True'
compressed
Volume is thin and compression is enabled. The administrator shall go to the VNX to configure the system-level compression settings. To create a compressed volume, the VNX Compression license must be activated on VNX, and use compression_support=True to let the Block Storage scheduler find a volume back end. VNX does not support creating snapshots on a compressed volume.
Run the following commands to create a compressed volume type:
$ cinder type-create "CompressedVolumeType"
$ cinder type-key "CompressedVolumeType" set provisioning:type=compressed compression_support='<is> True'
Default: thick
Note
provisioning:type replaces the old spec key storagetype:provisioning. The latter is obsolete since the Mitaka release.
Key: storagetype:tiering
Possible Values:
StartHighThenAuto
Auto
HighestAvailable
LowestAvailable
NoMovement
Default: StartHighThenAuto
VNX supports fully automated storage tiering, which requires the FAST license activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume, and use the key fast_support='<is> True' to let the Block Storage scheduler find a volume back end which manages a VNX with the FAST license activated. The five supported values for the extra spec key storagetype:tiering are listed above.
Run the following commands to create a volume type with tiering policy:
$ cinder type-create "ThinVolumeOnAutoTier"
$ cinder type-key "ThinVolumeOnAutoTier" set provisioning:type=thin storagetype:tiering=Auto fast_support='<is> True'
Note
The tiering policy cannot be applied to a deduplicated volume. The tiering policy of a deduplicated LUN aligns with the settings of the pool.
Key: fast_cache_enabled
Possible Values:
True
False
Default: False
VNX has a FAST Cache feature which requires the FAST Cache license activated on the VNX. The volume will be created on the back end with FAST Cache enabled when <is> True is specified.
pool_name
If the user wants to create a volume on a certain storage pool in a back end that manages multiple pools, a volume type with an extra spec specifying the storage pool should be created first; then the user can use this volume type to create the volume.
Run the following commands to create the volume type:
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set pool_name=Pool_02_SASFLASH volume_backend_name=vnx_41
Note
DO NOT use the following obsolete extra spec keys:
storagetype:provisioning
storagetype:pool
Key: snapcopy
Possible Values:
True or true
False or false
VNX driver supports snap copy which accelerates the process for creating a copied volume.
By default, the driver will do a full data copy when creating a volume from a snapshot or cloning a volume. This is time-consuming, especially for large volumes. When snap copy is used, the driver creates a snapshot and mounts it as a volume for these two kinds of operations, which will be instant even for large volumes.
To enable this functionality, append --metadata snapcopy=True when creating a cloned volume or creating a volume from a snapshot.
$ cinder create --source-volid <source-void> --name "cloned_volume" --metadata snapcopy=True
Or
$ cinder create --snapshot-id <snapshot-id> --name "vol_from_snapshot" --metadata snapcopy=True
The newly created volume is a snap copy instead of a full copy. If a full copy is needed, retype or migrate can be used to convert the snap-copy volume to a full-copy volume which may be time-consuming.
You can determine whether the volume is a snap-copy volume by showing its metadata. If snapcopy in the metadata is True or true, the volume is a snap-copy volume. Otherwise, it is a full-copy volume.
$ cinder metadata-show <volume>
Constraints
The default implementation in Block Storage for non-disruptive volume backup is not efficient since a cloned volume will be created during backup.
The approach of efficient backup is to create a snapshot for the volume and connect this snapshot (a mount point in VNX) to the Block Storage host for volume backup. This eliminates migration time involved in volume clone.
Constraints
The volume to be backed up cannot be in the in-use state, since a snapshot cannot be taken from this volume.
The VNX cinder driver leverages LUN migration from the VNX. LUN migration is involved in cloning, migrating, retyping, and creating a volume from a snapshot.
When the admin sets migrate_rate in the volume's metadata, the VNX driver can start migration with the specified rate. The available values for migrate_rate are high, asap, low and medium.
The following is an example of setting migrate_rate to asap:
$ cinder metadata <volume-id> set migrate_rate=asap
Once set, any cinder volume operation involving VNX LUN migration will take the value as the migration rate. To restore the migration rate to default, unset the metadata as follows:
$ cinder metadata <volume-id> unset migrate_rate
Note
Do not use the asap migration rate when the system is in production, as the normal host I/O may be interrupted. Use asap only when the system is offline (free of any host-level I/O).
Cinder introduced Replication v2.1 support in Mitaka. It supports fail-over and fail-back replication for a specific back end. In the VNX cinder driver, MirrorView is used to set up replication for the volume.
To enable this feature, you need to set the configuration in cinder.conf as below:
replication_device = backend_id:<secondary VNX serial number>,
san_ip:192.168.1.2,
san_login:admin,
san_password:admin,
naviseccli_path:/opt/Navisphere/bin/naviseccli,
storage_vnx_authentication_type:global,
storage_vnx_security_file_dir:
Currently, only synchronized mode MirrorView is supported, and one volume can have only one secondary storage system. Therefore, you can have only one replication_device in the driver configuration section.
To create a replication enabled volume, you need to create a volume type:
$ cinder type-create replication-type
$ cinder type-key replication-type set replication_enabled="<is> True"
Then create a volume with the above volume type:
$ cinder create --volume-type replication-type --name replication-volume 1
Supported operations
Create volume
Create cloned volume
Create volume from snapshot
Fail-over volume:
$ cinder failover-host --backend_id <secondary VNX serial number> <hostname>
Fail-back volume:
$ cinder failover-host --backend_id default <hostname>
Requirements
For more information on how to configure, please refer to: MirrorView-Knowledgebook:-Releases-30-–-33
Enabling multipath volume access is recommended for robust data access. The major configuration includes:
Install multipath-tools, sysfsutils and sg3-utils on the nodes hosting the Nova-Compute and Cinder-Volume services. Check the operating system manual for your distribution for specific installation steps. For Red Hat based distributions, the packages are device-mapper-multipath, sysfsutils and sg3_utils.
Specify use_multipath_for_image_xfer=true in the cinder.conf file for each FC/iSCSI back end.
Specify iscsi_use_multipath=True in the libvirt section of the nova.conf file. This option is valid for both the iSCSI and FC drivers.
For multipath-tools, here is an EMC recommended sample of the /etc/multipath.conf file.
user_friendly_names is not specified in the configuration, so it takes the default value no. It is not recommended to set it to yes because it may fail operations such as VM live migration.
blacklist {
# Skip the files under /dev that are definitely not FC/iSCSI devices
# Different system may need different customization
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z][0-9]*"
devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
# Skip LUNZ device from VNX
device {
vendor "DGC"
product "LUNZ"
}
}
defaults {
user_friendly_names no
flush_on_last_del yes
}
devices {
# Device attributed for EMC CLARiiON and VNX series ALUA
device {
vendor "DGC"
product ".*"
product_blacklist "LUNZ"
path_grouping_policy group_by_prio
path_selector "round-robin 0"
path_checker emc_clariion
features "1 queue_if_no_path"
hardware_handler "1 alua"
prio alua
failback immediate
}
}
Note
When multipath is used in OpenStack, multipath faulty devices may come out in Nova-Compute nodes due to different issues (Bug 1336683 is a typical example).
A solution to completely avoid faulty devices has not been found yet.
The script faulty_device_cleanup.py mitigates this issue when VNX iSCSI storage is used. Cloud administrators can deploy the script on all Nova-Compute nodes and use a CRON job to run it on each Nova-Compute node periodically, so that faulty devices do not stay around too long. Refer to VNX faulty device cleanup for detailed usage and the script.
The EMC VNX iSCSI driver caches the iSCSI port information. After changing the iSCSI port configurations, the user should restart the cinder-volume service or wait a few seconds (as configured by periodic_interval in the cinder.conf file) before any volume attachment operation. Otherwise, the attachment may fail because the old iSCSI port configurations are used.
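periodic_interval is a [DEFAULT] section option; a sketch (the value shown is assumed to be the usual default, verify against your release):

```ini
[DEFAULT]
# Seconds between periodic task runs; the cached iSCSI port information
# is refreshed on this cadence
periodic_interval = 60
```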
VNX does not support extending a thick volume which has a snapshot. If the user tries to extend a volume which has a snapshot, the status of the volume changes to error_extending.
It is not recommended to deploy the driver on a compute node if cinder upload-to-image --force True is used against an in-use volume. Otherwise, cinder upload-to-image --force True will terminate the VM instance's data access to the volume.
When the driver notices that there is no existing storage group that has the host name as the storage group name, it will create the storage group and also add the compute node’s or Block Storage node’s registered initiators into the storage group.
If the driver notices that the storage group already exists, it will assume that the registered initiators have also been put into it and skip the operations above for better performance.
It is recommended that the storage administrator does not create the storage group manually and instead relies on the driver for the preparation. If the storage administrator needs to create the storage group manually for some special requirements, the correct registered initiators should be put into the storage group as well (otherwise the following volume attaching operations will fail).
The EMC VNX driver supports storage-assisted volume migration. When the user starts migrating with cinder migrate --force-host-copy False <volume_id> <host> or cinder migrate <volume_id> <host>, cinder will try to leverage the VNX's native volume migration functionality.
In the following scenarios, VNX storage-assisted volume migration will not be triggered:
in-use volume migration between back ends with different storage protocols, for example, FC and iSCSI.
VNX credentials are necessary when the driver connects to the VNX system. Credentials in global, local and ldap scopes are supported. There are two approaches to provide the credentials.
The recommended approach is to use a Navisphere CLI security file, which avoids placing plain text credentials in the configuration file. The following are the instructions:
Find out the Linux user id of the cinder-volume processes, assuming the cinder-volume service is running under the account cinder.
Run su as the root user.
In the /etc/passwd file, change cinder:x:113:120::/var/lib/cinder:/bin/false to cinder:x:113:120::/var/lib/cinder:/bin/bash (this temporary change is to make step 4 work).
Save the credentials on behalf of the cinder user to a security file (assuming the array credentials are admin/admin in global scope). In the command below, the -secfilepath switch is used to specify the location to save the security file.
# su -l cinder -c '/opt/Navisphere/bin/naviseccli \
-AddUserSecurity -user admin -password admin -scope 0 -secfilepath <location>'
Change cinder:x:113:120::/var/lib/cinder:/bin/bash back to cinder:x:113:120::/var/lib/cinder:/bin/false in the /etc/passwd file.
Remove the credentials options san_login, san_password and storage_vnx_authentication_type from the cinder.conf file (normally /etc/cinder/cinder.conf). Add the option storage_vnx_security_file_dir and set its value to the directory path of the security file generated in the above step. Omit this option if -secfilepath is not used in the above step.
Restart the cinder-volume service to validate the change.
This configuration is only required when initiator_auto_registration=False.
To access VNX storage, the Compute nodes should be registered on VNX first if initiator auto registration is not enabled.
To perform Copy Image to Volume and Copy Volume to Image operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
The steps mentioned below are for the compute nodes. Follow the same steps for the Block Storage nodes also (The steps can be skipped if initiator auto registration is enabled).
Assume 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 is the WWN of a FC initiator port name of the compute node whose host name and IP are myhost1 and 10.10.61.1. Register 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 in Unisphere:
Log in to Unisphere and refresh until the initiator 20:00:00:24:FF:48:BA:C2:21:00:00:24:FF:48:BA:C2 with SP Port A-1 appears.
Register the initiator with host name myhost1 and IP address 10.10.61.1. The host 10.10.61.1 will appear in the host list as well.
Register the wwn with more ports if needed.
This configuration is only required when initiator_auto_registration=False.
To access VNX storage, the compute nodes should be registered on VNX first if initiator auto registration is not enabled.
To perform Copy Image to Volume and Copy Volume to Image operations, the nodes running the cinder-volume service (Block Storage nodes) must be registered with the VNX as well.
The steps mentioned below are for the compute nodes. Follow the same steps for the Block Storage nodes also (The steps can be skipped if initiator auto registration is enabled).
On the compute node with IP address 10.10.61.1 and host name myhost1, execute the following commands (assuming 10.10.61.35 is the iSCSI target):
Start the iSCSI initiator service on the node:
# /etc/init.d/open-iscsi start
Discover the iSCSI target portals on VNX:
# iscsiadm -m discovery -t st -p 10.10.61.35
Change directory to /etc/iscsi:
# cd /etc/iscsi
Find out the iqn of the node:
# more initiatorname.iscsi
Log in to VNX from the compute node using the target corresponding to the SPA port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
Assume iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the compute node. Register iqn.1993-08.org.debian:01:1a2b3c4d5f6g in Unisphere:
Log in to Unisphere and refresh until the initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.
Register the initiator with host name myhost1 and IP address 10.10.61.1. The host 10.10.61.1 will appear in the host list as well.
Log out iSCSI on the node:
# iscsiadm -m node -u
Log in to VNX from the compute node using the target corresponding to the SPB port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.61.36 -l
In Unisphere, register the initiator with the SPB port.
Log out iSCSI on the node:
# iscsiadm -m node -u
Register the iqn with more ports if needed.
The high performance XtremIO All Flash Array (AFA) offers Block Storage services to OpenStack. Using the driver, OpenStack Block Storage hosts can connect to an XtremIO Storage cluster.
This section explains how to configure and connect the block storage nodes to an XtremIO storage cluster.
XtremIO version 4.x is supported.
Edit the cinder.conf file by adding the configuration below under the [DEFAULT] section of the file in the case of a single back end, or under a separate section in the case of multiple back ends (for example [XTREMIO]). The configuration file is usually located at /etc/cinder/cinder.conf.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
xtremio_array_busy_retry_count = 5 | (Integer) Number of retries in case array is busy |
xtremio_array_busy_retry_interval = 5 | (Integer) Interval between retries in case array is busy |
xtremio_cluster_name = | (String) XMS cluster id in multi-cluster environment |
xtremio_volumes_per_glance_cache = 100 | (Integer) Number of volumes created from each cached glance image |
For a configuration example, refer to the Configuration example.
Configure the driver name by setting the following parameter in the cinder.conf file:
For iSCSI:
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
For Fibre Channel:
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
To retrieve the management IP, use the show-xms CLI command.
Configure the management IP by adding the following parameter:
san_ip = XMS Management IP
In XtremIO version 4.0, a single XMS can manage multiple cluster back ends. In such setups, the administrator is required to specify the cluster name (in addition to the XMS IP). Each cluster must be defined as a separate back end.
To retrieve the cluster name, run the show-clusters CLI command.
Configure the cluster name by adding the following parameter:
xtremio_cluster_name = Cluster-Name
Note
When a single cluster is managed in XtremIO version 4.0, the cluster name is not required.
OpenStack Block Storage requires an XtremIO XMS user with administrative privileges. XtremIO recommends creating a dedicated OpenStack user account that holds an administrative user role.
Refer to the XtremIO User Guide for details on user account management.
Create an XMS account using either the XMS GUI or the add-user-account CLI command.
Configure the user credentials by adding the following parameters:
san_login = XMS username
san_password = XMS username password
Configuring multiple storage back ends enables you to create several back-end storage solutions that serve the same OpenStack Compute resources.
When a volume is created, the scheduler selects the appropriate back end to handle the request, according to the specified volume type.
To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:
Thin Provisioning
All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter.
The use_cow_images parameter in the nova.conf file should be set to False as follows:
use_cow_images = False
Multipathing
The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
use_multipath_for_image_xfer = True
Limit the number of copies (XtremIO snapshots) taken from each image cache.
xtremio_volumes_per_glance_cache = 100
The default value is 100. A value of 0 ignores the limit and defers to the array maximum as the effective limit.
To enable SSL certificate validation, modify the following option in the cinder.conf file:
driver_ssl_cert_verify = true
By default, SSL certificate validation is disabled.
To specify a non-default path to the CA_Bundle file or a directory with certificates of trusted CAs:
driver_ssl_cert_path = Certificate path
The XtremIO Block Storage driver supports CHAP initiator authentication and discovery.
If CHAP initiator authentication is required, set the CHAP Authentication mode to initiator.
To set the CHAP initiator mode using CLI, run the following XMCLI command:
$ modify-chap chap-authentication-mode=initiator
If CHAP initiator discovery is required, set the CHAP discovery mode to initiator.
To set the CHAP initiator discovery mode using CLI, run the following XMCLI command:
$ modify-chap chap-discovery-mode=initiator
The CHAP initiator modes can also be set via the XMS GUI.
Refer to XtremIO User Guide for details on CHAP configuration via GUI and CLI.
The CHAP initiator authentication and discovery credentials (username and password) are generated automatically by the Block Storage driver. Therefore, there is no need to configure the initial CHAP credentials manually in XMS.
You can update the cinder.conf file by editing the necessary parameters as follows:
[DEFAULT]
enabled_backends = XtremIO
[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA
Fujitsu ETERNUS DX driver provides FC and iSCSI support for ETERNUS DX S3 series.
The driver performs volume operations by communicating with ETERNUS DX. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP.
You can specify RAID Group and Thin Provisioning Pool (TPP) in ETERNUS DX as a storage pool.
Supported storages:
Requirements:
(*1): It is executable only when you use TPP as a storage pool.
Install the python-pywbem
package for your distribution.
On Ubuntu:
# apt-get install python-pywbem
On openSUSE:
# zypper install python-pywbem
On Red Hat Enterprise Linux, CentOS, and Fedora:
# yum install pywbem
Perform the following steps using ETERNUS Web GUI or ETERNUS CLI.
Note
These steps require an account with the Admin role.
Create an account for communication with the cinder controller.
Enable the SMI-S of ETERNUS DX.
Register an Advanced Copy Feature license and configure copy table size.
Create a storage pool for volumes.
(Optional) If you want to create snapshots on a different storage pool for volumes, create a storage pool for snapshots.
Create a Snap Data Pool Volume (SDPV) to enable the Snap Data Pool (SDP) for creating snapshots.
Configure storage ports used for OpenStack.
Set those storage ports to CA mode.
Enable the host-affinity settings of those storage ports.
(ETERNUS CLI command for enabling host-affinity settings):
CLI> set fc-parameters -host-affinity enable -port <CM#><CA#><Port#>
CLI> set iscsi-parameters -host-affinity enable -port <CM#><CA#><Port#>
Ensure LAN connection between cinder controller and MNT port of ETERNUS DX and SAN connection between Compute nodes and CA ports of ETERNUS DX.
Edit cinder.conf
.
Add the following entries to /etc/cinder/cinder.conf
:
FC entries:
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
iSCSI entries:
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
cinder_eternus_config_file = /etc/cinder/eternus_dx.xml
If cinder_eternus_config_file is not specified, the parameter defaults to /etc/cinder/cinder_fujitsu_eternus_dx.xml.
Create a driver configuration file.
Create a driver configuration file in the file path specified
as cinder_eternus_config_file
in cinder.conf
,
and add parameters to the file as below:
FC configuration:
<?xml version='1.0' encoding='UTF-8'?>
<FUJITSU>
<EternusIP>0.0.0.0</EternusIP>
<EternusPort>5988</EternusPort>
<EternusUser>smisuser</EternusUser>
<EternusPassword>smispassword</EternusPassword>
<EternusPool>raid5_0001</EternusPool>
<EternusSnapPool>raid5_0001</EternusSnapPool>
</FUJITSU>
iSCSI configuration:
<?xml version='1.0' encoding='UTF-8'?>
<FUJITSU>
<EternusIP>0.0.0.0</EternusIP>
<EternusPort>5988</EternusPort>
<EternusUser>smisuser</EternusUser>
<EternusPassword>smispassword</EternusPassword>
<EternusPool>raid5_0001</EternusPool>
<EternusSnapPool>raid5_0001</EternusSnapPool>
<EternusISCSIIP>1.1.1.1</EternusISCSIIP>
<EternusISCSIIP>1.1.1.2</EternusISCSIIP>
<EternusISCSIIP>1.1.1.3</EternusISCSIIP>
<EternusISCSIIP>1.1.1.4</EternusISCSIIP>
</FUJITSU>
Where:
EternusIP
IP address for the SMI-S connection of the ETERNUS DX.
Enter the IP address of MNT port of the ETERNUS DX.
EternusPort
Port number for the SMI-S connection port of the ETERNUS DX.
EternusUser
User name for the SMI-S connection of the ETERNUS DX.
EternusPassword
Password for the SMI-S connection of the ETERNUS DX.
EternusPool
Storage pool name for volumes.
Enter RAID Group name or TPP name in the ETERNUS DX.
EternusSnapPool
Storage pool name for snapshots.
Enter RAID Group name in the ETERNUS DX.
EternusISCSIIP
iSCSI connection IP address of the ETERNUS DX (multiple settings allowed).
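The driver configuration file is plain XML, so its parameters can be read with Python's xml.etree.ElementTree. This sketch (not driver code) shows how single-valued options and the repeatable EternusISCSIIP option differ; the sample is shortened to two iSCSI IPs:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version='1.0' encoding='UTF-8'?>
<FUJITSU>
<EternusIP>0.0.0.0</EternusIP>
<EternusPort>5988</EternusPort>
<EternusUser>smisuser</EternusUser>
<EternusPassword>smispassword</EternusPassword>
<EternusPool>raid5_0001</EternusPool>
<EternusSnapPool>raid5_0001</EternusSnapPool>
<EternusISCSIIP>1.1.1.1</EternusISCSIIP>
<EternusISCSIIP>1.1.1.2</EternusISCSIIP>
</FUJITSU>"""

# Encode to bytes: ElementTree rejects str input that carries an
# encoding declaration.
root = ET.fromstring(SAMPLE.encode("utf-8"))

# Single-valued parameters are read with find(); EternusISCSIIP may
# repeat, so collect every occurrence with findall().
pool = root.find("EternusPool").text
iscsi_ips = [e.text for e in root.findall("EternusISCSIIP")]
print(pool)
print(iscsi_ips)
```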
Note
For EternusSnapPool, you can specify only a RAID Group name; you cannot specify a TPP name.
You can specify the same pool for EternusPool and EternusSnapPool if you create volumes and snapshots on the same storage pool.
Edit cinder.conf:
[DEFAULT]
enabled_backends = DXFC, DXISCSI
[DXFC]
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_fc.FJDXFCDriver
cinder_eternus_config_file = /etc/cinder/fc.xml
volume_backend_name = FC
[DXISCSI]
volume_driver = cinder.volume.drivers.fujitsu.eternus_dx_iscsi.FJDXISCSIDriver
cinder_eternus_config_file = /etc/cinder/iscsi.xml
volume_backend_name = ISCSI
Create the driver configuration files fc.xml
and iscsi.xml
.
Create a volume type and set extra specs to the type:
$ cinder type-create DX_FC
$ cinder type-key DX_FC set volume_backend_name=FC
$ cinder type-create DX_ISCSI
$ cinder type-key DX_ISCSI set volume_backend_name=ISCSI
By issuing these commands, the volume type DX_FC is associated with the FC back end, and the type DX_ISCSI is associated with the ISCSI back end.
These OpenStack Block Storage volume drivers provide iSCSI and NFS support for Hitachi NAS Platform (HNAS) Models 3080, 3090, 4040, 4060, 4080, and 4100 with NAS OS 12.2 or higher.
The NFS and iSCSI drivers support these operations:
Before using iSCSI and NFS services, use the HNAS configuration and management GUI (SMU) or SSC CLI to configure HNAS to work with the drivers. Additionally:
- You must have at least 1 storage pool, 1 EVS and 1 file system to be able to run any of the HNAS drivers.
- The file system used should not be a replication target and should be mounted.
- All compute nodes and controllers in the cloud must have access to the EVSs.
- Create the NFS exports with a path different from the file system root (/) and set the Show snapshots option to hide and disable access.
- For each export used, set the option norootsquash in the share Access configuration so Block Storage services can change the permissions of its volumes. For example, "* (rw, norootsquash)".
- Set max-nfs-version to 3. Refer to the Hitachi NAS Platform command line reference to see how to configure this option.

The HNAS drivers are supported for Red Hat Enterprise Linux OpenStack Platform, SUSE OpenStack Cloud, and Ubuntu OpenStack. The following packages must be installed on all compute, controller and storage (if any) nodes:

- nfs-utils for Red Hat Enterprise Linux OpenStack Platform
- nfs-client for SUSE OpenStack Cloud
- nfs-common, libc6-i386 for Ubuntu OpenStack

If you are installing the driver from an RPM or DEB package, follow the steps below:
Install the dependencies:
In Red Hat:
# yum install nfs-utils nfs-utils-lib
Or in Ubuntu:
# apt-get install nfs-common
Or in SUSE:
# zypper install nfs-client
If you are using Ubuntu 12.04, you also need to install libc6-i386:
# apt-get install libc6-i386
Configure the driver as described in the Driver configuration section.
Restart all Block Storage services (volume, scheduler, and backup).
HNAS supports a variety of storage options and file system capabilities,
which are selected through the definition of volume types combined with the
use of multiple back ends and multiple services. Each back end can configure
up to 4 service pools
, which can be mapped to cinder volume types.
The configuration for the driver is read from the back-end sections of the
cinder.conf
. Each back-end section must have the appropriate configurations
to communicate with your HNAS back end, such as the IP address of the HNAS EVS
that is hosting your data, HNAS SSH access credentials, the configuration of
each of the services in that back end, and so on. You can find examples of such
configurations in the Configuration example section.
Note
The HNAS cinder drivers still support the XML configuration format used in older versions, but we recommend configuring the HNAS cinder drivers only through the cinder.conf file, since the XML configuration file is deprecated as of the Newton release.
Note
We do not recommend the use of the same NFS export or file system (iSCSI driver) for different back ends. If possible, configure each back end to use a different NFS export/file system.
The following is the definition of each configuration option that can be used
in a HNAS back-end section in the cinder.conf
file:
Option | Type | Default | Description |
---|---|---|---|
volume_backend_name |
Optional | N/A | A name that identifies the back end and can be used as an extra-spec to redirect the volumes to the referenced back end. |
volume_driver |
Required | N/A | The python module path to the HNAS volume driver python class. When installing through the rpm or deb packages, you should configure this to cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver for the iSCSI back end or cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver for the NFS back end. |
nfs_shares_config |
Required (only for NFS) | /etc/cinder/nfs_shares | Path to the nfs_shares file. This is required by the base cinder
generic NFS driver and therefore also required by the HNAS NFS driver.
This file should list, one per line, every NFS share being used by the
back end. For example, all the values found in the configuration keys
hnas_svcX_hdp in the HNAS NFS back-end sections. |
hnas_mgmt_ip0 |
Required | N/A | HNAS management IP address. Should be the IP address of the Admin EVS. It is also the IP through which you access the web SMU administration frontend of HNAS. |
hnas_chap_enabled |
Optional (iSCSI only) | True | Boolean tag used to enable CHAP authentication protocol for iSCSI driver. |
hnas_username |
Required | N/A | HNAS SSH username |
hds_hnas_nfs_config_file | hds_hnas_iscsi_config_file |
Optional (deprecated) | /opt/hds/hnas/cinder_[nfs|iscsi]_conf.xml | Path to the deprecated XML configuration file (only required if using the XML file) |
hnas_cluster_admin_ip0 |
Optional (required only for HNAS multi-farm setups) | N/A | The IP of the HNAS farm admin. If your SMU controls more than one system or cluster, this option must be set with the IP of the desired node. This is different for HNAS multi-cluster setups, which do not require this option to be set. |
hnas_ssh_private_key |
Optional | N/A | Path to the SSH private key used to authenticate to the HNAS SMU. Only required if you do not want to set hnas_password. |
hnas_ssh_port |
Optional | 22 | Port on which HNAS is listening for SSH connections |
hnas_password |
Required (unless hnas_ssh_private_key is provided) | N/A | HNAS password |
hnas_svcX_hdp [1] |
Required (at least 1) | N/A | HDP (export or file system) where the volumes will be created. Use exports paths for the NFS backend or the file system names for the iSCSI backend (note that when using the file system name, it does not contain the IP addresses of the HDP) |
hnas_svcX_iscsi_ip |
Required (only for iSCSI) | N/A | The IP of the EVS that contains the file system specified in hnas_svcX_hdp |
hnas_svcX_volume_type |
Required | N/A | A unique string that is used to refer to this pool within the
context of cinder. You can tell cinder to put volumes of a specific
volume type into this back end, within this pool. See the
Service Labels and Configuration example sections
for more details. |
[1] | Replace X with a number from 0 to 3 (keep the sequence when configuring the driver) |
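How the hnas_svcX_* options pair up into service pools can be sketched as follows; the helper and the sample values are hypothetical, only the option names come from the table above:

```python
def service_pools(options: dict) -> dict:
    """Map each configured service label to its HDP (X from 0 to 3)."""
    pools = {}
    for x in range(4):  # keep the sequence when configuring the driver
        hdp = options.get(f"hnas_svc{x}_hdp")
        label = options.get(f"hnas_svc{x}_volume_type")
        if hdp and label:
            pools[label] = hdp
    return pools

opts = {
    "hnas_svc0_volume_type": "nfs_gold",
    "hnas_svc0_hdp": "172.24.49.21:/gold_export",
    "hnas_svc1_volume_type": "nfs_silver",
    "hnas_svc1_hdp": "172.24.49.22:/silver_export",
}
print(service_pools(opts))
```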
HNAS driver supports differentiated types of service using the service labels. It is possible to create up to 4 types of them for each back end. (For example gold, platinum, silver, ssd, and so on).
After creating the services in the cinder.conf
configuration file, you
need to configure one cinder volume_type
per service. Each volume_type
must have the metadata service_label with the same name configured in the
hnas_svcX_volume_type option
of that service. See the
Configuration example section for more details. If the volume_type
is not set, cinder uses the service pool with the largest available free space, or other criteria configured in the scheduler filters.
$ cinder type-create default
$ cinder type-key default set service_label=default
$ cinder type-create platinum-tier
$ cinder type-key platinum-tier set service_label=platinum
You can deploy multiple OpenStack HNAS Driver instances (back ends) that each controls a separate HNAS or a single HNAS. If you use multiple cinder back ends, remember that each cinder back end can host up to 4 services. Each back-end section must have the appropriate configurations to communicate with your HNAS back end, such as the IP address of the HNAS EVS that is hosting your data, HNAS SSH access credentials, the configuration of each of the services in that back end, and so on. You can find examples of such configurations in the Configuration example section.
If you want the volumes from a volume_type to be directed to a specific
back end, you must configure an extra_spec in the volume_type
with the
value of the volume_backend_name
option from that back end.
When configuring multiple NFS back ends, each back end should have a separate nfs_shares_config option and a separate nfs_shares file defined (for example, nfs_shares1, nfs_shares2) with the desired shares listed on separate lines.
Note
As of the Newton OpenStack release, the user can no longer run the driver using a locally installed instance of the SSC utility package. Instead, all communications with the HNAS back end are handled through SSH.
You can use your username and password to authenticate the Block Storage node
to the HNAS back end. In order to do that, simply configure hnas_username
and hnas_password
in your back end section within the cinder.conf
file.
For example:
[hnas-backend]
…
hnas_username = supervisor
hnas_password = supervisor
Alternatively, the HNAS cinder driver also supports SSH authentication through public key. To configure that:
If you do not have a pair of public keys already generated, create it in the Block Storage node (leave the pass-phrase empty):
$ mkdir -p /opt/hitachi/ssh
$ ssh-keygen -f /opt/hitachi/ssh/hnaskey
Change the owner of the key to cinder (or the user the volume service will be run as):
# chown -R cinder.cinder /opt/hitachi/ssh
Create the directory ssh_keys
in the SMU server:
$ ssh [manager|supervisor]@<smu-ip> 'mkdir -p /var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/'
Copy the public key to the ssh_keys
directory:
$ scp /opt/hitachi/ssh/hnaskey.pub [manager|supervisor]@<smu-ip>:/var/opt/mercury-main/home/[manager|supervisor]/ssh_keys/
Access the SMU server:
$ ssh [manager|supervisor]@<smu-ip>
Run the command to register the SSH keys:
$ ssh-register-public-key -u [manager|supervisor] -f ssh_keys/hnaskey.pub
Check the communication with HNAS in the Block Storage node:
For multi-farm HNAS:
$ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc <cluster_admin_ip0> df -a'
Or, for Single-node/Multi-Cluster:
$ ssh -i /opt/hitachi/ssh/hnaskey [manager|supervisor]@<smu-ip> 'ssc localhost df -a'
Configure your backend section in cinder.conf
to use your public key:
[hnas-backend]
…
hnas_ssh_private_key = /opt/hitachi/ssh/hnaskey
If there are some existing volumes on HNAS that you want to import to cinder, it is possible to use the manage volume feature to do this. The manage action on an existing volume is very similar to a volume creation. It creates a volume entry on cinder database, but instead of creating a new volume in the back end, it only adds a link to an existing volume.
Note
This is an admin-only feature, and you must be logged in as a user with admin rights to use it.
For NFS:
For iSCSI:
By CLI:
$ cinder manage [--id-type <id-type>][--name <name>][--description <description>]
[--volume-type <volume-type>][--availability-zone <availability-zone>]
[--metadata [<key=value> [<key=value> ...]]][--bootable] <host> <identifier>
Example:
For NFS:
$ cinder manage --name volume-test --volume-type silver
ubuntu@hnas-nfs#test_silver 172.24.44.34:/silver/volume-test
For iSCSI:
$ cinder manage --name volume-test --volume-type silver
ubuntu@hnas-iscsi#test_silver filesystem-test/volume-test
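The <host> argument in the examples above has the form host@backend#pool. A small illustrative parser for that reference:

```python
def parse_host_reference(ref: str):
    """Split a cinder host reference of the form host@backend#pool."""
    host, _, rest = ref.partition("@")
    backend, _, pool = rest.partition("#")
    return host, backend, pool

print(parse_host_reference("ubuntu@hnas-nfs#test_silver"))
```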
The manage snapshots feature works very similarly to the manage volumes feature that is currently supported by the HNAS cinder drivers. So, if you have a volume already managed by cinder that has snapshots not managed by cinder, you can use manage snapshots to import these snapshots and link them with their original volume.
Note
For the HNAS NFS cinder driver, the snapshots of volumes are clones of volumes that were created using file-clone-create, not the HNAS snapshot-* feature. Check the HNAS user documentation for details about these two features.
Currently, the manage snapshots function does not support importing snapshots (generally created by the storage's file-clone operation) without parent volumes, or when the parent volume is in-use. In these cases, the manage volumes feature should be used to import the snapshot as a normal cinder volume.
Also, this is an admin-only feature, and you must be logged in as a user with admin rights to use it.
Note
Although there is a verification to prevent importing snapshots using non-related volumes as parents, it is possible to manage a snapshot using any related cloned volume. So, when managing a snapshot, it is extremely important to make sure that you are using the correct parent volume.
For NFS:
$ cinder snapshot-manage <volume> <identifier>
Example:
$ cinder snapshot-manage 061028c0-60cf-499f-99e2-2cd6afea081f 172.24.44.34:/export1/snapshot-test
Note
This feature is currently available only for HNAS NFS Driver.
Below are configuration examples for both NFS and iSCSI backends:
HNAS NFS Driver
For HNAS NFS driver, create this section in your cinder.conf
file:
[hnas-nfs]
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
nfs_shares_config = /home/cinder/nfs_shares
volume_backend_name = hnas_nfs_backend
hnas_username = supervisor
hnas_password = supervisor
hnas_mgmt_ip0 = 172.24.44.15
hnas_svc0_volume_type = nfs_gold
hnas_svc0_hdp = 172.24.49.21:/gold_export
hnas_svc1_volume_type = nfs_platinum
hnas_svc1_hdp = 172.24.49.21:/silver_platinum
hnas_svc2_volume_type = nfs_silver
hnas_svc2_hdp = 172.24.49.22:/silver_export
hnas_svc3_volume_type = nfs_bronze
hnas_svc3_hdp = 172.24.49.23:/bronze_export
Add it to the enabled_backends
list, under the DEFAULT
section
of your cinder.conf
file:
[DEFAULT]
enabled_backends = hnas-nfs
Add the configured exports to the nfs_shares
file:
172.24.49.21:/gold_export
172.24.49.21:/silver_platinum
172.24.49.22:/silver_export
172.24.49.23:/bronze_export
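A sketch of how such a shares file is consumed, one export per line (an illustration of the expected format, not the generic NFS driver's actual code):

```python
def read_nfs_shares(text: str):
    """Return the exports listed in an nfs_shares file, one per line,
    skipping blank lines and comment lines."""
    shares = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            shares.append(line)
    return shares

sample = """172.24.49.21:/gold_export
172.24.49.21:/silver_platinum
172.24.49.22:/silver_export
172.24.49.23:/bronze_export
"""
print(read_nfs_shares(sample))
```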
Register a volume type with cinder and associate it with this backend:
$ cinder type-create hnas_nfs_gold
$ cinder type-key hnas_nfs_gold set volume_backend_name=hnas_nfs_backend service_label=nfs_gold
$ cinder type-create hnas_nfs_platinum
$ cinder type-key hnas_nfs_platinum set volume_backend_name=hnas_nfs_backend service_label=nfs_platinum
$ cinder type-create hnas_nfs_silver
$ cinder type-key hnas_nfs_silver set volume_backend_name=hnas_nfs_backend service_label=nfs_silver
$ cinder type-create hnas_nfs_bronze
$ cinder type-key hnas_nfs_bronze set volume_backend_name=hnas_nfs_backend service_label=nfs_bronze
HNAS iSCSI Driver
For HNAS iSCSI driver, create this section in your cinder.conf
file:
[hnas-iscsi]
volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver
volume_backend_name = hnas_iscsi_backend
hnas_username = supervisor
hnas_password = supervisor
hnas_mgmt_ip0 = 172.24.44.15
hnas_chap_enabled = True
hnas_svc0_volume_type = iscsi_gold
hnas_svc0_hdp = FS-gold
hnas_svc0_iscsi_ip = 172.24.49.21
hnas_svc1_volume_type = iscsi_platinum
hnas_svc1_hdp = FS-platinum
hnas_svc1_iscsi_ip = 172.24.49.21
hnas_svc2_volume_type = iscsi_silver
hnas_svc2_hdp = FS-silver
hnas_svc2_iscsi_ip = 172.24.49.22
hnas_svc3_volume_type = iscsi_bronze
hnas_svc3_hdp = FS-bronze
hnas_svc3_iscsi_ip = 172.24.49.23
Add it to the enabled_backends
list, under the DEFAULT
section
of your cinder.conf
file:
[DEFAULT]
enabled_backends = hnas-nfs, hnas-iscsi
Register a volume type with cinder and associate it with this backend:
$ cinder type-create hnas_iscsi_gold
$ cinder type-key hnas_iscsi_gold set volume_backend_name=hnas_iscsi_backend service_label=iscsi_gold
$ cinder type-create hnas_iscsi_platinum
$ cinder type-key hnas_iscsi_platinum set volume_backend_name=hnas_iscsi_backend service_label=iscsi_platinum
$ cinder type-create hnas_iscsi_silver
$ cinder type-key hnas_iscsi_silver set volume_backend_name=hnas_iscsi_backend service_label=iscsi_silver
$ cinder type-create hnas_iscsi_bronze
$ cinder type-key hnas_iscsi_bronze set volume_backend_name=hnas_iscsi_backend service_label=iscsi_bronze
The get_volume_stats()
function always provides the available
capacity based on the combined sum of all the HDPs that are used in
these service labels.
After changing the configuration on the storage node, the Block Storage driver must be restarted.
On Red Hat, if the system is configured to use SELinux, you need to
set virt_use_nfs = on
for the NFS driver to work properly.
# setsebool -P virt_use_nfs on
It is not possible to manage a volume if there is a slash (/
) or
a colon (:
) in the volume name.
File system auto-expansion
: Although supported, we do not recommend using
file systems with auto-expansion setting enabled because the scheduler uses
the file system capacity reported by the driver to determine if new volumes
can be created. For instance, in a setup with a file system that can expand
to 200GB but is at 100GB capacity, with 10GB free, the scheduler will not
allow a 15GB volume to be created. In this case, manual expansion would
have to be triggered by an administrator. We recommend always creating the
file system at the maximum capacity
or periodically expanding the file
system manually.
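The scheduler arithmetic described above can be made concrete (illustrative only):

```python
def scheduler_accepts(free_gb: float, request_gb: float) -> bool:
    """The scheduler sees only the reported free space, not the
    auto-expansion headroom of the file system."""
    return request_gb <= free_gb

free_gb = 10  # reported free space on the 100 GB file system
print(scheduler_accepts(free_gb, 15))  # False: rejected despite the 200 GB ceiling
print(scheduler_accepts(free_gb, 5))   # True
```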
iSCSI driver limitations: The iSCSI driver has a limit of 1024
volumes
attached to instances.
The hnas_svcX_volume_type
option must be unique for a given back end.
SSC simultaneous connections limit: In very busy environments, if two or more volume hosts are configured to use the same storage, some requests (create, delete, and so on) may fail and be retried (5 attempts by default) due to an HNAS connection limit (maximum of 5 simultaneous connections).
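The retry behavior described above can be sketched generically; the attempt count mirrors the 5-attempt default mentioned, while the exception type and delay are assumptions of this illustration:

```python
import time

def with_retries(operation, attempts=5, delay=0.0):
    """Run operation, retrying on connection errors up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay)

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds, emulating a saturated connection pool.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("HNAS connection limit reached")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```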
Hitachi storage volume driver provides iSCSI and Fibre Channel support for Hitachi storages.
Supported storages:
Required software:
RAID Manager Ver 01-32-03/01 or later for VSP G1000/VSP/HUS VM
Hitachi Storage Navigator Modular 2 (HSNM2) Ver 27.50 or later for HUS 100 Family
Note
HSNM2 needs to be installed under /usr/stonavm
.
Required licenses:
Additionally, the pexpect
package is required.
You need to specify settings as described below. For details about each step,
see the user’s guide of the storage device. Use a storage administrative
software such as Storage Navigator
to set up the storage device so that
LDEVs and host groups can be created and deleted, and LDEVs can be connected
to the server and can be asynchronously copied.
Set port security to enable for the ports at the storage.
Set Host Group security or iSCSI target security to ON for the ports at the storage.
Change a parameter of the hfcldd driver and update the initramfs file if a Hitachi Gigabit Fibre Channel adaptor is used:
# /opt/hitachi/drivers/hba/hfcmgr -E hfc_rport_lu_scan 1
# dracut -f initramfs-KERNEL_VERSION.img KERNEL_VERSION
# reboot
Create a directory:
# mkdir /var/lock/hbsd
# chown cinder:cinder /var/lock/hbsd
Create volume type
and volume key
.
This example shows that HUS100_SAMPLE is created as volume type
and hus100_backend is registered as volume key
:
$ cinder type-create HUS100_SAMPLE
$ cinder type-key HUS100_SAMPLE set volume_backend_name=hus100_backend
Specify any identical volume type
name and volume key
.
To confirm the created volume type
, please execute the following
command:
$ cinder extra-specs-list
Edit the /etc/cinder/cinder.conf
file as follows.
If you use Fibre Channel:
volume_driver = cinder.volume.drivers.hitachi.hbsd_fc.HBSDFCDriver
If you use iSCSI:
volume_driver = cinder.volume.drivers.hitachi.hbsd_iscsi.HBSDISCSIDriver
Also, set volume_backend_name
created by cinder type-key
command:
volume_backend_name = hus100_backend
This table shows configuration options for Hitachi storage volume driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hitachi_add_chap_user = False |
(Boolean) Add CHAP user |
hitachi_async_copy_check_interval = 10 |
(Integer) Interval to check copy asynchronously |
hitachi_auth_method = None |
(String) iSCSI authentication method |
hitachi_auth_password = HBSD-CHAP-password |
(String) iSCSI authentication password |
hitachi_auth_user = HBSD-CHAP-user |
(String) iSCSI authentication username |
hitachi_copy_check_interval = 3 |
(Integer) Interval to check copy |
hitachi_copy_speed = 3 |
(Integer) Copy speed of storage system |
hitachi_default_copy_method = FULL |
(String) Default copy method of storage system |
hitachi_group_range = None |
(String) Range of group number |
hitachi_group_request = False |
(Boolean) Request for creating HostGroup or iSCSI Target |
hitachi_horcm_add_conf = True |
(Boolean) Add to HORCM configuration |
hitachi_horcm_numbers = 200,201 |
(String) Instance numbers for HORCM |
hitachi_horcm_password = None |
(String) Password of storage system for HORCM |
hitachi_horcm_resource_lock_timeout = 600 |
(Integer) Timeout until a resource lock is released, in seconds. The value must be between 0 and 7200. |
hitachi_horcm_user = None |
(String) Username of storage system for HORCM |
hitachi_ldev_range = None |
(String) Range of logical device of storage system |
hitachi_pool_id = None |
(Integer) Pool ID of storage system |
hitachi_serial_number = None |
(String) Serial number of storage system |
hitachi_target_ports = None |
(String) Control port names for HostGroup or iSCSI Target |
hitachi_thin_pool_id = None |
(Integer) Thin pool ID of storage system |
hitachi_unit_name = None |
(String) Name of an array unit |
hitachi_zoning_request = False |
(Boolean) Request for FC Zone creating HostGroup |
hnas_chap_enabled = True |
(Boolean) Whether the chap authentication is enabled in the iSCSI target or not. |
hnas_cluster_admin_ip0 = None |
(String) The IP of the HNAS cluster admin. Required only for HNAS multi-cluster setups. |
hnas_mgmt_ip0 = None |
(IP) Management IP address of HNAS. This can be any IP in the admin address on HNAS or the SMU IP. |
hnas_password = None |
(String) HNAS password. |
hnas_ssc_cmd = ssc |
(String) Command to communicate to HNAS. |
hnas_ssh_port = 22 |
(Port number) Port to be used for SSH authentication. |
hnas_ssh_private_key = None |
(String) Path to the SSH private key used to authenticate in HNAS SMU. |
hnas_svc0_hdp = None |
(String) Service 0 HDP |
hnas_svc0_iscsi_ip = None |
(IP) Service 0 iSCSI IP |
hnas_svc0_volume_type = None |
(String) Service 0 volume type |
hnas_svc1_hdp = None |
(String) Service 1 HDP |
hnas_svc1_iscsi_ip = None |
(IP) Service 1 iSCSI IP |
hnas_svc1_volume_type = None |
(String) Service 1 volume type |
hnas_svc2_hdp = None |
(String) Service 2 HDP |
hnas_svc2_iscsi_ip = None |
(IP) Service 2 iSCSI IP |
hnas_svc2_volume_type = None |
(String) Service 2 volume type |
hnas_svc3_hdp = None |
(String) Service 3 HDP |
hnas_svc3_iscsi_ip = None |
(IP) Service 3 iSCSI IP |
hnas_svc3_volume_type = None |
(String) Service 3 volume type |
hnas_username = None |
(String) HNAS username. |
Restart the Block Storage service.
When the startup is done, “MSGID0003-I: The storage backend can be used.”
is output into /var/log/cinder/volume.log
as follows:
2014-09-01 10:34:14.169 28734 WARNING cinder.volume.drivers.hitachi.
hbsd_common [req-a0bb70b5-7c3f-422a-a29e-6a55d6508135 None None]
MSGID0003-I: The storage backend can be used. (config_group: hus100_backend)
The HPE3PARFCDriver
and HPE3PARISCSIDriver
drivers, which are based on
the Block Storage service (Cinder) plug-in architecture, run volume operations
by communicating with the HPE 3PAR storage system over HTTP, HTTPS, and SSH
connections. The HTTP and HTTPS communications use python-3parclient
,
which is a third-party Python package installed separately.
For information about how to manage HPE 3PAR storage systems, see the HPE 3PAR user documentation.
To use the HPE 3PAR drivers, install the following software and components on the HPE 3PAR storage system:
python-3parclient
version 4.2.0 or
newer on the system with the enabled Block
Storage service volume drivers.
Volume type support for both HPE 3PAR drivers includes the ability to set the
following capabilities in the OpenStack Block Storage API
cinder.api.contrib.types_extra_specs
volume type extra specs extension
module:
hpe3par:snap_cpg
hpe3par:provisioning
hpe3par:persona
hpe3par:vvs
hpe3par:flash_cache
To work with the default filter scheduler, the key values are case sensitive
and scoped with hpe3par:
. For information about how to set the key-value
pairs and associate them with a volume type, run the following command:
$ cinder help type-key
Note
Volumes that are cloned only support the extra specs keys cpg, snap_cpg, provisioning and vvs. The others are ignored. In addition the comments section of the cloned volume in the HPE 3PAR StoreServ storage array is not populated.
If volume types are not used or a particular key is not set for a volume type, the following defaults are used:
hpe3par:cpg
- Defaults to the hpe3par_cpg
setting in the
cinder.conf
file.
hpe3par:snap_cpg
- Defaults to the hpe3par_snap
setting in
the cinder.conf
file. If hpe3par_snap
is not set, it defaults
to the hpe3par_cpg
setting.
hpe3par:provisioning
- Defaults to thin
provisioning, the valid
values are thin
, full
, and dedup
.
hpe3par:persona
- Defaults to the 2 - Generic-ALUA
persona. The
valid values are:
1 - Generic
2 - Generic-ALUA
3 - Generic-legacy
4 - HPUX-legacy
5 - AIX-legacy
6 - EGENERA
7 - ONTAP-legacy
8 - VMware
9 - OpenVMS
10 - HPUX
11 - WindowsServer
hpe3par:flash_cache
- Defaults to false, the valid values are true and false.
QoS support for both HPE 3PAR drivers includes the ability to set the
following capabilities in the OpenStack Block Storage API
cinder.api.contrib.qos_specs_manage
qos specs extension module:
minBWS
maxBWS
minIOPS
maxIOPS
latency
priority
The qos keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:
$ cinder help qos-create
$ cinder help qos-key
$ cinder help qos-associate
The following keys require that the HPE 3PAR StoreServ storage array has a Priority Optimization license installed.
hpe3par:vvs
- If you specify the extra spec hpe3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.
minBWS
maxBWS
minIOPS
maxIOPS
latency
priority
- Defaults to normal; the valid values are low, normal and high.
Note
Since the Icehouse release, minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used together. If only one is set the other will be set to the same value.
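The pairing rule in the note can be sketched as follows (a hypothetical helper, not the driver's implementation):

```python
def fill_pair(qos: dict, lo: str, hi: str) -> dict:
    """If only one of a min/max pair is set, fill the other with the
    same value, as the note above describes."""
    qos = dict(qos)
    if lo in qos and hi not in qos:
        qos[hi] = qos[lo]
    elif hi in qos and lo not in qos:
        qos[lo] = qos[hi]
    return qos

print(fill_pair({"minIOPS": 500}, "minIOPS", "maxIOPS"))
```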
The following key requires that the HPE 3PAR StoreServ storage array has an Adaptive Flash Cache license installed.
hpe3par:flash_cache
- The flash-cache policy, which can be turned on and
off by setting the value to true
or false
.LDAP authentication is supported if the 3PAR is configured to do so.
The HPE3PARFCDriver
and HPE3PARISCSIDriver
are installed with the
OpenStack software.
Install the python-3parclient
Python package on the OpenStack Block
Storage system.
$ pip install 'python-3parclient>=4.0,<5.0'
Verify that the HPE 3PAR Web Services API server is enabled and running on the HPE 3PAR storage system.
Log on to the HPE 3PAR storage system with administrator access.
$ ssh 3paradm@<HP 3PAR IP Address>
View the current state of the Web Services API Server.
$ showwsapi
-Service- -State- -HTTP_State- HTTP_Port -HTTPS_State- HTTPS_Port -Version-
Enabled Active Enabled 8008 Enabled 8080 1.1
If the Web Services API Server is disabled, start it.
$ startwsapi
If the HTTP or HTTPS state is disabled, enable one of them.
$ setwsapi -http enable
or
$ setwsapi -https enable
Note
To stop the Web Services API Server, use the stopwsapi command. For other options, run the setwsapi -h command.
If you are not using an existing CPG, create a CPG on the HPE 3PAR storage system to be used as the default location for creating volumes.
Make the following changes in the /etc/cinder/cinder.conf
file.
# 3PAR WS API Server URL
hpe3par_api_url=https://10.10.0.141:8080/api/v1
# 3PAR username with the 'edit' role
hpe3par_username=edit3par
# 3PAR password for the user specified in hpe3par_username
hpe3par_password=3parpass
# 3PAR CPG to use for volume creation
hpe3par_cpg=OpenStackCPG_RAID5_NL
# IP address of SAN controller for SSH access to the array
san_ip=10.10.22.241
# Username for SAN controller for SSH access to the array
san_login=3paradm
# Password for SAN controller for SSH access to the array
san_password=3parpass
# FIBRE CHANNEL (uncomment the next line to enable the FC driver)
# volume_driver=cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
# iSCSI (uncomment the next line to enable the iSCSI driver and
# hpe3par_iscsi_ips or iscsi_ip_address)
#volume_driver=cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
# iSCSI multiple port configuration
# hpe3par_iscsi_ips=10.10.220.253:3261,10.10.222.234
# Still available for single port iSCSI configuration
#iscsi_ip_address=10.10.220.253
# Enable HTTP debugging to 3PAR
hpe3par_debug=False
# Enable CHAP authentication for iSCSI connections.
hpe3par_iscsi_chap_enabled=false
# The CPG to use for snapshots of volumes. If empty, hpe3par_cpg will be
# used.
hpe3par_cpg_snap=OpenStackSNAP_CPG
# Time in hours to retain a snapshot. You can't delete it before this
# expires.
hpe3par_snapshot_retention=48
# Time in hours when a snapshot expires and is deleted. This must be
# larger than retention.
hpe3par_snapshot_expiration=72
# The ratio of oversubscription when thin provisioned volumes are
# involved. The default ratio is 20.0, which means that the provisioned
# capacity can be 20 times the total physical capacity.
max_over_subscription_ratio=20.0
# This flag represents the percentage of reserved back-end capacity.
reserved_percentage=15
Note
You can enable only one driver on each cinder instance unless you enable multiple back-end support. See the Cinder multiple back-end support instructions to enable this feature.
Note
You can configure one or more iSCSI addresses by using the
hpe3par_iscsi_ips
option. Separate multiple IP addresses with a
comma (,
). When you configure multiple addresses, the driver selects
the iSCSI port with the fewest active volumes at attach time. The 3PAR
array does not allow the default port 3260 to be changed, so IP ports
need not be specified.
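The least-used-port selection described in the note can be sketched as follows; this is a simplified illustration of the documented behavior, not the driver's implementation.

```python
# Simplified sketch of choosing the iSCSI port with the fewest active
# volumes at attach time. Not the actual HPE 3PAR driver logic.

def pick_least_used_port(active_volumes_by_port):
    """Return the iSCSI IP address with the fewest active volumes."""
    return min(active_volumes_by_port, key=active_volumes_by_port.get)

# Hypothetical per-port volume counts for the two addresses configured
# in hpe3par_iscsi_ips above.
ports = {"10.10.220.253": 12, "10.10.222.234": 7}
print(pick_least_used_port(ports))
```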
Save the changes to the cinder.conf
file and restart the cinder-volume
service.
The HPE 3PAR Fibre Channel and iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
The following table contains all the configuration options supported by the HPE 3PAR Fibre Channel and iSCSI drivers.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hpe3par_api_url = | (String) 3PAR WSAPI Server URL, such as https://<3par ip>:8080/api/v1 |
hpe3par_cpg = OpenStack | (List) List of the CPG(s) to use for volume creation |
hpe3par_cpg_snap = | (String) The CPG to use for snapshots of volumes. If empty, the user CPG will be used. |
hpe3par_debug = False | (Boolean) Enable HTTP debugging to 3PAR |
hpe3par_iscsi_chap_enabled = False | (Boolean) Enable CHAP authentication for iSCSI connections. |
hpe3par_iscsi_ips = | (List) List of target iSCSI addresses to use. |
hpe3par_password = | (String) 3PAR password for the user specified in hpe3par_username |
hpe3par_snapshot_expiration = | (String) The time in hours when a snapshot expires and is deleted. This must be larger than retention. |
hpe3par_snapshot_retention = | (String) The time in hours to retain a snapshot. You can't delete it before this expires. |
hpe3par_username = | (String) 3PAR username with the 'edit' role |
The HPELeftHandISCSIDriver is based on the Block Storage service plug-in architecture. Volume operations are run by communicating with the HPE LeftHand/StoreVirtual system over HTTPS or SSH connections. HTTPS communications use the python-lefthandclient, which must be installed separately as described below.
The HPELeftHandISCSIDriver
can be configured to run using a REST client to
communicate with the array. For performance improvements and new functionality
the python-lefthandclient
must be downloaded, and HP LeftHand/StoreVirtual
Operating System software version 11.5 or higher is required on the array. To
configure the driver in standard mode, see
HPE LeftHand/StoreVirtual REST driver.
For information about how to manage HPE LeftHand/StoreVirtual storage systems, see the HPE LeftHand/StoreVirtual user documentation.
This section describes how to configure the HPE LeftHand/StoreVirtual Block Storage driver.
To use the HPE LeftHand/StoreVirtual driver, do the following:
Install the python-lefthandclient version 2.1.0 from the Python Package Index on the system with the enabled Block Storage service volume drivers.
When you use back-end assisted volume migration, both source and destination clusters must be in the same HPE LeftHand/StoreVirtual management group. The HPE LeftHand/StoreVirtual array will use native LeftHand APIs to migrate the volume. The volume cannot be migrated while it is attached or has snapshots.
Volume type support for the driver includes the ability to set the
following capabilities in the Block Storage API
cinder.api.contrib.types_extra_specs
volume type extra specs
extension module.
hpelh:provisioning
hpelh:ao
hpelh:data_pl
To work with the default filter scheduler, the key-value pairs are
case-sensitive and scoped with hpelh:
. For information about how to set
the key-value pairs and associate them with a volume type, run the following
command:
$ cinder help type-key
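As a rough illustration of how scoped keys behave, only extra specs carrying the driver's scope prefix apply to this back end, and matching is case-sensitive. The helper below is hypothetical, not scheduler or driver code.

```python
# Hypothetical illustration of scoped extra specs: keys must carry the
# case-sensitive "hpelh:" prefix to apply to this driver; other scopes
# are ignored. Not actual Block Storage scheduler code.

def scoped_specs(extra_specs, scope="hpelh:"):
    """Return only the specs in the given scope, with the prefix stripped."""
    return {k[len(scope):]: v
            for k, v in extra_specs.items()
            if k.startswith(scope)}

specs = {"hpelh:provisioning": "thin", "hpelh:ao": "true", "other:key": "x"}
print(scoped_specs(specs))
```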
The following keys require that the HPE LeftHand/StoreVirtual storage array be configured as follows:
hpelh:ao
The HPE LeftHand/StoreVirtual storage array must be configured for Adaptive Optimization.
hpelh:data_pl
The HPE LeftHand/StoreVirtual storage array must be able to support the Data Protection level specified by the extra spec.
If volume types are not used or a particular key is not set for a volume type, the following defaults are used:
hpelh:provisioning
Defaults to thin provisioning; the valid values are thin and full.
hpelh:ao
Defaults to true; the valid values are true and false.
hpelh:data_pl
Defaults to r-0, Network RAID-0 (None); the valid values are:
r-0, Network RAID-0 (None)
r-5, Network RAID-5 (Single Parity)
r-10-2, Network RAID-10 (2-Way Mirror)
r-10-3, Network RAID-10 (3-Way Mirror)
r-10-4, Network RAID-10 (4-Way Mirror)
r-6, Network RAID-6 (Dual Parity)
The HPELeftHandISCSIDriver is installed with the OpenStack software.
Install the python-lefthandclient
Python package on the OpenStack Block
Storage system.
$ pip install 'python-lefthandclient>=2.1,<3.0'
If you are not using an existing cluster, create a cluster on the HPE LeftHand storage system to be used as the cluster for creating volumes.
Make the following changes in the /etc/cinder/cinder.conf
file:
# LeftHand WS API Server URL
hpelefthand_api_url=https://10.10.0.141:8081/lhos
# LeftHand Super user username
hpelefthand_username=lhuser
# LeftHand Super user password
hpelefthand_password=lhpass
# LeftHand cluster to use for volume creation
hpelefthand_clustername=ClusterLefthand
# LeftHand iSCSI driver
volume_driver=cinder.volume.drivers.hpe.hpe_lefthand_iscsi.HPELeftHandISCSIDriver
# Should CHAPS authentication be used (default=false)
hpelefthand_iscsi_chap_enabled=false
# Enable HTTP debugging to LeftHand (default=false)
hpelefthand_debug=false
# The ratio of oversubscription when thin provisioned volumes are
# involved. The default ratio is 20.0, which means that a provisioned
# capacity can be 20 times the total physical capacity.
max_over_subscription_ratio=20.0
# This flag represents the percentage of reserved back-end capacity.
reserved_percentage=15
You can enable only one driver on each cinder instance unless you enable multiple back end support. See the Cinder multiple back end support instructions to enable this feature.
If the hpelefthand_iscsi_chap_enabled
is set to true
, the driver
will associate randomly-generated CHAP secrets with all hosts on the HPE
LeftHand/StoreVirtual system. OpenStack Compute nodes use these secrets
when creating iSCSI connections.
Important
CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.
Note
CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.
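"Randomly-generated CHAP secrets" here means an unpredictable per-host credential. A sketch of how such a secret could be produced is below; this is an illustration only, not the driver's implementation, and the helper name is hypothetical.

```python
import secrets

# Generate an unpredictable CHAP secret, similar in spirit to the
# randomly-generated secrets the driver assigns to hosts. Illustration
# only; not the HPE LeftHand driver's actual code.

def generate_chap_secret(length=16):
    """Return a URL-safe random secret truncated to the given length."""
    return secrets.token_urlsafe(length)[:length]

secret = generate_chap_secret()
print(len(secret))
```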
Save the changes to the cinder.conf
file and restart the
cinder-volume
service.
The HPE LeftHand/StoreVirtual driver is now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
Note
Previous versions implemented an HPE LeftHand/StoreVirtual CLIQ driver that enabled Block Storage service driver configuration in legacy mode. This driver has been removed as of the Mitaka release.
The HPMSAFCDriver
and HPMSAISCSIDriver
Cinder drivers allow HP MSA
2040 or 1040 arrays to be used for Block Storage in OpenStack deployments.
To use the HP MSA drivers, the following are required:
Verify that the array can be managed via an HTTPS connection. HTTP can also
be used if hpmsa_api_protocol=http
is placed into the appropriate
sections of the cinder.conf
file.
Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.
If you plan to use vdisks instead of virtual pools, create or identify one or more vdisks to be used for OpenStack storage; typically this will mean creating or setting aside one disk group for each of the A and B controllers.
Edit the cinder.conf
file to define a storage back end entry for each
storage pool on the array that will be managed by OpenStack. Each entry
consists of a unique section name, surrounded by square brackets, followed
by options specified in a key=value
format.
The hpmsa_backend_name value specifies the name of the storage pool or vdisk on the array.
The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
Each back-end entry must also specify the IP address of the array, a login with manage privileges, and the iSCSI IP addresses for the array if using the iSCSI transport protocol.
In the examples below, two back ends are defined, one for pool A and one for pool B, and a common volume_backend_name is used so that a single volume type definition can be used to allocate volumes from both pools.
iSCSI example back-end entries
[pool-a]
hpmsa_backend_name = A
volume_backend_name = hpmsa-array
volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
[pool-b]
hpmsa_backend_name = B
volume_backend_name = hpmsa-array
volume_driver = cinder.volume.drivers.san.hp.hpmsa_iscsi.HPMSAISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
hpmsa_iscsi_ips = 10.2.3.4,10.2.3.5
Fibre Channel example back-end entries
[pool-a]
hpmsa_backend_name = A
volume_backend_name = hpmsa-array
volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
[pool-b]
hpmsa_backend_name = B
volume_backend_name = hpmsa-array
volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
If any hpmsa_backend_name value refers to a vdisk rather than a virtual pool, add an additional statement hpmsa_backend_type = linear to that back-end entry.
If HTTPS is not enabled in the array, include hpmsa_api_protocol = http
in each of the back-end definitions.
If HTTPS is enabled, you can enable certificate verification with the option
hpmsa_verify_certificate=True
. You may also use the
hpmsa_verify_certificate_path
parameter to specify the path to a
CA_BUNDLE file containing CAs other than those in the default list.
Modify the [DEFAULT] section of the cinder.conf file to add an enabled_backends parameter specifying the back-end entries you added, and a default_volume_type parameter specifying the name of a volume type that you will create in the next step.
Example of [DEFAULT] section changes
[DEFAULT]
enabled_backends = pool-a,pool-b
default_volume_type = hpmsa
Create a new volume type for each distinct volume_backend_name
value
that you added in the cinder.conf
file. The example below assumes that
the same volume_backend_name=hpmsa-array
option was specified in all
of the entries, and specifies that the volume type hpmsa
can be used to
allocate volumes from any of them.
Example of creating a volume type
$ cinder type-create hpmsa
$ cinder type-key hpmsa set volume_backend_name=hpmsa-array
After modifying the cinder.conf
file, restart the cinder-volume
service.
The following table contains the configuration options that are specific to the HP MSA drivers.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hpmsa_api_protocol = https | (String) HPMSA API interface protocol. |
hpmsa_backend_name = A | (String) Pool or Vdisk name to use for volume creation. |
hpmsa_backend_type = virtual | (String) linear (for Vdisk) or virtual (for Pool). |
hpmsa_iscsi_ips = | (List) List of comma-separated target iSCSI IP addresses. |
hpmsa_verify_certificate = False | (Boolean) Whether to verify HPMSA array SSL certificate. |
hpmsa_verify_certificate_path = None | (String) HPMSA array SSL certificate path. |
The Huawei volume driver provides Block Storage functions, such as logical volumes and snapshots, for virtual machines (VMs) in OpenStack. It supports both the iSCSI and Fibre Channel protocols.
The following table describes the version mappings among the Block Storage driver, Huawei storage system and OpenStack:
Description | Storage System Version |
---|---|
Create, delete, expand, attach, detach, manage, and unmanage volumes. Create, delete, manage, unmanage, and back up a snapshot. Create, delete, and update a consistency group. Create and delete a cgsnapshot. Copy an image to a volume. Copy a volume to an image. Create a volume from a snapshot. Clone a volume. QoS. | OceanStor T series V2R2 C00/C20/C30; OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00; OceanStor 2200V3 V300R005C00; OceanStor 2600V3 V300R005C00; OceanStor 18500/18800 V1R1C00/C20/C30, V3R3C00 |
Volume migration. Auto zoning. SmartTier. SmartCache. Smart Thin/Thick. Replication V2.1. | OceanStor T series V2R2 C00/C20/C30; OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00; OceanStor 2200V3 V300R005C00; OceanStor 2600V3 V300R005C00; OceanStor 18500/18800 V1R1C00/C20/C30 |
SmartPartition | OceanStor T series V2R2 C00/C20/C30; OceanStor V3 V3R1C10/C20, V3R2C10, V3R3C00; OceanStor 2600V3 V300R005C00; OceanStor 18500/18800 V1R1C00/C20/C30 |
Before installation, delete all the installation files of the Huawei OpenStack driver. The default path may be:
/usr/lib/python2.7/dist-packages/cinder/volume/drivers/huawei.
Note
In this example, the version of Python is 2.7. If another version is used, make corresponding changes to the driver path.
Copy the Block Storage driver to the Block Storage driver installation directory. Refer to step 1 to find the default directory.
Refer to chapter Volume driver configuration to complete the configuration.
After configuration, restart the cinder-volume
service:
Check the status of services using the cinder service-list
command. If the State
of cinder-volume
is up
, that means
cinder-volume
is okay.
# cinder service-list
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host            | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller      | nova | enabled | up    | 2016-02-01T16:26:00.000000 | -               |
| cinder-volume    | controller@v3r3 | nova | enabled | up    | 2016-02-01T16:25:53.000000 | -               |
+------------------+-----------------+------+---------+-------+----------------------------+-----------------+
This section describes how to configure the Huawei volume driver for either iSCSI storage or Fibre Channel storage.
Prerequisites
When creating a volume from an image, install the multipath tool and add the following configuration keys in the [DEFAULT] configuration group of the /etc/cinder/cinder.conf file:
use_multipath_for_image_xfer = True
enforce_multipath_for_image_xfer = True
To configure the volume driver, follow the steps below:
In /etc/cinder
, create a Huawei-customized driver configuration file.
The file format is XML.
Change the name of the driver configuration file based on the site
requirements, for example, cinder_huawei_conf.xml
.
Configure parameters in the driver configuration file.
Each product has its own value for the Product
parameter under the
Storage
xml block. The full xml file with the appropriate Product
parameter is as below:
<?xml version="1.0" encoding="UTF-8"?>
<config>
<Storage>
<Product>PRODUCT</Product>
<Protocol>iSCSI</Protocol>
<ControllerIP1>x.x.x.x</ControllerIP1>
<UserName>xxxxxxxx</UserName>
<UserPassword>xxxxxxxx</UserPassword>
</Storage>
<LUN>
<LUNType>xxx</LUNType>
<StripUnitSize>xxx</StripUnitSize>
<WriteType>xxx</WriteType>
<MirrorSwitch>xxx</MirrorSwitch>
<Prefetch Type="xxx" Value="xxx" />
<StoragePool Name="xxx" />
<StoragePool Name="xxx" />
</LUN>
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
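One way to sanity-check such a driver configuration file before restarting the service is to parse it with Python's standard xml.etree module. This is a hedged example: the sample content follows the template above, and all values are placeholders.

```python
import xml.etree.ElementTree as ET

# Parse a Huawei-style driver configuration file and pull out a few
# required values. Illustration only; the layout follows the template
# shown above, with placeholder values.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<config>
  <Storage>
    <Product>V3</Product>
    <Protocol>iSCSI</Protocol>
    <ControllerIP1>192.0.2.10</ControllerIP1>
    <UserName>admin</UserName>
    <UserPassword>secret</UserPassword>
  </Storage>
  <LUN>
    <StoragePool Name="pool01" />
    <StoragePool Name="pool02" />
  </LUN>
</config>"""

root = ET.fromstring(SAMPLE)
product = root.findtext("Storage/Product")
protocol = root.findtext("Storage/Protocol")
pools = [p.get("Name") for p in root.findall("LUN/StoragePool")]
print(product, protocol, pools)
```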
The corresponding Product values for each product are as below:
For T series V2
<Product>TV2</Product>
For V3
<Product>V3</Product>
For OceanStor 18000 series
<Product>18000</Product>
The Protocol
value to be used is iSCSI
for iSCSI and FC
for
Fibre Channel as shown below:
# For iSCSI
<Protocol>iSCSI</Protocol>
# For Fibre channel
<Protocol>FC</Protocol>
Note
For details about the parameters in the configuration file, see the Configuration file parameters section.
Configure the cinder.conf
file.
In the [default]
block of /etc/cinder/cinder.conf
, add the following
contents:
volume_driver indicates the loaded driver.
cinder_huawei_conf_file indicates the specified Huawei-customized configuration file.
hypermetro_devices indicates the list of remote storage devices for which Hypermetro is to be used.
The added content in the [default] block of /etc/cinder/cinder.conf, with the appropriate volume_driver and the list of remote storage devices values for each product, is as below:
volume_driver = VOLUME_DRIVER
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
hypermetro_devices = {STORAGE_DEVICE1, STORAGE_DEVICE2....}
Note
By default, the value for hypermetro_devices
is None
.
The volume_driver value for each product is as below:
# For iSCSI
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiISCSIDriver
# For FC
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
Run the service cinder-volume restart command to restart the Block Storage service.
To configure iSCSI Multipathing, follow the steps below:
Create a port group on the storage device using the DeviceManager
and add
service links that require multipathing into the port group.
Log in to the storage device using CLI commands and enable the multiport discovery switch in the multipathing.
developer:/>change iscsi discover_multiport switch=on
Add the port group settings in the Huawei-customized driver configuration file and configure the port group name needed by an initiator.
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxx" TargetPortGroup="xxxx" />
</iSCSI>
Enable the multipathing switch of the Compute service module.
Add iscsi_use_multipath = True
in [libvirt]
of
/etc/nova/nova.conf
.
Run the service nova-compute restart command to restart the
nova-compute
service.
On a public network, any application server whose IP address resides on the same network segment as that of the storage system's iSCSI host port can access the storage system and perform read and write operations on it. This poses risks to the data security of the storage system. To ensure the storage system's access security, you can configure CHAP authentication to control application servers' access to the storage system.
Adjust the driver configuration file as follows:
<Initiator ALUA="xxx" CHAPinfo="xxx" Name="xxx" TargetIP="x.x.x.x"/>
ALUA indicates a multipathing mode: 0 indicates that ALUA is disabled, and 1 indicates that ALUA is enabled. CHAPinfo indicates the user name and password authenticated by CHAP, in the format mmuser; mm-user@storage. The user name and password are separated by a semicolon (;).
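The CHAPinfo format can be illustrated with a small parsing sketch; the helper below is hypothetical and not part of the driver.

```python
# Split a CHAPinfo value of the form "username;password" as described
# above. Hypothetical helper for illustration; not driver code.

def parse_chapinfo(chapinfo):
    """Return (user, password) from a semicolon-separated CHAPinfo value."""
    user, _, password = chapinfo.partition(";")
    return user.strip(), password.strip()

print(parse_chapinfo("mmuser; mm-user@storage"))
```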
Multiple storage systems configuration example:
enabled_backends = v3_fc, 18000_fc
[v3_fc]
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_v3_fc.xml
volume_backend_name = HuaweiTFCDriver
[18000_fc]
volume_driver = cinder.volume.drivers.huawei.huawei_driver.HuaweiFCDriver
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf_18000_fc.xml
volume_backend_name = HuaweiFCDriver
This section describes mandatory and optional configuration file parameters of the Huawei volume driver.
Parameter | Default value | Description | Applicable to |
---|---|---|---|
Product | - | Type of a storage product. Possible values are TV2, 18000 and V3. | All |
Protocol | - | Type of a connection protocol. The possible value is either iSCSI or FC. | All |
RestURL | - | Access address of the REST interface, https://x.x.x.x/devicemanager/rest/. The value x.x.x.x indicates the management IP address. OceanStor 18000 uses the preceding setting, and V2 and V3 require you to add the port number 8088, for example, https://x.x.x.x:8088/deviceManager/rest/. If you need to configure multiple RestURLs, separate them by semicolons (;). | T series V2, V3, 18000 |
UserName | - | User name of a storage administrator. | All |
UserPassword | - | Password of a storage administrator. | All |
StoragePool | - | Name of a storage pool to be used. If you need to configure multiple storage pools, separate them by semicolons (;). | All |
Note
The value of StoragePool
cannot contain Chinese characters.
Parameter | Default value | Description | Applicable to |
---|---|---|---|
LUNType | Thin | Type of the LUNs to be created. The value can be Thick or Thin. | All |
WriteType | 1 | Cache write type. Possible values are: 1 (write back), 2 (write through), and 3 (mandatory write back). | All |
MirrorSwitch | 1 | Cache mirroring or not. Possible values are: 0 (without mirroring) or 1 (with mirroring). | All |
LUNcopyWaitInterval | 5 | After LUN copy is enabled, the plug-in frequently queries the copy progress. You can set a value to specify the query interval. | T series V2, V3, 18000 |
Timeout | 432000 | Timeout interval for waiting for the LUN copy of a storage device to complete. The unit is second. | T series V2, V3, 18000 |
Initiator Name | - | Name of a compute node initiator. | All |
Initiator TargetIP | - | IP address of the iSCSI port provided for compute nodes. | All |
Initiator TargetPortGroup | - | IP address of the iSCSI target port that is provided for compute nodes. | T series V2, V3, 18000 |
DefaultTargetIP | - | Default IP address of the iSCSI target port that is provided for compute nodes. | All |
OSType | Linux | Operating system of the Nova compute node's host. | All |
HostIP | - | IP address of the Nova compute node's host. | All |
Important
The Initiator Name
, Initiator TargetIP
, and
Initiator TargetPortGroup
are ISCSI
parameters and therefore not
applicable to FC
.
IBM General Parallel File System (GPFS) is a cluster file system that provides concurrent access to file systems from multiple nodes. The storage provided by these nodes can be direct attached, network attached, SAN attached, or a combination of these methods. GPFS provides many features beyond common data access, including data replication, policy based storage management, and space efficient file snapshot and clone operations.
The GPFS driver enables the use of GPFS in a fashion similar to that of the NFS driver. With the GPFS driver, instances do not actually access a storage device at the block level. Instead, volume backing files are created in a GPFS file system and mapped to instances, which emulate a block device.
Note
GPFS software must be installed and running on nodes where Block
Storage and Compute services run in the OpenStack environment. A
GPFS file system must also be created and mounted on these nodes
before starting the cinder-volume
service. The details of these
GPFS specific steps are covered in GPFS: Concepts, Planning, and
Installation Guide and GPFS: Administration and Programming
Reference.
Optionally, the Image service can be configured to store images on a GPFS file system. When a Block Storage volume is created from an image, if both image data and volume data reside in the same GPFS file system, the data from the image file is moved efficiently to the volume file using a copy-on-write optimization strategy.
To use the Block Storage service with the GPFS driver, first set the
volume_driver
in the cinder.conf
file:
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
The following table contains the configuration options supported by the GPFS driver.
Note
The gpfs_images_share_mode
flag is only valid if the Image
Service is configured to use GPFS with the gpfs_images_dir
flag.
When the value of this flag is copy_on_write
, the paths
specified by the gpfs_mount_point_base
and gpfs_images_dir
flags must both reside in the same GPFS file system and in the same
GPFS file set.
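For example, a cinder.conf fragment that satisfies the constraint above might look like the following sketch. The paths are hypothetical; both must reside in the same GPFS file system and file set.

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.gpfs.GPFSDriver
# Example paths only; both must be in the same GPFS file system and
# file set for copy_on_write to be valid.
gpfs_mount_point_base = /gpfs/fs1/cinder/volumes
gpfs_images_dir = /gpfs/fs1/glance/images
gpfs_images_share_mode = copy_on_write
```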
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
gpfs_images_dir = None | (String) Specifies the path of the Image service repository in GPFS. Leave undefined if not storing images in GPFS. |
gpfs_images_share_mode = None | (String) Specifies the type of image copy to be used. Set this when the Image service repository also uses GPFS so that image files can be transferred efficiently from the Image service to the Block Storage service. There are two valid values: "copy" specifies that a full copy of the image is made; "copy_on_write" specifies that copy-on-write optimization strategy is used and unmodified blocks of the image file are shared efficiently. |
gpfs_max_clone_depth = 0 | (Integer) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots or clones. A lengthy chain of copy-on-write snapshots or clones can have a negative impact on performance, but improves space utilization. 0 indicates unlimited clone depth. |
gpfs_mount_point_base = None | (String) Specifies the path of the GPFS directory where Block Storage volume and snapshot files are stored. |
gpfs_sparse_volumes = True | (Boolean) Specifies that volumes are created as sparse files which initially consume no space. If set to False, the volume is created as a fully allocated file, in which case, creation may take a significantly longer time. |
gpfs_storage_pool = system | (String) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used. |
nas_host = | (String) IP address or hostname of the NAS system. |
nas_login = admin | (String) User name to connect to the NAS system. |
nas_password = | (String) Password to connect to the NAS system. |
nas_private_key = | (String) Filename of the private key to use for SSH authentication. |
nas_ssh_port = 22 | (Port number) SSH port to use to connect to the NAS system. |
It is possible to specify additional volume configuration options on a per-volume basis by specifying volume metadata. The volume is created using the specified options. Changing the metadata after the volume is created has no effect. The volume creation options supported by the GPFS volume driver include fstype, fslabel, and dio, as shown in the following example.
This example shows the creation of a 50GB volume with an ext4
file
system labeled newfs
and direct IO enabled:
$ cinder create --metadata fstype=ext4 fslabel=newfs dio=yes --display-name volume_1 50
Volume snapshots are implemented using the GPFS file clone feature. Whenever a new snapshot is created, the snapshot file is efficiently created as a read-only clone parent of the volume, and the volume file uses copy-on-write optimization strategy to minimize data movement.
Similarly when a new volume is created from a snapshot or from an
existing volume, the same approach is taken. The same approach is also
used when a new volume is created from an Image service image, if the
source image is in raw format, and gpfs_images_share_mode
is set to
copy_on_write
.
The GPFS driver supports the encrypted volume back-end feature. To encrypt a volume at rest, specify the extra specification gpfs_encryption_rest = True.
The volume management driver for Storwize family and SAN Volume Controller (SVC) provides OpenStack Compute instances with access to IBM Storwize family or SVC storage systems.
The Storwize family or SVC system must be configured for iSCSI, Fibre Channel, or both.
If using iSCSI, each Storwize family or SVC node should have at least one iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address associated with the volume’s preferred node (if available) to attach the volume to the instance, otherwise it uses the first available iSCSI IP address of the system. The driver obtains the iSCSI IP address directly from the storage system. You do not need to provide these iSCSI IP addresses directly to the driver.
Note
If using iSCSI, ensure that the compute nodes have iSCSI network access to the Storwize family or SVC system.
If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. The driver uses all available WWPNs to attach the volume to the instance. The driver obtains the WWPNs directly from the storage system. You do not need to provide these WWPNs directly to the driver.
Note
If using FC, ensure that the compute nodes have FC connectivity to the Storwize family or SVC system.
If using iSCSI for data access and the
storwize_svc_iscsi_chap_enabled
is set to True
, the driver will
associate randomly-generated CHAP secrets with all hosts on the Storwize
family system. The compute nodes use these secrets when creating
iSCSI connections.
Warning
CHAP secrets are added to existing hosts as well as newly-created ones. If the CHAP option is enabled, hosts will not be able to access the storage without the generated secrets.
Note
Not all OpenStack Compute drivers support CHAP authentication. Please check compatibility before using.
Note
CHAP secrets are passed from OpenStack Block Storage to Compute in clear text. This communication should be secured to ensure that CHAP secrets are not discovered.
The IBM Storwize/SVC driver can allocate volumes in multiple pools.
The pools should be created in advance and be provided to the driver
using the storwize_svc_volpool_name
configuration flag in the form
of a comma-separated list.
For the complete list of configuration flags, see Storwize family and SVC driver options in cinder.conf.
The driver requires access to the Storwize family or SVC system
management interface. The driver communicates with the management using
SSH. The driver should be provided with the Storwize family or SVC
management IP using the san_ip
flag, and the management port should
be provided by the san_ssh_port
flag. By default, the port value is
configured to be port 22 (SSH). Also, you can set the secondary
management IP using the storwize_san_secondary_ip
flag.
Note
Make sure the compute node running the cinder-volume management driver has SSH network access to the storage system.
To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have an Administrator role. It is suggested to create a new user for the management driver. Please consult with your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.
Note
When creating a new user on the Storwize or SVC system, make sure the user belongs to the Administrator group or to another group that has an Administrator role.
If using password authentication, assign a password to the user on the
Storwize or SVC system. The driver configuration flags for the user and
password are san_login
and san_password
, respectively.
If you are using SSH key pair authentication, create SSH private and
public keys using the instructions below or by any other method.
Associate the public key with the user by uploading the public key:
select the choose file option in the Storwize family or SVC
management GUI under SSH public key. Alternatively, you may
associate the SSH public key using the command-line interface; details can
be found in the Storwize and SVC documentation. The private key should be
provided to the driver using the san_private_key
configuration flag.
You can create an SSH key pair using OpenSSH, by running:
$ ssh-keygen -t rsa
The command prompts for a file to save the key pair. For example, if you
select key
as the filename, two files are created: key
and
key.pub
. The key
file holds the private SSH key and key.pub
holds the public SSH key.
The command also prompts for a passphrase, which should be left empty.
The private key file should be provided to the driver using the
san_private_key
configuration flag. The public key should be
uploaded to the Storwize family or SVC system using the storage
management GUI or command-line interface.
Note
Ensure that Cinder has read permissions on the private key file.
Set the volume driver to the Storwize family and SVC driver by setting
the volume_driver
option in the cinder.conf
file as follows:
iSCSI:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
FC:
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver
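Putting the preceding flags together, a minimal back-end sketch in cinder.conf might look like the following. The IP address, credentials, and pool name are placeholders; use either san_password or san_private_key, as described above:

```ini
[DEFAULT]
enabled_backends = storwize-iscsi

[storwize-iscsi]
# iSCSI variant of the Storwize family and SVC driver
volume_driver = cinder.volume.drivers.ibm.storwize_svc.storwize_svc_iscsi.StorwizeSVCISCSIDriver
volume_backend_name = storwize-iscsi
# Management interface of the Storwize family or SVC system (SSH, port 22 by default)
san_ip = 10.0.0.10
san_ssh_port = 22
# Password-based authentication; alternatively, set san_private_key instead
san_login = openstack
san_password = SAN_PASSWORD
# san_private_key = /etc/cinder/storwize_rsa_key
# Comma-separated list of pre-created pools
storwize_svc_volpool_name = openstackpool
```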
The following options specify default values for all volumes. Some can be overridden using volume types, which are described below.
Flag name | Type | Default | Description |
---|---|---|---|
san_ip | Required | | Management IP or host name |
san_ssh_port | Optional | 22 | Management port |
san_login | Required | | Management login username |
san_password | Required [1] | | Management login password |
san_private_key | Required | | Management login SSH private key |
storwize_svc_volpool_name | Required | | Default pool name for volumes |
storwize_svc_vol_rsize | Optional | 2 | Initial physical allocation (percentage) [2] |
storwize_svc_vol_warning | Optional | 0 (disabled) | Space allocation warning threshold (percentage) |
storwize_svc_vol_autoexpand | Optional | True | Enable or disable volume auto expand [3] |
storwize_svc_vol_grainsize | Optional | 256 | Volume grain size in KB |
storwize_svc_vol_compression | Optional | False | Enable or disable Real-time Compression [4] |
storwize_svc_vol_easytier | Optional | True | Enable or disable Easy Tier [5] |
storwize_svc_vol_iogrp | Optional | 0 | The I/O group in which to allocate vdisks |
storwize_svc_flashcopy_timeout | Optional | 120 | FlashCopy timeout threshold [6] (seconds) |
storwize_svc_iscsi_chap_enabled | Optional | True | Configure CHAP authentication for iSCSI connections |
storwize_svc_multihost_enabled | Optional | True | Enable mapping vdisks to multiple hosts [7] |
storwize_svc_vol_nofmtdisk | Optional | False | Enable or disable fast format [8] |
[1] | The authentication requires either a password (san_password ) or
SSH private key (san_private_key ). One must be specified. If both
are specified, the driver uses only the SSH private key. |
[2] | The driver creates thin-provisioned volumes by default. The
storwize_svc_vol_rsize flag defines the initial physical
allocation percentage for thin-provisioned volumes, or if set to
-1 , the driver creates full allocated volumes. More details about
the available options are available in the Storwize family and SVC
documentation. |
[3] | Defines whether thin-provisioned volumes can be auto expanded by the
storage system, a value of True means that auto expansion is
enabled, a value of False disables auto expansion. Details about
this option can be found in the –autoexpand flag of the Storwize
family and SVC command line interface mkvdisk command. |
[4] | Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work. |
[5] | Defines whether Easy Tier is used for the volumes created with OpenStack. Details on EasyTier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work. |
[6] | The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes). |
[7] | This option allows the driver to map a vdisk to more than one host at
a time. This scenario occurs during migration of a virtual machine
with an attached volume; the volume is simultaneously mapped to both
the source and destination compute hosts. If your deployment does not
require attaching vdisks to multiple hosts, setting this flag to
False will provide added safety. |
[8] | Defines whether or not the fast formatting of thick-provisioned
volumes is disabled at creation. The default value is False and a
value of True means that fast format is disabled. Details about
this option can be found in the –nofmtdisk flag of the Storwize
family and SVC command-line interface mkvdisk command. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
storwize_san_secondary_ip = None | (String) Specifies secondary management IP or hostname to be used if san_ip is invalid or becomes inaccessible. |
storwize_svc_allow_tenant_qos = False | (Boolean) Allow tenants to specify QOS on create |
storwize_svc_flashcopy_rate = 50 | (Integer) Specifies the Storwize FlashCopy copy rate to be used when creating a full volume copy. The default rate is 50, and valid rates are 1-100. |
storwize_svc_flashcopy_timeout = 120 | (Integer) Maximum number of seconds to wait for FlashCopy to be prepared. |
storwize_svc_iscsi_chap_enabled = True | (Boolean) Configure CHAP authentication for iSCSI connections (Default: Enabled) |
storwize_svc_multihostmap_enabled = True | (Boolean) DEPRECATED: This option no longer has any effect. It is deprecated and will be removed in the next release. |
storwize_svc_multipath_enabled = False | (Boolean) Connect with multipath (FC only; iSCSI multipath is controlled by Nova) |
storwize_svc_stretched_cluster_partner = None | (String) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2" |
storwize_svc_vol_autoexpand = True | (Boolean) Storage system autoexpand parameter for volumes (True/False) |
storwize_svc_vol_compression = False | (Boolean) Storage system compression option for volumes |
storwize_svc_vol_easytier = True | (Boolean) Enable Easy Tier for volumes |
storwize_svc_vol_grainsize = 256 | (Integer) Storage system grain size parameter for volumes (32/64/128/256) |
storwize_svc_vol_iogrp = 0 | (Integer) The I/O group in which to allocate volumes |
storwize_svc_vol_nofmtdisk = False | (Boolean) Specifies that the volume not be formatted during creation. |
storwize_svc_vol_rsize = 2 | (Integer) Storage system space-efficiency parameter for volumes (percentage) |
storwize_svc_vol_warning = 0 | (Integer) Storage system threshold for volume capacity warnings (percentage) |
storwize_svc_volpool_name = volpool | (List) Comma separated list of storage system storage pools for volumes. |
The IBM Storwize/SVC driver exposes capabilities that can be added to
the extra specs
of volume types, and used by the filter
scheduler to determine placement of new volumes. Make sure to prefix
these keys with capabilities:
to indicate that the scheduler should
use them. The following extra specs
are supported:
capabilities:volume_back-end_name
- Specify a specific back-end
where the volume should be created. The back-end name is a
concatenation of the name of the IBM Storwize/SVC storage system as
shown in lssystem
, an underscore, and the name of the pool (mdisk
group). For example:
capabilities:volume_back-end_name=myV7000_openstackpool
capabilities:compression_support
- Specify a back-end according to
compression support. A value of True
should be used to request a
back-end that supports compression, and a value of False
will
request a back-end that does not support compression. If you do not
have constraints on compression support, do not set this key. Note
that specifying True
does not enable compression; it only
requests that the volume be placed on a back-end that supports
compression. Example syntax:
capabilities:compression_support='<is> True'
capabilities:easytier_support
- Similar semantics as the
compression_support
key, but for specifying according to support
of the Easy Tier feature. Example syntax:
capabilities:easytier_support='<is> True'
capabilities:storage_protocol
- Specifies the connection protocol
used to attach volumes of this type to instances. Legal values are
iSCSI
and FC
. This extra specs
value is used for both placement
and setting the protocol used for this volume. In the example syntax,
note <in>
is used as opposed to <is>
which is used in the
previous examples.
capabilities:storage_protocol='<in> FC'
Volume types can also be used to pass options to the IBM Storwize/SVC
driver, which override the default values set in the configuration
file. In contrast to the previous examples, where the capabilities
scope was used to pass parameters to the Cinder scheduler, options can be
passed to the IBM Storwize/SVC driver with the drivers scope.
The following extra specs
keys are supported by the IBM Storwize/SVC
driver:
These keys have the same semantics as their counterparts in the
configuration file. They are set similarly; for example, rsize=2
or
compression=False
.
In the following example, we create a volume type to specify a controller that supports iSCSI and compression, to use iSCSI when attaching the volume, and to enable compression:
$ cinder type-create compressed
$ cinder type-key compressed set capabilities:storage_protocol='<in> iSCSI' capabilities:compression_support='<is> True' drivers:compression=True
We can then create a 50GB volume using this type:
$ cinder create --display-name "compressed volume" --volume-type compressed 50
Volume types can be used, for example, to provide users with different
The Storwize driver provides QOS support for storage volumes by
controlling the I/O amount. QOS is enabled by editing the
/etc/cinder/cinder.conf file and setting the
storwize_svc_allow_tenant_qos option to True.
There are three ways to set the Storwize IOThrottling parameter for
storage volumes:
- Add the qos:IOThrottling key to a QOS specification and associate it with a volume type.
- Add the qos:IOThrottling key to an extra specification of a volume type.
- Add the qos:IOThrottling key to the storage volume metadata.
Note
If you are changing a volume type with QOS to a new volume type without QOS, the QOS configuration settings will be removed.
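As a sketch of the first approach, a QOS specification carrying the qos:IOThrottling key can be created and associated with a volume type using the cinder CLI. The specification name, volume type name, and throttling value below are hypothetical:

```
$ cinder qos-create storwize-iops qos:IOThrottling=1000
$ cinder type-create storwize-qos
$ cinder qos-associate QOS_SPEC_ID VOLUME_TYPE_ID
```

Here, QOS_SPEC_ID and VOLUME_TYPE_ID are the IDs reported by the first two commands.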
In the context of OpenStack Block Storage’s volume migration feature, the IBM Storwize/SVC driver enables the storage’s virtualization technology. When migrating a volume from one pool to another, the volume will appear in the destination pool almost immediately, while the storage moves the data in the background.
Note
To enable this feature, both pools involved in a given volume
migration must have the same values for extent_size
. If the
pools have different values for extent_size
, the data will still
be moved directly between the pools (not host-side copy), but the
operation will be synchronous.
The IBM Storwize/SVC driver allows for extending a volume’s size, but only for volumes without snapshots.
Snapshots are implemented using FlashCopy with no background copy (space-efficient). Volume clones (volumes created from existing volumes) are implemented with FlashCopy, but with background copy enabled. This means that volume clones are independent, full copies. While this background copy is taking place, attempting to delete or extend the source volume will result in that operation waiting for the copy to complete.
The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:
Note
When you change the rsize
, grainsize
or compression
properties, volume copies are asynchronously synchronized on the
array.
Note
To change the iogrp
property, IBM Storwize/SVC firmware version
6.4.0 or later is required.
The IBM Storage Driver for OpenStack is a Block Storage driver that supports IBM XIV, IBM Spectrum Accelerate, IBM FlashSystem A9000, IBM FlashSystem A9000R, and IBM DS8000 storage systems over Fibre Channel and iSCSI.
Set the following in your cinder.conf
file, and use the following options
to configure it.
volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
proxy = storage.proxy.IBMStorageProxy | (String) Proxy driver that connects to the IBM Storage Array |
san_clustername = | (String) Cluster name to use for creating volumes |
san_ip = | (String) IP address of SAN controller |
san_login = admin | (String) Username for SAN controller |
san_password = | (String) Password for SAN controller |
Note
To use the IBM Storage Driver for OpenStack you must download and install the package. For more information, see IBM Support Portal - Select Fixes.
For full documentation, see IBM Knowledge Center.
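Based on the options in the table above, a minimal cinder.conf sketch for the IBM Storage Driver might look like the following. All values are placeholders except the proxy class, which is the table's default:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.ibm_storage.IBMStorageDriver
proxy = storage.proxy.IBMStorageProxy
san_ip = 10.0.0.20
san_login = admin
san_password = SAN_PASSWORD
san_clustername = cluster_1
```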
The volume driver for FlashSystem provides OpenStack Block Storage hosts with access to IBM FlashSystems.
The volume driver requires a pre-defined array. You must create an array on the FlashSystem before using the volume driver. An existing array can also be used and existing data will not be deleted.
Note
FlashSystem can only create one array, so no configuration option is needed for the IBM FlashSystem driver to assign it.
The driver requires access to the FlashSystem management interface using
SSH. It should be provided with the FlashSystem management IP using the
san_ip
flag, and the management port should be provided by the
san_ssh_port
flag. By default, the port value is configured to be
port 22 (SSH).
Note
Make sure the compute node running the cinder-volume
driver has SSH
network access to the storage system.
Using password authentication, assign a password to the user on the FlashSystem. For more detail, see the driver configuration flags for the user and password here: Enable IBM FlashSystem FC driver or Enable IBM FlashSystem iSCSI driver.
Using Fibre Channel (FC), each FlashSystem node should have at least one
WWPN port configured. If the flashsystem_multipath_enabled
flag is
set to True
in the Block Storage service configuration file, the driver
uses all available WWPNs to attach the volume to the instance. If the flag is
not set, the driver uses the WWPN associated with the volume’s preferred node
(if available). Otherwise, it uses the first available WWPN of the system. The
driver obtains the WWPNs directly from the storage system. You do not need to
provide these WWPNs to the driver.
Note
Using FC, ensure that the block storage hosts have FC connectivity to the FlashSystem.
Set the volume driver to the FlashSystem driver by setting the
volume_driver
option in the cinder.conf
configuration file,
as follows:
volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver
To enable the IBM FlashSystem FC driver, configure the following options in the
cinder.conf
configuration file:
Flag name | Type | Default | Description |
---|---|---|---|
san_ip | Required | | Management IP or host name |
san_ssh_port | Optional | 22 | Management port |
san_login | Required | | Management login user name |
san_password | Required | | Management login password |
flashsystem_connection_protocol | Required | | Connection protocol should be set to FC |
flashsystem_multipath_enabled | Required | | Enable multipath for FC connections |
flashsystem_multihost_enabled | Optional | True | Enable mapping vdisks to multiple hosts [1] |
[1] | This option allows the driver to map a vdisk to more than one host at
a time. This scenario occurs during migration of a virtual machine
with an attached volume; the volume is simultaneously mapped to both
the source and destination compute hosts. If your deployment does not
require attaching vdisks to multiple hosts, setting this flag to
False will provide added safety. |
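Combining the flags above, a minimal cinder.conf sketch for the FC driver might be (the address and credentials are placeholders):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.flashsystem_fc.FlashSystemFCDriver
san_ip = 10.0.0.30
san_ssh_port = 22
san_login = superuser
san_password = SAN_PASSWORD
flashsystem_connection_protocol = FC
flashsystem_multipath_enabled = True
```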
Using iSCSI, each FlashSystem node should have at least one iSCSI port configured. The iSCSI IP addresses of the IBM FlashSystem can be obtained from the FlashSystem GUI or CLI. For more information, see the appropriate IBM Redbook for the FlashSystem.
Note
Using iSCSI, ensure that the compute nodes have iSCSI network access to the IBM FlashSystem.
Set the volume driver to the FlashSystem driver by setting the
volume_driver
option in the cinder.conf
configuration file, as
follows:
volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver
To enable IBM FlashSystem iSCSI driver, configure the following options
in the cinder.conf
configuration file:
Flag name | Type | Default | Description |
---|---|---|---|
san_ip | Required | | Management IP or host name |
san_ssh_port | Optional | 22 | Management port |
san_login | Required | | Management login user name |
san_password | Required | | Management login password |
flashsystem_connection_protocol | Required | | Connection protocol should be set to iSCSI |
flashsystem_multihost_enabled | Optional | True | Enable mapping vdisks to multiple hosts [2] |
iscsi_ip_address | Required | | Set to one of the iSCSI IP addresses obtained by FlashSystem GUI or CLI [3] |
flashsystem_iscsi_portid | Required | | Set to the id of the iscsi_ip_address obtained by FlashSystem GUI or CLI [4] |
[2] | This option allows the driver to map a vdisk to more than one host at
a time. This scenario occurs during migration of a virtual machine
with an attached volume; the volume is simultaneously mapped to both
the source and destination compute hosts. If your deployment does not
require attaching vdisks to multiple hosts, setting this flag to
False will provide added safety. |
[3] | On the cluster of the FlashSystem, the iscsi_ip_address column is the
seventh column IP_address of the output of lsportip . |
[4] | On the cluster of the FlashSystem, port ID column is the first
column id of the output of lsportip ,
not the sixth column port_id . |
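Combining the flags above, a minimal cinder.conf sketch for the iSCSI driver might be (the addresses, credentials, and port ID are placeholders; obtain the real iSCSI IP address and port ID from the lsportip output as described in the footnotes):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.flashsystem_iscsi.FlashSystemISCSIDriver
san_ip = 10.0.0.30
san_ssh_port = 22
san_login = superuser
san_password = SAN_PASSWORD
flashsystem_connection_protocol = iSCSI
iscsi_ip_address = 10.0.1.30
flashsystem_iscsi_portid = 1
```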
These operations are supported:
The DISCO driver supports the following features:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
clone_check_timeout = 3600 | (Integer) How long we check whether a clone is finished before we give up |
clone_volume_timeout = 680 | (Integer) Create clone volume timeout. |
disco_client = 127.0.0.1 | (IP) The IP of DMS client socket server |
disco_client_port = 9898 | (Port number) The port to connect DMS client socket server |
disco_wsdl_path = /etc/cinder/DISCOService.wsdl | (String) Path to the wsdl file to communicate with DISCO request manager |
restore_check_timeout = 3600 | (Integer) How long we check whether a restore is finished before we give up |
retry_interval = 1 | (Integer) How long we wait before retrying to get an item detail |
Kaminario’s K2 all-flash array uses a software-defined architecture that delivers predictable performance, scalability, and cost-efficiency.
Kaminario’s K2 all-flash iSCSI and FC arrays can be used in
OpenStack Block Storage for providing block storage using
KaminarioISCSIDriver
class and KaminarioFCDriver
class respectively.
The krest python library should be installed on the Block Storage node:
# pip install krest
Edit the /etc/cinder/cinder.conf file and define a configuration
group for the iSCSI/FC back end.
[DEFAULT]
enabled_backends = kaminario
# Use DriverFilter in combination of other filters to use 'filter_function'
# scheduler_default_filters = DriverFilter,CapabilitiesFilter
[kaminario]
# Management IP of Kaminario K2 All-Flash iSCSI/FC array
san_ip = 10.0.0.10
# Management username of Kaminario K2 All-Flash iSCSI/FC array
san_login = username
# Management password of Kaminario K2 All-Flash iSCSI/FC array
san_password = password
# Enable Kaminario K2 iSCSI/FC driver
volume_driver = cinder.volume.drivers.kaminario.kaminario_iscsi.KaminarioISCSIDriver
# volume_driver = cinder.volume.drivers.kaminario.kaminario_fc.KaminarioFCDriver
# Backend name
volume_backend_name = kaminario
# K2 driver calculates max_oversubscription_ratio on setting below
# option as True. Default value is False
# auto_calc_max_oversubscription_ratio = False
# Set a limit on total number of volumes to be created on K2 array, for example:
# filter_function = "capabilities.total_volumes < 250"
# For replication, replication_device must be set and the replication peer must be configured
# on the primary and the secondary K2 arrays
# Syntax:
# replication_device = backend_id:<s-array-ip>,login:<s-username>,password:<s-password>,rpo:<value>
# where:
# s-array-ip is the secondary K2 array IP
# rpo must be either 60(1 min) or multiple of 300(5 min)
# Example:
# replication_device = backend_id:10.0.0.50,login:kaminario,password:kaminario,rpo:300
# Suppress requests library SSL certificate warnings on setting this option as True
# Default value is 'False'
# suppress_requests_ssl_warnings = False
Save the changes to the /etc/cinder/cinder.conf
file and
restart the cinder-volume
service.
The following table contains the configuration options that are specific to the Kaminario K2 FC and iSCSI Block Storage drivers.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
kaminario_nodedup_substring = K2-nodedup | (String) DEPRECATED: If the volume-type name contains this substring, a nodedup volume will be created; otherwise a dedup volume will be created. This option is deprecated in favour of ‘kaminario:thin_prov_type’ in extra-specs and will be removed in the next release. |
The LenovoFCDriver
and LenovoISCSIDriver
Cinder drivers allow
Lenovo S3200 or S2200 arrays to be used for block storage in OpenStack
deployments.
To use the Lenovo drivers, the following are required:
Verify that the array can be managed using an HTTPS connection. HTTP can
also be used if lenovo_api_protocol=http
is placed into the
appropriate sections of the cinder.conf
file.
Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.
Edit the cinder.conf
file to define a storage back-end entry for
each storage pool on the array that will be managed by OpenStack. Each
entry consists of a unique section name, surrounded by square brackets,
followed by options specified in key=value
format.
- The lenovo_backend_name value specifies the name of the storage pool on the array.
- The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
- Each back end also requires the san_ip, san_login, and san_password options for an array account with manage privileges; and the iSCSI IP addresses for the array if using the iSCSI transport protocol.
In the examples below, two back ends are defined, one for pool A and one
for pool B, and a common volume_backend_name is used so that a
single volume type definition can be used to allocate volumes from both
pools.
Example: iSCSI example back-end entries
[pool-a]
lenovo_backend_name = A
volume_backend_name = lenovo-array
volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
[pool-b]
lenovo_backend_name = B
volume_backend_name = lenovo-array
volume_driver = cinder.volume.drivers.lenovo.lenovo_iscsi.LenovoISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
lenovo_iscsi_ips = 10.2.3.4,10.2.3.5
Example: Fibre Channel example back-end entries
[pool-a]
lenovo_backend_name = A
volume_backend_name = lenovo-array
volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
[pool-b]
lenovo_backend_name = B
volume_backend_name = lenovo-array
volume_driver = cinder.volume.drivers.lenovo.lenovo_fc.LenovoFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
If HTTPS is not enabled in the array, include
lenovo_api_protocol = http
in each of the back-end definitions.
If HTTPS is enabled, you can enable certificate verification with the
option lenovo_verify_certificate=True
. You may also use the
lenovo_verify_certificate_path
parameter to specify the path to a
CA_BUNDLE file containing CAs other than those in the default list.
Modify the [DEFAULT]
section of the cinder.conf
file to add an
enabled_backends
parameter specifying the back-end entries you added,
and a default_volume_type
parameter specifying the name of a volume
type that you will create in the next step.
Example: [DEFAULT] section changes
[DEFAULT]
...
enabled_backends = pool-a,pool-b
default_volume_type = lenovo
...
Create a new volume type for each distinct volume_backend_name
value
that you added to the cinder.conf
file. The example below
assumes that the same volume_backend_name=lenovo-array
option was specified in all of the
entries, and specifies that the volume type lenovo
can be used to
allocate volumes from any of them.
Example: Creating a volume type
$ cinder type-create lenovo
$ cinder type-key lenovo set volume_backend_name=lenovo-array
After modifying the cinder.conf
file,
restart the cinder-volume
service.
The following table contains the configuration options that are specific to the Lenovo drivers.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
lenovo_api_protocol = https | (String) Lenovo api interface protocol. |
lenovo_backend_name = A | (String) Pool or Vdisk name to use for volume creation. |
lenovo_backend_type = virtual | (String) linear (for VDisk) or virtual (for Pool). |
lenovo_iscsi_ips = | (List) List of comma-separated target iSCSI IP addresses. |
lenovo_verify_certificate = False | (Boolean) Whether to verify Lenovo array SSL certificate. |
lenovo_verify_certificate_path = None | (String) Lenovo array SSL certificate path. |
The NetApp unified driver is a Block Storage driver that supports multiple storage families and protocols. A storage family corresponds to storage systems built on different NetApp technologies such as clustered Data ONTAP, Data ONTAP operating in 7-Mode, and E-Series. The storage protocol refers to the protocol used to initiate data storage and access operations on those storage systems like iSCSI and NFS. The NetApp unified driver can be configured to provision and manage OpenStack volumes on a given storage family using a specified storage protocol. Also, the NetApp unified driver supports over subscription or over provisioning when thin provisioned Block Storage volumes are in use on an E-Series backend. The OpenStack volumes can then be used for accessing and storing data using the storage protocol on the storage family system. The NetApp unified driver is an extensible interface that can support new storage families and protocols.
Note
With the Juno release of OpenStack, Block Storage has introduced the concept of storage pools, in which a single Block Storage back end may present one or more logical storage resource pools from which Block Storage will select a storage location when provisioning volumes.
In releases prior to Juno, the NetApp unified driver contained some scheduling logic that determined which NetApp storage container (namely, a FlexVol volume for Data ONTAP, or a dynamic disk pool for E-Series) that a new Block Storage volume would be placed into.
With the introduction of pools, all scheduling logic is performed completely within the Block Storage scheduler, as each NetApp storage container is directly exposed to the Block Storage scheduler as a storage pool. Previously, the NetApp unified driver presented an aggregated view to the scheduler and made a final placement decision as to which NetApp storage container the Block Storage volume would be provisioned into.
The NetApp clustered Data ONTAP storage family represents a configuration group which provides Compute instances access to clustered Data ONTAP storage systems. At present it can be configured in Block Storage to work with iSCSI and NFS storage protocols.
The NetApp iSCSI configuration for clustered Data ONTAP is an interface from OpenStack to clustered Data ONTAP storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN that can be accessed using the iSCSI protocol.
The iSCSI configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options
Configure the volume driver, storage family, and storage protocol to the
NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by
setting the volume_driver
, netapp_storage_family
and
netapp_storage_protocol
options in the cinder.conf
file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of
netapp_storage_protocol
with iscsi
.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
netapp_login = None | (String) Administrative user account name used to access the storage system or proxy server. |
netapp_lun_ostype = None | (String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. |
netapp_lun_space_reservation = enabled | (String) This option determines if storage space is reserved for LUN allocation. If enabled, LUNs are thick provisioned. If space reservation is disabled, storage space is allocated on demand. |
netapp_partner_backend_name = None | (String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None | (String) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+) | (String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_replication_aggregate_map = None | (Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... |
netapp_server_hostname = None | (String) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None | (Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_size_multiplier = 1.2 | (Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of “reserved_percentage” in the Mitaka release. |
netapp_snapmirror_quiesce_timeout = 3600 | (Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. |
netapp_storage_family = ontap_cluster | (String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None | (String) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http | (String) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vserver = None | (String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. |
Note
If you specify an account in the netapp_login option that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.
Note
The driver supports iSCSI CHAP uni-directional authentication. To enable it, set the use_chap_auth option to True.
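As a minimal sketch, CHAP could be enabled in the driver's back-end stanza of cinder.conf; the [myNetAppBackend] stanza name below is a placeholder, not a default:
[myNetAppBackend]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
# Enable iSCSI CHAP uni-directional authentication
use_chap_auth = True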
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
The NetApp NFS configuration for clustered Data ONTAP is an interface from OpenStack to a clustered Data ONTAP system for provisioning and managing OpenStack volumes on NFS exports provided by the clustered Data ONTAP system that are accessed using the NFS protocol.
The NFS configuration for clustered Data ONTAP is a direct interface from Block Storage to the clustered Data ONTAP instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
expiry_thres_minutes = 720 |
(Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
netapp_copyoffload_tool_path = None |
(String) This option specifies the path of the NetApp copy offload tool binary. Ensure that the binary has execute permissions set which allow the effective user of the cinder-volume process to execute the file. |
netapp_host_type = None |
(String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. |
netapp_login = None |
(String) Administrative user account name used to access the storage system or proxy server. |
netapp_lun_ostype = None |
(String) This option defines the type of operating system that will access a LUN exported from Data ONTAP; it is assigned to the LUN at the time it is created. |
netapp_partner_backend_name = None |
(String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None |
(String) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+) |
(String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_replication_aggregate_map = None |
(Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... |
netapp_server_hostname = None |
(String) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None |
(Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_snapmirror_quiesce_timeout = 3600 |
(Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. |
netapp_storage_family = ontap_cluster |
(String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None |
(String) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http |
(String) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vserver = None |
(String) This option specifies the virtual storage server (Vserver) name on the storage cluster on which provisioning of block storage volumes should occur. |
thres_avl_size_perc_start = 20 |
(Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
thres_avl_size_perc_stop = 60 |
(Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found here: Description of NFS storage configuration options.
Note
If you specify an account in the netapp_login option that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the Block Storage logs.
The Icehouse release of the NetApp unified driver added a feature that enables Image service images to be efficiently copied to a destination Block Storage volume. When the Block Storage and Image services are configured to use the NetApp NFS Copy Offload client, a controller-side copy is attempted before reverting to downloading the image from the Image service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage services, because the copy operation is performed completely within the storage cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
To use this feature, you must configure the Image service, as follows:
- Set the default_store configuration option to file.
- Set the filesystem_store_datadir configuration option to the path to the Image service NFS export.
- Set the show_image_direct_url configuration option to True.
- Set the show_multiple_locations configuration option to True.
- Set the filesystem_store_metadata_file configuration option to a metadata file. The metadata file should contain a JSON object that contains the correct information about the NFS export used by the Image service.
To use this feature, you must configure the Block Storage service, as follows:
- Set the netapp_copyoffload_tool_path configuration option to the path to the NetApp copy offload binary.
- Set the glance_api_version configuration option to 2.
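Taken together, the required settings might look like the following sketch. The file paths, and the placement of the Image service options into particular sections, are illustrative assumptions for a typical deployment, not defaults; section names vary by Image service release.
# In the Image service configuration (glance-api.conf):
[DEFAULT]
show_image_direct_url = True
show_multiple_locations = True
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json
# In the Block Storage configuration (cinder.conf):
[DEFAULT]
glance_api_version = 2
# Placeholder path to the downloaded copy offload binary
netapp_copyoffload_tool_path = /usr/local/bin/na_copyoffload_64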
Important
This feature requires that:
Tip
To download the NetApp copy offload binary to be utilized in conjunction with the netapp_copyoffload_tool_path configuration option, please visit the Utility Toolchest page at the NetApp Support portal (login is required).
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when the scheduler determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with a clustered Data ONTAP storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties. An example of this is when you configure QoS, mirroring, or compression for a storage back end.
Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example, back ends that have sufficient available space or that report matching extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the cinder type-key command.
Extra spec | Type | Description |
---|---|---|
netapp_raid_type |
String | Limit the candidate volume list based on one of the following raid types: raid4, raid_dp. |
netapp_disk_type |
String | Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD. |
netapp:qos_policy_group [1] |
String | Specify the name of a QoS policy group, which defines measurable Service Level Objectives, to apply to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object is defined within Data ONTAP before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume. |
netapp_mirrored |
Boolean | Limit the candidate volume list to only the ones that are mirrored on the storage controller. |
netapp_unmirrored [2] |
Boolean | Limit the candidate volume list to only the ones that are not mirrored on the storage controller. |
netapp_dedup |
Boolean | Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller. |
netapp_nodedup |
Boolean | Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller. |
netapp_compression |
Boolean | Limit the candidate volume list to only the ones that have compression enabled on the storage controller. |
netapp_nocompression |
Boolean | Limit the candidate volume list to only the ones that have compression disabled on the storage controller. |
netapp_thin_provisioned |
Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller. |
netapp_thick_provisioned |
Boolean | Limit the candidate volume list to only the ones that support thick provisioning on the storage controller. |
[1] | Please note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned. |
[2] | In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false. |
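For example, a volume type restricted to mirrored, compression-enabled back ends could be defined with the cinder type-key command as follows; the type name gold is an arbitrary example, not a convention:
$ cinder type-create gold
$ cinder type-key gold set netapp_mirrored=true netapp_compression=true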
The NetApp Data ONTAP operating in 7-Mode storage family represents a configuration group which provides Compute instances access to 7-Mode storage systems. At present it can be configured in Block Storage to work with iSCSI and NFS storage protocols.
The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to Data ONTAP operating in 7-Mode storage systems for provisioning and managing the SAN block storage entity, that is, a LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct interface from OpenStack to a Data ONTAP operating in 7-Mode storage system and does not require additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
netapp_login = None |
(String) Administrative user account name used to access the storage system or proxy server. |
netapp_partner_backend_name = None |
(String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None |
(String) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+) |
(String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_replication_aggregate_map = None |
(Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... |
netapp_server_hostname = None |
(String) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None |
(Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_size_multiplier = 1.2 |
(Floating point) The quantity to be multiplied by the requested volume size to ensure enough space is available on the virtual storage server (Vserver) to fulfill the volume creation request. Note: this option is deprecated and will be removed in favor of “reserved_percentage” in the Mitaka release. |
netapp_snapmirror_quiesce_timeout = 3600 |
(Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. |
netapp_storage_family = ontap_cluster |
(String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None |
(String) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http |
(String) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vfiler = None |
(String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. |
Note
The driver supports iSCSI CHAP uni-directional authentication. To enable it, set the use_chap_auth option to True.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an interface from OpenStack to a Data ONTAP operating in 7-Mode storage system for provisioning and managing OpenStack volumes on NFS exports provided by that storage system, which are then accessed using the NFS protocol.
The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from Block Storage to the Data ONTAP operating in 7-Mode instance and as such does not require any additional management software to achieve the desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
expiry_thres_minutes = 720 |
(Integer) This option specifies the threshold for last access time for images in the NFS image cache. When a cache cleaning cycle begins, images in the cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share. |
netapp_login = None |
(String) Administrative user account name used to access the storage system or proxy server. |
netapp_partner_backend_name = None |
(String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None |
(String) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+) |
(String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_replication_aggregate_map = None |
(Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... |
netapp_server_hostname = None |
(String) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None |
(Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_snapmirror_quiesce_timeout = 3600 |
(Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. |
netapp_storage_family = ontap_cluster |
(String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_storage_protocol = None |
(String) The storage protocol to be used on the data path with the storage system. |
netapp_transport_type = http |
(String) The transport protocol used when communicating with the storage system or proxy server. |
netapp_vfiler = None |
(String) The vFiler unit on which provisioning of block storage volumes will be done. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode. Only use this option when utilizing the MultiStore feature on the NetApp storage system. |
thres_avl_size_perc_start = 20 |
(Integer) If the percentage of available space for an NFS share has dropped below the value specified by this option, the NFS image cache will be cleaned. |
thres_avl_size_perc_stop = 60 |
(Integer) When the percentage of available space on an NFS share has reached the percentage specified by this option, the driver will stop clearing files from the NFS image cache that have not been accessed in the last M minutes, where M is the value of the expiry_thres_minutes configuration option. |
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see Description of NFS storage configuration options.
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
The NetApp E-Series storage family represents a configuration group which provides OpenStack compute instances access to E-Series storage systems. At present it can be configured in Block Storage to work with the iSCSI storage protocol.
The NetApp iSCSI configuration for E-Series is an interface from OpenStack to E-Series storage systems. It provisions and manages the SAN block storage entity, which is a NetApp LUN which can be accessed using the iSCSI protocol.
The iSCSI configuration for E-Series is an interface from Block Storage to the E-Series proxy instance and as such requires the deployment of the proxy instance in order to achieve the desired functionality. The driver uses REST APIs to interact with the E-Series proxy instance, which in turn interacts directly with the E-Series controllers.
The use of multipath and DM-MP is required when using the Block Storage driver for E-Series. For Block Storage and OpenStack Compute to take advantage of multiple paths, the following configuration options must be correctly configured:
- The use_multipath_for_image_xfer option should be set to True in the cinder.conf file within the driver-specific stanza (for example, [myDriver]).
- The iscsi_use_multipath option should be set to True in the nova.conf file within the [libvirt] stanza.
Configuration options
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, E-Series, and iSCSI respectively by setting the volume_driver, netapp_storage_family, and netapp_storage_protocol options in the cinder.conf file as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True
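The cinder.conf example above covers the Block Storage side of the multipath configuration; a corresponding sketch of the Compute side setting in nova.conf would be:
[libvirt]
iscsi_use_multipath = True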
Note
To use the E-Series driver, you must override the default value of netapp_storage_family with eseries. To use the iSCSI protocol, you must override the default value of netapp_storage_protocol with iscsi.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
netapp_controller_ips = None |
(String) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning. |
netapp_enable_multiattach = False |
(Boolean) This option specifies whether the driver should allow operations that require multiple attachments to a volume. An example would be live migration of servers that have volumes attached. When enabled, this backend is limited to 256 total volumes in order to guarantee volumes can be accessed by more than one host. |
netapp_host_type = None |
(String) This option defines the type of operating system for all initiators that can access a LUN. This information is used when mapping LUNs to individual hosts or groups of hosts. |
netapp_login = None |
(String) Administrative user account name used to access the storage system or proxy server. |
netapp_partner_backend_name = None |
(String) The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner. This option is only used by the driver when connecting to an instance with a storage family of Data ONTAP operating in 7-Mode, and it is required if the storage protocol selected is FC. |
netapp_password = None |
(String) Password for the administrative user account specified in the netapp_login option. |
netapp_pool_name_search_pattern = (.+) |
(String) This option is used to restrict provisioning to the specified pools. Specify the value of this option to be a regular expression which will be applied to the names of objects from the storage backend which represent pools in Cinder. This option is only utilized when the storage protocol is configured to use iSCSI or FC. |
netapp_replication_aggregate_map = None |
(Unknown) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... |
netapp_sa_password = None |
(String) Password for the NetApp E-Series storage array. |
netapp_server_hostname = None |
(String) The hostname (or IP address) for the storage system or proxy server. |
netapp_server_port = None |
(Integer) The TCP port to use for communication with the storage system or proxy server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. |
netapp_snapmirror_quiesce_timeout = 3600 |
(Integer) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. |
netapp_storage_family = ontap_cluster |
(String) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series. |
netapp_transport_type = http |
(String) The transport protocol used when communicating with the storage system or proxy server. |
netapp_webservice_path = /devmgr/v2 |
(String) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application. |
Tip
For more information on these options and other deployment and operational scenarios, visit the NetApp OpenStack Deployment and Operations Guide.
Extra specs enable vendors to specify extra filter criteria. The Block Storage scheduler uses the specs when the scheduler determines which volume node should fulfill a volume provisioning request. When you use the NetApp unified driver with an E-Series storage system, you can leverage extra specs with Block Storage volume types to ensure that Block Storage volumes are created on storage back ends that have certain properties. An example of this is when you configure thin provisioning for a storage back end.
Extra specs are associated with Block Storage volume types. When users request volumes of a particular volume type, the volumes are created on storage back ends that meet the list of requirements, for example, back ends that have sufficient available space or that report matching extra specs. Use the specs in the following table to configure volumes. Define Block Storage volume types by using the cinder type-key command.
Extra spec | Type | Description |
---|---|---|
netapp_thin_provisioned |
Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller. |
NetApp introduced a new unified block storage driver in Havana for configuring different storage families and storage protocols, which requires defining an upgrade path for NetApp drivers that existed in releases prior to Havana. This section covers the upgrade configuration for NetApp drivers to the new unified configuration and lists the deprecated NetApp drivers.
This section describes how to update Block Storage configuration from a pre-Havana release to the unified driver format.
NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier):
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
NetApp unified driver configuration:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
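The four migrations above follow one pattern: the deprecated direct driver class is replaced by the common unified driver plus a storage family and protocol pair. As an illustrative sketch (not part of the driver code; the helper name is an assumption), the mapping can be expressed as a lookup table:

```python
# Illustrative mapping of deprecated NetApp direct driver classes (Grizzly and
# earlier) to the unified driver options introduced in Havana. The unified
# driver itself is always cinder.volume.drivers.netapp.common.NetAppDriver.
DEPRECATED_TO_UNIFIED = {
    "cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver":
        {"netapp_storage_family": "ontap_cluster", "netapp_storage_protocol": "iscsi"},
    "cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver":
        {"netapp_storage_family": "ontap_cluster", "netapp_storage_protocol": "nfs"},
    "cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver":
        {"netapp_storage_family": "ontap_7mode", "netapp_storage_protocol": "iscsi"},
    "cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver":
        {"netapp_storage_family": "ontap_7mode", "netapp_storage_protocol": "nfs"},
}

def unified_config(old_driver_class):
    """Return the replacement cinder.conf lines for a deprecated driver."""
    opts = DEPRECATED_TO_UNIFIED[old_driver_class]
    lines = ["volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver"]
    lines += ["%s = %s" % (k, v) for k, v in sorted(opts.items())]
    return lines
```

Running the helper against one of the deprecated class paths emits exactly the three-line replacement shown in the examples above.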
This section lists the NetApp drivers in earlier releases that are deprecated in Havana.
NetApp iSCSI driver for clustered Data ONTAP:
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppCmodeISCSIDriver
NetApp NFS driver for clustered Data ONTAP:
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppCmodeNfsDriver
NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller:
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller:
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
Note
For support information on deprecated NetApp drivers in the Havana release, visit the NetApp OpenStack Deployment and Operations Guide.
Nimble Storage fully integrates with the OpenStack platform through the Nimble Cinder driver, allowing a host to configure and manage Nimble Storage array features through Block Storage interfaces.
Support for the Liberty release is available from Nimble OS 2.3.8 or later.
Note
The Nimble Storage implementation uses iSCSI only. Fibre Channel is not supported.
Update the /etc/cinder/cinder.conf file with the following configuration. In case of a basic (single back-end) configuration, add the parameters within the [default] section as follows.
[default]
san_ip = NIMBLE_MGMT_IP
san_login = NIMBLE_USER
san_password = NIMBLE_PASSWORD
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
In case of a multiple back-end configuration, for example, a configuration which supports multiple Nimble Storage arrays or a single Nimble Storage array together with arrays from other vendors, use the following parameters.
[default]
enabled_backends = Nimble-Cinder
[Nimble-Cinder]
san_ip = NIMBLE_MGMT_IP
san_login = NIMBLE_USER
san_password = NIMBLE_PASSWORD
volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
volume_backend_name = NIMBLE_BACKEND_NAME
In case of a multiple back-end configuration, create a Nimble Storage volume type and associate it with a back-end name as follows.
Note
Single back-end configuration users do not need to create the volume type.
$ cinder type-create NIMBLE_VOLUME_TYPE
$ cinder type-key NIMBLE_VOLUME_TYPE set volume_backend_name=NIMBLE_BACKEND_NAME
This section explains the variables used above:

NIMBLE_MGMT_IP: Management IP address of the Nimble Storage array.
NIMBLE_USER and NIMBLE_PASSWORD: Credentials of an account with power user (admin) privilege if RBAC is used.
NIMBLE_BACKEND_NAME: A volume back-end name which is specified in the cinder.conf file. This is also used while assigning a back-end name to the Nimble volume type.
NIMBLE_VOLUME_TYPE: The Nimble volume type which is created from the CLI and associated with NIMBLE_BACKEND_NAME.
Note
Restart the cinder-api, cinder-scheduler, and cinder-volume services after updating the cinder.conf file.
The Nimble volume driver also supports the following extra spec options:
These extra-specs can be enabled by using the following command:
$ cinder type-key VOLUME_TYPE set KEY=VALUE
VOLUME_TYPE is the Nimble volume type, and KEY and VALUE are the extra spec options mentioned above.
NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) platform delivering unified file (NFS and SMB) and block (FC and iSCSI) storage services. It runs on industry standard hardware, scales from tens of terabytes to petabyte configurations, and includes all data management functionality by default.
For NexentaStor 4.x user documentation, visit https://nexenta.com/products/downloads/nexentastor.
The Nexenta iSCSI driver allows you to use a NexentaStor appliance to store Compute volumes. Every Compute volume is represented by a single zvol in a predefined Nexenta namespace. The Nexenta iSCSI volume driver should work with all versions of NexentaStor.
The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A volume and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release-specific NexentaStor documentation.
The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple backend volume drivers.
You must configure these items for each NexentaStor appliance that the iSCSI volume driver controls:
Make the following changes in the volume node's /etc/cinder/cinder.conf file.
# Enable Nexenta iSCSI driver
volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
# IP address of NexentaStor host (string value)
nexenta_host=HOST-IP
# Username for NexentaStor REST (string value)
nexenta_user=USERNAME
# Port for Rest API (integer value)
nexenta_rest_port=8457
# Password for NexentaStor REST (string value)
nexenta_password=PASSWORD
# Volume on NexentaStor appliance (string value)
nexenta_volume=volume_name
Note
nexenta_volume represents a zpool, which is called a volume on the NS appliance. It must be pre-created before enabling the driver.
Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.

The Nexenta NFS driver allows you to use a NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory.
While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that is already deployed on NexentaStor appliances.
The Nexenta NFS volume driver should work with all versions of NexentaStor. The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single-parent file system must be created for all virtual disk directories supported for OpenStack. This directory must be created and exported on each NexentaStor appliance. This should be done as specified in the release- specific NexentaStor documentation.
You must configure these items for each NexentaStor appliance that the NFS volume driver controls:
Make the following changes in the volume node's /etc/cinder/cinder.conf file.
# Enable Nexenta NFS driver
volume_driver=cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
# Path to shares config file
nexenta_shares_config=/home/ubuntu/shares.cfg
Note
Add your list of Nexenta NFS servers to the file you specified with the nexenta_shares_config option. For example, this is how the file should look:
192.168.1.200:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.200:8457
192.168.1.201:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.201:8457
192.168.1.202:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.202:8457
Each line in this file represents an NFS share. The first part of the line is the NFS share URL and the second part is the connection URL to the NexentaStor appliance.
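The two-part line format above can be split mechanically; a minimal parsing sketch (the helper name is an illustration, not part of the Cinder driver):

```python
def parse_nexenta_shares(text):
    """Parse a nexenta_shares_config file into (nfs_share, rest_url) pairs.

    Each non-empty line holds the NFS share URL followed by the REST
    connection URL to the NexentaStor appliance, separated by whitespace.
    """
    shares = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        nfs_share, rest_url = line.split(None, 1)
        shares.append((nfs_share, rest_url.strip()))
    return shares

# Lines copied from the example file above.
example = """\
192.168.1.200:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.200:8457
192.168.1.201:/volumes/VOLUME_NAME/NFS_SHARE http://USER:PASSWORD@192.168.1.201:8457
"""
```

Such a check can catch malformed lines (for example, a missing connection URL) before restarting the cinder-volume service.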
Nexenta Driver supports these options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nexenta_blocksize = 4096 |
(Integer) Block size for datasets |
nexenta_chunksize = 32768 |
(Integer) NexentaEdge iSCSI LUN object chunk size |
nexenta_client_address = |
(String) NexentaEdge iSCSI Gateway client address for non-VIP service |
nexenta_dataset_compression = on |
(String) Compression value for new ZFS folders. |
nexenta_dataset_dedup = off |
(String) Deduplication value for new ZFS folders. |
nexenta_dataset_description = |
(String) Human-readable description for the folder. |
nexenta_host = |
(String) IP address of Nexenta SA |
nexenta_iscsi_target_portal_port = 3260 |
(Integer) Nexenta target portal port |
nexenta_mount_point_base = $state_path/mnt |
(String) Base directory that contains NFS share mount points |
nexenta_nbd_symlinks_dir = /dev/disk/by-path |
(String) NexentaEdge logical path of directory to store symbolic links to NBDs |
nexenta_nms_cache_volroot = True |
(Boolean) If set True cache NexentaStor appliance volroot option value. |
nexenta_password = nexenta |
(String) Password to connect to Nexenta SA |
nexenta_rest_port = 8080 |
(Integer) HTTP port to connect to Nexenta REST API server |
nexenta_rest_protocol = auto |
(String) Use http or https for REST connection (default auto) |
nexenta_rrmgr_compression = 0 |
(Integer) Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression. |
nexenta_rrmgr_connections = 2 |
(Integer) Number of TCP connections. |
nexenta_rrmgr_tcp_buf_size = 4096 |
(Integer) TCP Buffer size in KiloBytes. |
nexenta_shares_config = /etc/cinder/nfs_shares |
(String) File with the list of available nfs shares |
nexenta_sparse = False |
(Boolean) Enables or disables the creation of sparse datasets |
nexenta_sparsed_volumes = True |
(Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time. |
nexenta_target_group_prefix = cinder/ |
(String) Prefix for iSCSI target groups on SA |
nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder- |
(String) IQN prefix for iSCSI targets |
nexenta_user = admin |
(String) User name to connect to Nexenta SA |
nexenta_volume = cinder |
(String) SA Pool that holds all volumes |
NexentaStor is an Open Source-driven Software-Defined Storage (OpenSDS) platform delivering unified file (NFS and SMB) and block (FC and iSCSI) storage services. NexentaStor runs on industry standard hardware, scales from tens of terabytes to petabyte configurations, and includes all data management functionality by default.
For NexentaStor user documentation, visit: http://docs.nexenta.com/.
The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A pool and an enclosing namespace must be created for all iSCSI volumes to be accessed through the volume driver. This should be done as specified in the release-specific NexentaStor documentation.
The NexentaStor Appliance iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers.
You must configure these items for each NexentaStor appliance that the iSCSI volume driver controls:
Make the following changes in the volume node's /etc/cinder/cinder.conf file.
# Enable Nexenta iSCSI driver
volume_driver=cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
# IP address of NexentaStor host (string value)
nexenta_host=HOST-IP
# Port for Rest API (integer value)
nexenta_rest_port=8080
# Username for NexentaStor Rest (string value)
nexenta_user=USERNAME
# Password for NexentaStor Rest (string value)
nexenta_password=PASSWORD
# Pool on NexentaStor appliance (string value)
nexenta_volume=volume_name
# Name of a parent Volume group where cinder created zvols will reside (string value)
nexenta_volume_group = iscsi
Note
nexenta_volume represents a zpool, which is called a pool on the NS 5.x appliance. It must be pre-created before enabling the driver.
The volume group does not need to be pre-created; the driver will create it if it does not exist.
Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
The Nexenta NFS driver allows you to use NexentaStor appliance to store Compute volumes via NFS. Every Compute volume is represented by a single NFS file within a shared directory.
While the NFS protocols standardize file access for users, they do not standardize administrative actions such as taking snapshots or replicating file systems. The OpenStack Volume Drivers bring a common interface to these operations. The Nexenta NFS driver implements these standard actions using the ZFS management plane that is already deployed on NexentaStor appliances.
The NexentaStor appliance must be installed and configured according to the relevant Nexenta documentation. A single-parent file system must be created for all virtual disk directories supported for OpenStack. Create and export the directory on each NexentaStor appliance.
You must configure these items for each NexentaStor appliance that the NFS volume driver controls:
Make the following changes in the volume node's /etc/cinder/cinder.conf file.
# Enable Nexenta NFS driver
volume_driver=cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver
# IP address or Hostname of NexentaStor host (string value)
nas_host=HOST-IP
# Port for Rest API (integer value)
nexenta_rest_port=8080
# Path to parent filesystem (string value)
nas_share_path=POOL/FILESYSTEM
# Specify NFS version
nas_mount_options=vers=4
Create filesystem on appliance and share via NFS. For example:
"securityContexts": [
{"readWriteList": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
"root": [{"allow": true, "etype": "fqnip", "entity": "1.1.1.1"}],
"securityModes": ["sys"]}]
Create ACL for the filesystem. For example:
{"type": "allow",
"principal": "everyone@",
"permissions": ["list_directory","read_data","add_file","write_data",
"add_subdirectory","append_data","read_xattr","write_xattr","execute",
"delete_child","read_attributes","write_attributes","delete","read_acl",
"write_acl","write_owner","synchronize"],
"flags": ["file_inherit","dir_inherit"]}
Nexenta Driver supports these options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nexenta_dataset_compression = on |
(String) Compression value for new ZFS folders. |
nexenta_dataset_dedup = off |
(String) Deduplication value for new ZFS folders. |
nexenta_dataset_description = |
(String) Human-readable description for the folder. |
nexenta_host = |
(String) IP address of Nexenta SA |
nexenta_iscsi_target_portal_port = 3260 |
(Integer) Nexenta target portal port |
nexenta_mount_point_base = $state_path/mnt |
(String) Base directory that contains NFS share mount points |
nexenta_ns5_blocksize = 32 |
(Integer) Block size for datasets |
nexenta_rest_port = 8080 |
(Integer) HTTP port to connect to Nexenta REST API server |
nexenta_rest_protocol = auto |
(String) Use http or https for REST connection (default auto) |
nexenta_sparse = False |
(Boolean) Enables or disables the creation of sparse datasets |
nexenta_sparsed_volumes = True |
(Boolean) Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time. |
nexenta_user = admin |
(String) User name to connect to Nexenta SA |
nexenta_volume = cinder |
(String) SA Pool that holds all volumes |
nexenta_volume_group = iscsi |
(String) Volume group for ns5 |
NexentaEdge is designed from the ground-up to deliver high performance Block and Object storage services and limitless scalability to next generation OpenStack clouds, petabyte scale active archives and Big Data applications. NexentaEdge runs on shared nothing clusters of industry standard Linux servers, and builds on Nexenta IP and patent pending Cloud Copy On Write (CCOW) technology to break new ground in terms of reliability, functionality and cost efficiency.
For NexentaEdge user documentation, visit http://docs.nexenta.com.
The NexentaEdge cluster must be installed and configured according to the relevant Nexenta documentation. A cluster, tenant, and bucket must be pre-created, as well as an iSCSI service on the NexentaEdge gateway node.
The NexentaEdge iSCSI driver is selected using the normal procedures for one or multiple back-end volume drivers.
You must configure these items for each NexentaEdge cluster that the iSCSI volume driver controls:
Make the following changes in the volume node's /etc/cinder/cinder.conf file.
# Enable Nexenta iSCSI driver
volume_driver = cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
# Specify the ip address for Rest API (string value)
nexenta_rest_address = MANAGEMENT-NODE-IP
# Port for Rest API (integer value)
nexenta_rest_port=8080
# Protocol used for Rest calls (string value, default=http)
nexenta_rest_protocol = http
# Username for NexentaEdge Rest (string value)
nexenta_user=USERNAME
# Password for NexentaEdge Rest (string value)
nexenta_password=PASSWORD
# Path to bucket containing iSCSI LUNs (string value)
nexenta_lun_container = CLUSTER/TENANT/BUCKET
# Name of pre-created iSCSI service (string value)
nexenta_iscsi_service = SERVICE-NAME
# IP address of the gateway node attached to iSCSI service above or
# virtual IP address if an iSCSI Storage Service Group is configured in
# HA mode (string value)
nexenta_client_address = GATEWAY-NODE-IP
Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
As an alternative to using the iSCSI, Amazon S3, or OpenStack Swift protocols, NexentaEdge can provide access to cluster storage via a Network Block Device (NBD) interface.
The NexentaEdge cluster must be installed and configured according to the relevant Nexenta documentation. A cluster, tenant, and bucket must be pre-created. The driver requires the NexentaEdge service to run on the hypervisor (Nova) node. The node must sit on the Replicast network and run only the NexentaEdge service; it does not require physical disks.
You must configure these items for each NexentaEdge cluster that the NBD volume driver controls:
Make the following changes in the data node's /etc/cinder/cinder.conf file.
# Enable Nexenta NBD driver
volume_driver = cinder.volume.drivers.nexenta.nexentaedge.nbd.NexentaEdgeNBDDriver
# Specify the ip address for Rest API (string value)
nexenta_rest_address = MANAGEMENT-NODE-IP
# Port for Rest API (integer value)
nexenta_rest_port = 8080
# Protocol used for Rest calls (string value, default=http)
nexenta_rest_protocol = http
# Username for NexentaEdge Rest (string value)
nexenta_rest_user = USERNAME
# Password for NexentaEdge Rest (string value)
nexenta_rest_password = PASSWORD
# Path to bucket containing iSCSI LUNs (string value)
nexenta_lun_container = CLUSTER/TENANT/BUCKET
# Path to directory to store symbolic links to block devices
# (string value, default=/dev/disk/by-path)
nexenta_nbd_symlinks_dir = /PATH/TO/SYMBOLIC/LINKS
Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
Nexenta Driver supports these options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nexenta_blocksize = 4096 |
(Integer) Block size for datasets |
nexenta_chunksize = 32768 |
(Integer) NexentaEdge iSCSI LUN object chunk size |
nexenta_client_address = |
(String) NexentaEdge iSCSI Gateway client address for non-VIP service |
nexenta_iscsi_service = |
(String) NexentaEdge iSCSI service name |
nexenta_iscsi_target_portal_port = 3260 |
(Integer) Nexenta target portal port |
nexenta_lun_container = |
(String) NexentaEdge logical path of bucket for LUNs |
nexenta_rest_address = |
(String) IP address of NexentaEdge management REST API endpoint |
nexenta_rest_password = nexenta |
(String) Password to connect to NexentaEdge |
nexenta_rest_port = 8080 |
(Integer) HTTP port to connect to Nexenta REST API server |
nexenta_rest_protocol = auto |
(String) Use http or https for REST connection (default auto) |
nexenta_rest_user = admin |
(String) User name to connect to NexentaEdge |
ProphetStor Fibre Channel and iSCSI drivers add support for ProphetStor Flexvisor through the Block Storage service. ProphetStor Flexvisor enables commodity x86 hardware as software-defined storage, leveraging well-proven ZFS for disk management to provide enterprise-grade storage services such as snapshots, data protection with different RAID levels, replication, and deduplication.
The DPLFCDriver and DPLISCSIDriver drivers run volume operations by communicating with the ProphetStor storage system over HTTPS. Both drivers are installed with the OpenStack software.
Query the storage pool id to configure dpl_pool in the cinder.conf file.
Log on to the storage system with administrator access.
$ ssh root@STORAGE_IP_ADDRESS
View the current usable pool id.
$ flvcli show pool list
- d5bd40b58ea84e9da09dcf25a01fdc07 : default_pool_dc07
Use d5bd40b58ea84e9da09dcf25a01fdc07 to configure dpl_pool in the /etc/cinder/cinder.conf file.
Note
Other management commands can be referenced with the help command flvcli -h.
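The pool id can also be pulled out of the flvcli output programmatically; a hypothetical helper (the output format is taken from the example above, and the function name is an illustration, not a ProphetStor tool):

```python
def parse_pool_list(output):
    """Parse `flvcli show pool list` output into {pool_id: pool_name}.

    Each pool line looks like:
      - d5bd40b58ea84e9da09dcf25a01fdc07 : default_pool_dc07
    """
    pools = {}
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("-") and ":" in line:
            # Drop the leading "- " marker, then split on the first colon.
            pool_id, _, name = line.lstrip("- ").partition(":")
            pools[pool_id.strip()] = name.strip()
    return pools
```

The id returned for the pool of interest is what goes into the dpl_pool option below.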
Make the following changes in the volume node's /etc/cinder/cinder.conf file.
# IP address of SAN controller (string value)
san_ip=STORAGE IP ADDRESS
# Username for SAN controller (string value)
san_login=USERNAME
# Password for SAN controller (string value)
san_password=PASSWORD
# Use thin provisioning for SAN volumes? (boolean value)
san_thin_provision=true
# The port that the iSCSI daemon is listening on. (integer value)
iscsi_port=3260
# DPL pool uuid in which DPL volumes are stored. (string value)
dpl_pool=d5bd40b58ea84e9da09dcf25a01fdc07
# DPL port number. (integer value)
dpl_port=8357
# Uncomment one of the next two options to enable Fibre Channel or iSCSI
# FIBRE CHANNEL(uncomment the next line to enable the FC driver)
#volume_driver=cinder.volume.drivers.prophetstor.dpl_fc.DPLFCDriver
# iSCSI (uncomment the next line to enable the iSCSI driver)
#volume_driver=cinder.volume.drivers.prophetstor.dpl_iscsi.DPLISCSIDriver
Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
The ProphetStor Fibre Channel or iSCSI drivers are now enabled on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
The following table contains the options supported by the ProphetStor storage driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dpl_pool = |
(String) DPL pool uuid in which DPL volumes are stored. |
dpl_port = 8357 |
(Port number) DPL port number. |
iscsi_port = 3260 |
(Port number) The port that the iSCSI daemon is listening on |
san_ip = |
(String) IP address of SAN controller |
san_login = admin |
(String) Username for SAN controller |
san_password = |
(String) Password for SAN controller |
san_thin_provision = True |
(Boolean) Use thin provisioning for SAN volumes? |
The Pure Storage FlashArray volume drivers for OpenStack Block Storage interact with configured Pure Storage arrays and support various operations.
Support for iSCSI storage protocol is available with the PureISCSIDriver Volume Driver class, and Fibre Channel with PureFCDriver.
All drivers are compatible with Purity FlashArrays that support the REST API version 1.2, 1.3, or 1.4 (Purity 4.0.0 and newer).
If you do not set up the nodes hosting instances to use multipathing, all network connectivity will use a single physical port on the array. In addition to significantly limiting the available bandwidth, this means you do not have the high-availability and non-disruptive upgrade benefits provided by FlashArray. Multipathing must be used to take advantage of these benefits.
You need to configure both your Purity array and your OpenStack cluster.
Note
These instructions assume that the cinder-api and cinder-scheduler services are installed and configured in your OpenStack cluster.
In these steps, you will edit the cinder.conf file to configure the OpenStack Block Storage service to enable multipathing and to use the Pure Storage FlashArray as back-end storage.
Install the Pure Storage PyPI module. The Pure Storage driver requires the Pure Storage Python SDK version 1.4.0 or later from PyPI.
$ pip install purestorage
Retrieve an API token from Purity. The OpenStack Block Storage service configuration requires an API token from Purity. Actions performed by the volume driver use this token for authorization. Also, Purity logs the volume driver’s actions as being performed by the user who owns this API token.
If you created a Purity user account that is dedicated to managing your OpenStack Block Storage volumes, copy the API token from that user account.
Use the appropriate create or list command below to display and copy the Purity API token:
To create a new API token:
$ pureadmin create --api-token USER
The following is an example output:
$ pureadmin create --api-token pureuser
Name API Token Created
pureuser 902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 2014-08-04 14:50:30
To list an existing API token:
$ pureadmin list --api-token --expose USER
The following is an example output:
$ pureadmin list --api-token --expose pureuser
Name API Token Created
pureuser 902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 2014-08-04 14:50:30
Copy the retrieved API token (902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9 in the examples above) to use in the next step.
Edit the OpenStack Block Storage service configuration file.
The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Pure Storage array:
[DEFAULT]
enabled_backends = puredriver-1
default_volume_type = puredriver-1
[puredriver-1]
volume_backend_name = puredriver-1
volume_driver = PURE_VOLUME_DRIVER
san_ip = IP_PURE_MGMT
pure_api_token = PURE_API_TOKEN
use_multipath_for_image_xfer = True
Replace the following variables accordingly:
Use either cinder.volume.drivers.pure.PureISCSIDriver for iSCSI or cinder.volume.drivers.pure.PureFCDriver for Fibre Channel connectivity.
The IP address of the Pure Storage array’s management interface or a domain name that resolves to that IP address.
The Purity Authorization token that the volume driver uses to perform volume management on the Pure Storage array.
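The sample above is plain INI, so it can be sanity-checked with Python's configparser before the services are restarted; a sketch with placeholder values standing in for the driver class, array address, and token (this is a local check, not a Pure Storage tool):

```python
import configparser

# Placeholder values stand in for PURE_VOLUME_DRIVER, IP_PURE_MGMT, and
# PURE_API_TOKEN from the sample configuration above.
sample = """
[DEFAULT]
enabled_backends = puredriver-1
default_volume_type = puredriver-1

[puredriver-1]
volume_backend_name = puredriver-1
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = 192.0.2.10
pure_api_token = 902fdca3-7e3f-d2e4-d6a6-24c2285fe1d9
use_multipath_for_image_xfer = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# The enabled back end must have a matching section with the key settings.
backend = cfg["DEFAULT"]["enabled_backends"]
```

Parsing fails loudly on a malformed file, which is cheaper to discover here than in the cinder-volume log.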
Note
The volume driver automatically creates Purity host objects for initiators as needed. If CHAP authentication is enabled via the use_chap_auth setting, you must ensure there are no manually created host objects with IQNs that will be used by the OpenStack Block Storage service. The driver will only modify credentials on hosts that it manages.
Note
If using the PureFCDriver, it is recommended to use the OpenStack Block Storage Fibre Channel Zone Manager.
To enable auto-eradication of deleted volumes, snapshots, and consistency groups on deletion, modify the following option in the cinder.conf file:
pure_eradicate_on_delete = true
By default, auto-eradication is disabled and all deleted volumes, snapshots, and consistency groups are retained on the Pure Storage array in a recoverable state for 24 hours from time of deletion.
To enable SSL certificate validation, modify the following option in the cinder.conf file:
driver_ssl_cert_verify = true
By default, SSL certificate validation is disabled.
To specify a non-default path to the CA_Bundle file or directory with certificates of trusted CAs:
driver_ssl_cert_path = Certificate path
Note
This requires the use of Pure Storage Python SDK > 1.4.0.
Add the following to the back-end specification to specify another Flash Array to replicate to:
[puredriver-1]
replication_device = backend_id:PURE2_NAME,san_ip:IP_PURE2_MGMT,api_token:PURE2_API_TOKEN
Where PURE2_NAME is the name of the remote Pure Storage system, IP_PURE2_MGMT is the management IP address of the remote array, and PURE2_API_TOKEN is the Purity Authorization token of the remote array.

Note that more than one replication_device line can be added to allow for multi-target device replication.
A volume is only replicated if the volume is of a volume type that has the extra spec replication_enabled set to <is> True.
To create a volume type that specifies replication to remote back ends:
$ cinder type-create "ReplicationType"
$ cinder type-key "ReplicationType" set replication_enabled='<is> True'
The following table contains the optional configuration parameters available for replication configuration with the Pure Storage array.
Option | Description | Default |
---|---|---|
pure_replica_interval_default |
Snapshot replication interval in seconds. | 900 |
pure_replica_retention_short_term_default |
Retain all snapshots on target for this time (in seconds). | 14400 |
pure_replica_retention_long_term_per_day_default |
Retain how many snapshots for each day. | 3 |
pure_replica_retention_long_term_default |
Retain snapshots per day on target for this time (in days). | 7 |
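With the defaults in the table above, a rough count of the snapshots kept on the replication target can be worked out; this is back-of-envelope arithmetic for intuition, not behavior guaranteed by the driver:

```python
# Defaults from the replication options table above.
interval_s = 900        # pure_replica_interval_default
short_term_s = 14400    # pure_replica_retention_short_term_default
per_day = 3             # pure_replica_retention_long_term_per_day_default
long_term_days = 7      # pure_replica_retention_long_term_default

# Every snapshot from the last 4 hours (14400 s) is retained, one per
# 900 s replication interval ...
short_term_snaps = short_term_s // interval_s

# ... plus 3 snapshots per day are kept for 7 days.
long_term_snaps = per_day * long_term_days
```

So the defaults work out to roughly 16 short-term plus 21 long-term snapshots per protected volume on the target.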
Note
replication-failover is only supported from the primary array to any of the multiple secondary arrays, but subsequent replication-failover is only supported back to the original primary array.
To enable this feature, where the array oversubscription ratio is calculated as (total provisioned/actual used), add the following option in the cinder.conf file:
[puredriver-1]
pure_automatic_max_oversubscription_ratio = True
By default, this is disabled and the hard-coded configuration option max_over_subscription_ratio is honored.
Note
Arrays with very good data reduction rates (compression/data deduplication/thin provisioning) can get very large oversubscription rates applied.
A large number of metrics are reported by the volume driver, which can be useful in implementing more control over volume placement in multi-back-end environments using the driver filter and weigher methods.
Metrics reported include, but are not limited to:
total_capacity_gb
free_capacity_gb
provisioned_capacity
total_volumes
total_snapshots
total_hosts
total_pgroups
writes_per_sec
reads_per_sec
input_per_sec
output_per_sec
usec_per_read_op
usec_per_write_op
queue_depth
Note
All total metrics include non-OpenStack managed objects on the array.
In conjunction with QOS extra-specs, you can create very complex algorithms to manage volume placement. More detailed documentation on this is available in other external documentation.
The Quobyte volume driver enables storing Block Storage service volumes on a Quobyte storage back end. Block Storage service back ends are mapped to Quobyte volumes and individual Block Storage service volumes are stored as files on a Quobyte volume. Selection of the appropriate Quobyte volume is done by the aforementioned back end configuration that specifies the Quobyte volume explicitly.
Note
Note the dual use of the term volume in the context of Block Storage service volumes and in the context of Quobyte volumes.
For more information see the Quobyte support webpage.
The Quobyte volume driver supports the following volume operations:
Note
When running VM instances off Quobyte volumes, ensure that the Quobyte Compute service driver has been configured in your OpenStack cloud.
To activate the Quobyte volume driver, configure the corresponding volume_driver parameter:
volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
The following table contains the configuration options supported by the Quobyte driver:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
quobyte_client_cfg = None | (String) Path to a Quobyte Client configuration file. |
quobyte_mount_point_base = $state_path/mnt | (String) Base dir containing the mount point for the Quobyte volume. |
quobyte_qcow2_volumes = True | (Boolean) Create volumes as QCOW2 files rather than raw files. |
quobyte_sparsed_volumes = True | (Boolean) Create volumes as sparse files which take no space. If set to False, the volume is created as a regular file; in that case, volume creation takes a lot of time. |
quobyte_volume_url = None | (String) URL to the Quobyte volume, e.g., quobyte://<DIR host>/<volume name> |
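A hypothetical back-end section illustrating the options above; the registry host quobyte.example.com and volume name cinder-volumes are placeholders:

```ini
[quobyte-1]
volume_driver = cinder.volume.drivers.quobyte.QuobyteDriver
# Explicitly selects the Quobyte volume backing this Block Storage back end.
quobyte_volume_url = quobyte://quobyte.example.com/cinder-volumes
# Optional: store volumes as raw sparse files instead of QCOW2.
quobyte_qcow2_volumes = False
```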
The Scality SOFS volume driver interacts with configured sfused mounts.
The Scality SOFS driver manages volumes as sparse files stored on a
Scality Ring through sfused. Ring connection settings and sfused options
are defined in the cinder.conf
file and the configuration file
pointed to by the scality_sofs_config
option, typically
/etc/sfused.conf
.
The Scality SOFS volume driver provides the following Block Storage volume operations:
Use the following instructions to update the cinder.conf
configuration file:
[DEFAULT]
enabled_backends = scality-1
[scality-1]
volume_driver = cinder.volume.drivers.scality.ScalityDriver
volume_backend_name = scality-1
scality_sofs_config = /etc/sfused.conf
scality_sofs_mount_point = /cinder
scality_sofs_volume_dir = cinder/volumes
Use the following instructions to update the nova.conf
configuration
file:
[libvirt]
scality_sofs_mount_point = /cinder
scality_sofs_config = /etc/sfused.conf
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
scality_sofs_config = None | (String) Path or URL to Scality SOFS configuration file |
scality_sofs_mount_point = $state_path/scality | (String) Base dir where Scality SOFS shall be mounted |
scality_sofs_volume_dir = cinder/volumes | (String) Path from Scality SOFS root to volume dir |
The SolidFire Cluster is a high performance all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify, during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers this along with deduplication, compression, and an architecture that takes full advantage of SSDs.
To configure the use of a SolidFire cluster with Block Storage, modify your
cinder.conf
file as follows:
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182 # the address of your MVIP
san_login = sfadmin # your cluster admin login
san_password = sfpassword # your cluster admin password
sf_account_prefix = '' # prefix for tenant account creation on solidfire cluster
Warning
Older versions of the SolidFire driver (prior to Icehouse) created a unique
account prefixed with $cinder-volume-service-hostname-$tenant-id
on the
SolidFire cluster for each tenant. Unfortunately, this account formation
resulted in issues for High Availability (HA) installations and
installations where the cinder-volume
service can move to a new node.
The current default implementation does not experience this issue as no
prefix is used. For installations created on a prior release, the OLD
default behavior can be configured by using the keyword hostname
in
sf_account_prefix.
Note
The SolidFire driver creates names for volumes on the back end using the format UUID-<cinder-id>. This works well, but there is a possibility of a UUID collision for customers running multiple clouds against the same cluster. In Mitaka, the sf_volume_prefix configuration variable was introduced to eliminate the possibility of collisions. On the SolidFire cluster, each volume will be labeled with the prefix, providing the ability to configure unique volume names for each cloud. The default prefix is 'UUID-'.
Changing the setting on an existing deployment will result in the existing volumes becoming inaccessible. To introduce this change to an existing deployment, it is recommended to add the cluster as if it were a second back end and disable new deployments to the current back end.
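For example, each cloud sharing a cluster could be given its own namespace; the prefix value cloud1- below is arbitrary:

```ini
[DEFAULT]
# Each cloud sharing the SolidFire cluster should use a distinct prefix
# so that volume names cannot collide on the back end.
sf_volume_prefix = cloud1-
```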
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
sf_account_prefix = None | (String) Create SolidFire accounts with this prefix. Any string can be used here, but the string "hostname" is special and will create a prefix using the cinder node hostname (previous default behavior). The default is no prefix. |
sf_allow_template_caching = True | (Boolean) Create an internal cache of copies of images when a bootable volume is created, to eliminate fetching from glance and qemu conversion on subsequent calls. |
sf_allow_tenant_qos = False | (Boolean) Allow tenants to specify QoS on create |
sf_api_port = 443 | (Port number) SolidFire API port. Useful if the device API is behind a proxy on a different port. |
sf_emulate_512 = True | (Boolean) Set 512 byte emulation on volume creation |
sf_enable_vag = False | (Boolean) Utilize volume access groups on a per-tenant basis. |
sf_enable_volume_mapping = True | (Boolean) Create an internal mapping of volume IDs and accounts. Optimizes lookups and performance at the expense of memory; very large deployments may want to consider setting this to False. |
sf_svip = None | (String) Overrides the default cluster SVIP with the one specified. This is required for deployments that have implemented the use of VLANs for iSCSI networks in their cloud. |
sf_template_account_name = openstack-vtemplate | (String) Account name on the SolidFire Cluster to use as owner of template/cache volumes (created if it does not exist). |
sf_volume_prefix = UUID- | (String) Create SolidFire volumes with this prefix. Volume names are of the form <sf_volume_prefix><cinder-volume-id>. The default is to use a prefix of 'UUID-'. |
QoS support for the SolidFire drivers includes the ability to set the
following capabilities in the OpenStack Block Storage API
cinder.api.contrib.qos_specs_manage
qos specs extension module:
The QoS keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:
$ cinder help qos-create
$ cinder help qos-key
$ cinder help qos-associate
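For example, a QoS spec could be created with the SolidFire keys and associated with a volume type as follows; the spec name sf-gold, the IOPS values, and the IDs are placeholders:

```console
$ cinder qos-create sf-gold minIOPS=1000 maxIOPS=10000 burstIOPS=15000
$ cinder qos-associate QOS_SPEC_ID VOLUME_TYPE_ID
```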
The SynoISCSIDriver
volume driver allows Synology NAS to be used for Block
Storage (cinder) in OpenStack deployments. Information on OpenStack Block
Storage volumes is available in the DSM Storage Manager.
The Synology driver has the following requirements:
Note
The DSM driver is available in the OpenStack Newton release.
Edit the /etc/cinder/cinder.conf
file on your volume driver host.
The Synology driver uses a volume in the Synology NAS as the back end of Block Storage. Every time you create a new Block Storage volume, the system creates an advanced file LUN in your Synology volume to be used for this new Block Storage volume.
The following example shows how to use different Synology NAS servers as the back end. If you want to use all volumes on your Synology NAS, add another section with the volume number to differentiate between volumes within the same Synology NAS.
[DEFAULT]
enabled_backends = ds1515pV1, ds1515pV2, rs3017xsV1, others
[ds1515pV1]
# configuration for volume 1 in DS1515+
[ds1515pV2]
# configuration for volume 2 in DS1515+
[rs3017xsV1]
# configuration for volume 1 in RS3017xs
Each section indicates the volume number and the way in which the connection is established. Below is an example of a basic configuration:
[Your_Section_Name]
# Required settings
volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
iscsi_protocol = iscsi
iscsi_ip_address = DS_IP
synology_admin_port = DS_PORT
synology_username = DS_USER
synology_password = DS_PW
synology_pool_name = DS_VOLUME
# Optional settings
volume_backend_name = VOLUME_BACKEND_NAME
iscsi_secondary_ip_addresses = IP_ADDRESSES
driver_use_ssl = True
use_chap_auth = True
chap_username = CHAP_USER_NAME
chap_password = CHAP_PASSWORD
DS_PORT: the management port of the Synology NAS (synology_admin_port).
DS_IP: the IP address of the Synology NAS.
DS_USER: the administrator account name on the Synology NAS.
DS_PW: the password for DS_USER.
DS_VOLUME: the volume on the Synology NAS to be used as the storage pool. Its name has the format volume[0-9]+, and the number is the same as the volume number in DSM.
Note
If you set driver_use_ssl
as True
, synology_admin_port
must be
an HTTPS port.
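A filled-in sketch of the template above, using example values; the IP address, port, credentials, and pool name are placeholders:

```ini
[ds1515pV1]
volume_driver = cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
iscsi_protocol = iscsi
iscsi_ip_address = 192.0.2.10
# HTTPS management port, required because driver_use_ssl = True below.
synology_admin_port = 5001
driver_use_ssl = True
synology_username = admin
synology_password = ADMIN_PASSWORD
synology_pool_name = volume1
```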
The Synology DSM driver supports the following configuration options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
pool_type = default | (String) Pool type, like sata-2copy. |
synology_admin_port = 5000 | (Port number) Management port for Synology storage. |
synology_device_id = None | (String) Device ID used to skip the one-time password check when logging in to Synology storage, if OTP is enabled. |
synology_one_time_pass = None | (String) One-time password of the administrator for logging in to Synology storage, if OTP is enabled. |
synology_password = | (String) Password of the administrator for logging in to Synology storage. |
synology_pool_name = | (String) Volume on Synology storage to be used for creating LUNs. |
synology_ssl_verify = True | (Boolean) Whether to do certificate validation if $driver_use_ssl is True |
synology_username = admin | (String) Administrator of Synology storage. |
Tintri VMstore is a smart storage that sees, learns, and adapts for cloud and virtualization. The Tintri Block Storage driver interacts with configured VMstore running Tintri OS 4.0 and above. It supports various operations using Tintri REST APIs and NFS protocol.
To configure the use of a Tintri VMstore with Block Storage, perform the following actions:
Edit the etc/cinder/cinder.conf
file and set the
cinder.volume.drivers.tintri
options:
volume_driver=cinder.volume.drivers.tintri.TintriDriver
# Mount options passed to the nfs client. See section of the
# nfs man page for details. (string value)
nfs_mount_options = vers=3,lookupcache=pos
#
# Options defined in cinder.volume.drivers.tintri
#
# The hostname (or IP address) for the storage system (string
# value)
tintri_server_hostname = {Tintri VMstore Management IP}
# User name for the storage system (string value)
tintri_server_username = {username}
# Password for the storage system (string value)
tintri_server_password = {password}
# API version for the storage system (string value)
# tintri_api_version = v310
# Following options needed for NFS configuration
# File with the list of available nfs shares (string value)
# nfs_shares_config = /etc/cinder/nfs_shares
# Tintri driver will clean up unused image snapshots. With the following
# option, users can configure how long unused image snapshots are
# retained. Default retention policy is 30 days
# tintri_image_cache_expiry_days = 30
# Path to NFS shares file storing images.
# Users can store Glance images in the NFS share of the same VMstore
# mentioned in the following file. These images need to have additional
# metadata ``provider_location`` configured in Glance, which should point
# to the NFS share path of the image.
# This option will enable Tintri driver to directly clone from Glance
# image stored on same VMstore (rather than downloading image
# from Glance)
# tintri_image_shares_config = <Path to image NFS share>
#
# For example:
# Glance image metadata
# provider_location =>
# nfs://<data_ip>/tintri/glance/84829294-c48b-4e16-a878-8b2581efd505
Edit the /etc/nova/nova.conf
file and set the nfs_mount_options
:
nfs_mount_options = vers=3
Edit the /etc/cinder/nfs_shares
file and add the Tintri VMstore mount
points associated with the configured VMstore management IP in the
cinder.conf
file:
{vmstore_data_ip}:/tintri/{submount1}
{vmstore_data_ip}:/tintri/{submount2}
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
tintri_api_version = v310 | (String) API version for the storage system |
tintri_image_cache_expiry_days = 30 | (Integer) Delete unused image snapshots older than mentioned days |
tintri_image_shares_config = None | (String) Path to image nfs shares file |
tintri_server_hostname = None | (String) The hostname (or IP address) for the storage system |
tintri_server_password = None | (String) Password for the storage system |
tintri_server_username = None | (String) User name for the storage system |
The OpenStack V7000 driver package from Violin Memory adds Block Storage service support for Violin 7300 Flash Storage Platforms (FSPs) and 7700 FSP controllers.
The driver package release can be used with any OpenStack Liberty deployment for all 7300 FSPs and 7700 FSP controllers running Concerto 7.5.3 and later using Fibre Channel HBAs.
To use the Violin driver, the following are required:
Violin 7300/7700 series FSP with:
The Violin block storage driver: This driver implements the block storage API calls. The driver is included with the OpenStack Liberty release.
The vmemclient library: This is the Violin Array Communications library to the Flash Storage Platform through a REST-like interface. The client can be installed using the python ‘pip’ installer tool. Further information on vmemclient can be found on PyPI.
pip install vmemclient
Note
Listed operations are supported for thick, thin, and dedup LUNs, with the exception of cloning. Cloning operations are supported only on thick LUNs.
Once the array is configured as per the installation guide, it is simply a matter of editing the cinder configuration file to add or modify the parameters. The driver currently supports only Fibre Channel configuration.
Set the following in your cinder.conf
configuration file, replacing the
variables using the guide in the following section:
volume_driver = cinder.volume.drivers.violin.v7000_fcp.V7000FCPDriver
volume_backend_name = vmem_violinfsp
extra_capabilities = VMEM_CAPABILITIES
san_ip = VMEM_MGMT_IP
san_login = VMEM_USER_NAME
san_password = VMEM_PASSWORD
use_multipath_for_image_xfer = true
Description of configuration value placeholders:
VMEM_CAPABILITIES: user-defined capabilities. Only the dedup and thin capabilities are listed here in the cinder.conf file, indicating that this back end should be selected for creating LUNs that have a volume type with dedup or thin extra_specs specified. For example, if the FSP is configured to support dedup LUNs, set the associated driver capabilities to: {"dedup":"True","thin":"True"}.
VMEM_MGMT_IP: the management IP address of the FSP (used as san_ip).
VMEM_USER_NAME: the administrative login for the FSP.
VMEM_PASSWORD: the administrative password for the FSP.
Virtuozzo Storage is a fault-tolerant distributed storage system that is optimized for virtualization workloads.
Set the following in your cinder.conf
file, and use the following
options to configure it.
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
vzstorage_default_volume_format = raw | (String) Default format that will be used when creating volumes if no volume format is specified. |
vzstorage_mount_options = None | (List) Mount options passed to the vzstorage client. See section of the pstorage-mount man page for details. |
vzstorage_mount_point_base = $state_path/mnt | (String) Base dir containing mount points for vzstorage shares. |
vzstorage_shares_config = /etc/cinder/vzstorage_shares | (String) File with the list of available vzstorage shares. |
vzstorage_sparsed_volumes = True | (Boolean) Create volumes as sparse files which take no space, rather than as regular files; with regular files in raw format, volume creation takes a lot of time. |
vzstorage_used_ratio = 0.95 | (Floating point) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination. |
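A minimal back-end sketch combining the options above; the shares file path matches the default and the qcow2 format choice is an example:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.vzstorage.VZStorageDriver
vzstorage_shares_config = /etc/cinder/vzstorage_shares
# Optional: create new volumes as qcow2 instead of raw.
vzstorage_default_volume_format = qcow2
```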
Use the VMware VMDK driver to enable management of the OpenStack Block Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK files on data stores that use any VMware-compatible storage technology such as NFS, iSCSI, Fibre Channel, and vSAN.
Note
The VMware VMDK driver requires vCenter version 5.1 at minimum.
The VMware VMDK driver connects to vCenter, through which it can dynamically access all the data stores visible from the ESX hosts in the managed cluster.
When you create a volume, the VMDK driver creates a VMDK file on demand. The VMDK file creation completes only when the volume is subsequently attached to an instance. The reason for this requirement is that data stores visible to the instance determine where to place the volume. Before the service creates the VMDK file, attach a volume to the target instance.
The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra disk. Once attached, you can log in to the running vSphere VM to rescan and discover this extra disk.
With the update to ESX version 6.0, the VMDK driver now supports NFS version 4.1.
The recommended volume driver for OpenStack Block Storage is the VMware vCenter VMDK driver. When you configure the driver, you must match it with the appropriate OpenStack Compute driver from VMware and both drivers must point to the same server.
In the nova.conf
file, use this option to define the Compute driver:
compute_driver = vmwareapi.VMwareVCDriver
In the cinder.conf
file, use this option to define the volume
driver:
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
The following table lists various options that the drivers support for the
OpenStack Block Storage configuration (cinder.conf
):
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
vmware_api_retry_count = 10 | (Integer) Number of times VMware vCenter server API must be retried upon connection related issues. |
vmware_ca_file = None | (String) CA bundle file to use in verifying the vCenter server certificate. |
vmware_cluster_name = None | (Multi-valued) Name of a vCenter compute cluster where volumes should be created. |
vmware_host_ip = None | (String) IP address for connecting to VMware vCenter server. |
vmware_host_password = None | (String) Password for authenticating with VMware vCenter server. |
vmware_host_port = 443 | (Port number) Port number for connecting to VMware vCenter server. |
vmware_host_username = None | (String) Username for authenticating with VMware vCenter server. |
vmware_host_version = None | (String) Optional string specifying the VMware vCenter server version. The driver attempts to retrieve the version from VMware vCenter server. Set this configuration only if you want to override the vCenter server version. |
vmware_image_transfer_timeout_secs = 7200 | (Integer) Timeout in seconds for VMDK volume transfer between Cinder and Glance. |
vmware_insecure = False | (Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if "vmware_ca_file" is set. |
vmware_max_objects_retrieval = 100 | (Integer) Max number of objects to be retrieved per batch. Query results will be obtained in batches from the server and not in one shot. Server may still limit the count to something less than the configured value. |
vmware_task_poll_interval = 2.0 | (Floating point) The interval (in seconds) for polling remote tasks invoked on VMware vCenter server. |
vmware_tmp_dir = /tmp | (String) Directory where virtual disks are stored during volume backup and restore. |
vmware_volume_folder = Volumes | (String) Name of the vCenter inventory folder that will contain Cinder volumes. This folder will be created under "OpenStack/<project_folder>", where project_folder is of format "Project (<volume_project_id>)". |
vmware_wsdl_location = None | (String) Optional VIM service WSDL location, e.g., http://<server>/vimService.wsdl. Optional override of the default location, for bug workarounds. |
The VMware VMDK drivers support the creation of VMDK disk file types thin
,
lazyZeroedThick
(sometimes called thick or flat), or eagerZeroedThick
.
A thin virtual disk is allocated and zeroed on demand as the space is used. Unused space on a thin disk is available to other users.
A lazy zeroed thick virtual disk will have all space allocated at disk creation. This reserves the entire disk space, so it is not available to other users at any time.
An eager zeroed thick virtual disk is similar to a lazy zeroed thick disk, in that the entire disk is allocated at creation. However, in this type, any previous data will be wiped clean on the disk before the write. This can mean that the disk will take longer to create, but can also prevent issues with stale data on physical media.
Use the vmware:vmdk_type
extra spec key with the appropriate value to
specify the VMDK disk file type. This table shows the mapping between the extra
spec entry and the VMDK disk file type:
Disk file type | Extra spec key | Extra spec value |
---|---|---|
thin | vmware:vmdk_type | thin |
lazyZeroedThick | vmware:vmdk_type | thick |
eagerZeroedThick | vmware:vmdk_type | eagerZeroedThick |
If you do not specify a vmdk_type
extra spec entry, the disk file type will
default to thin
.
The following example shows how to create a lazyZeroedThick
VMDK volume by
using the appropriate vmdk_type
:
$ cinder type-create thick_volume
$ cinder type-key thick_volume set vmware:vmdk_type=thick
$ cinder create --volume-type thick_volume --display-name volume1 1
With the VMware VMDK drivers, you can create a volume from another
source volume or a snapshot point. The VMware vCenter VMDK driver
supports the full
and linked/fast
clone types. Use the
vmware:clone_type
extra spec key to specify the clone type. The
following table captures the mapping for clone types:
Clone type | Extra spec key | Extra spec value |
---|---|---|
full | vmware:clone_type | full |
linked/fast | vmware:clone_type | linked |
If you do not specify the clone type, the default is full
.
The following example shows linked cloning from a source volume, which is created from an image:
$ cinder type-create fast_clone
$ cinder type-key fast_clone set vmware:clone_type=linked
$ cinder create --image-id 9cb87f4f-a046-47f5-9b7c-d9487b3c7cd4 \
--volume-type fast_clone --display-name source-vol 1
$ cinder create --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 \
--display-name dest-vol 1
This section describes how to configure back-end data stores using storage
policies. In vCenter 5.5 and greater, you can create one or more storage
policies and expose them as a Block Storage volume-type to a vmdk volume. The
storage policies are exposed to the vmdk driver through the extra spec property
with the vmware:storage_profile
key.
For example, assume a storage policy in vCenter named gold_policy and a
Block Storage volume type named vol1
with the extra spec key
vmware:storage_profile
set to the value gold_policy
. Any Block Storage
volume creation that uses the vol1
volume type places the volume only in
data stores that match the gold_policy
storage policy.
The Block Storage back-end configuration for vSphere data stores is
automatically determined based on the vCenter configuration. If you configure a
connection to connect to vCenter version 5.5 or later in the cinder.conf
file, the use of storage policies to configure back-end data stores is
automatically supported.
Note
Any data stores that you configure for the Block Storage service must also be configured for the Compute service.
To configure back-end data stores by using storage policies
In vCenter, tag the data stores to be used for the back end.
OpenStack also supports policies that are created by using vendor-specific capabilities; for example vSAN-specific storage policies.
Note
The tag value serves as the policy. For details, see Storage policy-based configuration in vCenter.
Set the extra spec key vmware:storage_profile
in the desired Block
Storage volume types to the policy name that you created in the previous
step.
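For example, assuming the storage policy created above is named gold_policy, the extra spec could be set as follows; the volume type name gold is arbitrary:

```console
$ cinder type-create gold
$ cinder type-key gold set vmware:storage_profile=gold_policy
```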
Optionally, for the vmware_host_version
parameter, enter the version
number of your vSphere platform. For example, 5.5
.
This setting overrides the default location for the corresponding WSDL file. Among other scenarios, you can use this setting to prevent WSDL error messages during the development phase or to work with a newer version of vCenter.
Complete the other vCenter configuration parameters as appropriate.
Note
For any volume that is created without an associated policy (that is to say, without an associated volume type that specifies the vmware:storage_profile extra spec), there is no policy-based placement.
The VMware vCenter VMDK driver supports these operations:
Create, delete, attach, and detach volumes.
Note
When a volume is attached to an instance, a reconfigure operation is performed on the instance to add the volume’s VMDK to it. The user must manually rescan and mount the device from within the guest operating system.
Create, list, and delete volume snapshots.
Note
Allowed only if volume is not attached to an instance.
Create a volume from a snapshot.
Copy an image to a volume.
Note
Only images in vmdk
disk format with bare
container format are
supported. The vmware_disktype
property of the image can be
preallocated
, sparse
, streamOptimized
or thin
.
Copy a volume to an image.
Note
The image is created as a streamOptimized disk image.
Clone a volume.
Note
Supported only if the source volume is not attached to an instance.
Backup a volume.
Note
This operation creates a backup of the volume in streamOptimized
disk format.
Restore backup to new or existing volume.
Note
Supported only if the existing volume doesn’t contain snapshots.
Change the type of a volume.
Note
This operation is supported only if the volume state is available
.
Extend a volume.
You can configure Storage Policy-Based Management (SPBM) profiles for vCenter data stores supporting the Compute, Image service, and Block Storage components of an OpenStack implementation.
In a vSphere OpenStack deployment, SPBM enables you to delegate several data stores for storage, which reduces the risk of running out of storage space. The policy logic selects the data store based on accessibility and available storage space.
In vCenter, create the tag that identifies the data stores, for example spbm-cinder.
Apply the tag to the data stores to be used by the SPBM policy.
Note
For details about creating tags in vSphere, see the vSphere documentation.
In vCenter, create a tag-based storage policy that uses one or more tags to identify a set of data stores.
Note
For details about creating storage policies in vSphere, see the vSphere documentation.
If storage policy is enabled, the driver initially selects all the data stores that match the associated storage policy.
If two or more data stores match the storage policy, the driver chooses a data store that is connected to the maximum number of hosts.
In case of ties, the driver chooses the data store with the lowest space utilization, where space utilization is defined as (1 - freespace/totalspace). For example, a data store with 600 GB free out of 1000 GB total has a space utilization of 0.4.
These actions reduce the number of volume migrations while attaching the volume to instances.
The volume must be migrated if the ESX host for the instance cannot access the data store that contains the volume.
Windows Server 2012 and Windows Storage Server 2012 offer an integrated iSCSI Target service that can be used with OpenStack Block Storage in your stack. Being entirely a software solution, consider it in particular for mid-sized networks where the costs of a SAN might be excessive.
The Windows Block Storage driver works with OpenStack Compute on any
hypervisor. It includes snapshotting support and the boot from volume
feature.
This driver creates volumes backed by fixed-type VHD images on Windows Server 2012 and dynamic-type VHDX on Windows Server 2012 R2, stored locally on a user-specified path. The system uses those images as iSCSI disks and exports them through iSCSI targets. Each volume has its own iSCSI target.
This driver has been tested with Windows Server 2012 and Windows Server 2012 R2 using the Server and Storage Server distributions.
Install the cinder-volume
service as well as the required Python components
directly onto the Windows node.
You may install and configure cinder-volume
and its dependencies manually
using the following guide or you may use the Cinder Volume Installer
,
presented below.
In case you want to avoid all the manual setup, you can use Cloudbase
Solutions’ installer. You can find it at
https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi. It installs an
independent Python environment in order to avoid conflicts with existing
applications, and dynamically generates a cinder.conf
file based on the
parameters you provide.
cinder-volume
will be configured to run as a Windows Service, which can
be restarted using:
PS C:\> net stop cinder-volume ; net start cinder-volume
The installer can also be used in unattended mode. More details about how to use the installer and its features can be found at https://www.cloudbase.it.
The required service in order to run cinder-volume
on Windows is
wintarget
. This will require the iSCSI Target Server Windows feature
to be installed. You can install it by running the following command:
PS C:\> Add-WindowsFeature FS-iSCSITarget-Server
Note
The Windows Server installation requires at least 16 GB of disk space. The volumes hosted by this node need additional space beyond that.
For cinder-volume
to work properly, you must configure NTP as explained
in Configure NTP.
Next, install the requirements as described in Requirements.
Git can be used to download the necessary source code. The installer to run Git on Windows can be downloaded here:
https://git-for-windows.github.io/
Once installed, run the following to clone the OpenStack Block Storage code:
PS C:\> git.exe clone https://git.openstack.org/openstack/cinder
The cinder.conf
file may be placed in C:\etc\cinder
. Below is a
configuration sample for using the Windows iSCSI Driver:
[DEFAULT]
auth_strategy = keystone
volume_name_template = volume-%s
volume_driver = cinder.volume.drivers.windows.WindowsDriver
glance_api_servers = IP_ADDRESS:9292
rabbit_host = IP_ADDRESS
rabbit_port = 5672
sql_connection = mysql+pymysql://root:Passw0rd@IP_ADDRESS/cinder
windows_iscsi_lun_path = C:\iSCSIVirtualDisks
rabbit_password = Passw0rd
logdir = C:\OpenStack\Log\
image_conversion_dir = C:\ImageConversionDir
debug = True
The following table contains a reference to the only driver-specific option used by the Block Storage Windows driver:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
windows_iscsi_lun_path = C:\iSCSIVirtualDisks | (String) Path to store VHD backed volumes |
After configuring cinder-volume
using the cinder.conf
file, you may
use the following commands to install and run the service (note that you
must replace the variables with the proper paths):
PS C:\> python $CinderClonePath\setup.py install
PS C:\> cmd /c "C:\python27\python.exe c:\python27\Scripts\cinder-volume --config-file $CinderConfPath"
The X-IO volume driver for OpenStack Block Storage enables ISE products to be managed by OpenStack Block Storage nodes. This driver can be configured to work with iSCSI and Fibre Channel storage protocols. The X-IO volume driver allows the cloud operator to take advantage of ISE features like Quality of Service (QOS) and Continuous Adaptive Data Placement (CADP). It also supports creating thin volumes and specifying volume media affinity.
ISE FW 2.8.0 or ISE FW 3.1.0 is required for OpenStack Block Storage support. The X-IO volume driver will not work with older ISE FW.
To configure the use of an ISE product with OpenStack Block Storage, modify
your cinder.conf
file as follows. Be careful to use the one that matches
the storage protocol in use:
volume_driver = cinder.volume.drivers.xio.XIOISEFCDriver
san_ip = 1.2.3.4 # the address of your ISE REST management interface
san_login = administrator # your ISE management admin login
san_password = password # your ISE management admin password
volume_driver = cinder.volume.drivers.xio.XIOISEISCSIDriver
san_ip = 1.2.3.4 # the address of your ISE REST management interface
san_login = administrator # your ISE management admin login
san_password = password # your ISE management admin password
iscsi_ip_address = ionet_ip # ip address to one ISE port connected to the IONET
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
driver_use_ssl = False | (Boolean) Tell driver to use SSL for connection to backend storage if the driver supports it. |
ise_completion_retries = 30 | (Integer) Number of retries to get completion status after issuing a command to ISE. |
ise_connection_retries = 5 | (Integer) Number of retries (per port) when establishing connection to ISE management port. |
ise_raid = 1 | (Integer) RAID level for ISE volumes. |
ise_retry_interval = 1 | (Integer) Interval (secs) between retries. |
ise_storage_pool = 1 | (Integer) Default storage pool for volumes. |
The X-IO ISE supports a multipath configuration, but multipath must be enabled on the compute node (see ISE Storage Blade Best Practices Guide). For more information, see X-IO Document Library.
OpenStack Block Storage uses volume types to help the administrator specify attributes for volumes. These attributes are called extra-specs. The X-IO volume driver supports the following extra-specs.
| Extra-specs name | Valid values | Description |
|---|---|---|
| Feature:Raid | 1, 5 | RAID level for volume. |
| Feature:Pool | 1 - n (n being number of pools on ISE) | Pool to create volume in. |
| Affinity:Type | cadp, flash, hdd | Volume media affinity type. |
| Alloc:Type | 0 (thick), 1 (thin) | Allocation type for volume. Thick or thin. |
| QoS:minIOPS | n (value less than maxIOPS) | Minimum IOPS setting for volume. |
| QoS:maxIOPS | n (value bigger than minIOPS) | Maximum IOPS setting for volume. |
| QoS:burstIOPS | n (value bigger than minIOPS) | Burst IOPS setting for volume. |
Create a volume type called xio1-flash for volumes that should reside on SSD storage:
$ cinder type-create xio1-flash
$ cinder type-key xio1-flash set Affinity:Type=flash
Create a volume type called xio1 and set QoS min and max:
$ cinder type-create xio1
$ cinder type-key xio1 set QoS:minIOPS=20
$ cinder type-key xio1 set QoS:maxIOPS=5000
Zadara Storage Virtual Private Storage Array (VPSA) is the first software-defined, Enterprise-Storage-as-a-Service. It is an elastic and private block and file storage system that provides enterprise-grade data protection and data management storage services.
The ZadaraVPSAISCSIDriver
volume driver allows the Zadara Storage VPSA
to be used as a volume backend storage in OpenStack deployments.
To use the Zadara Storage VPSA volume driver, your cinder.conf
configuration file must define the volume driver name along with a
storage backend entry for each VPSA pool that will be managed by the
Block Storage service.
Each backend entry requires a unique section name, surrounded by square
brackets, followed by options in key=value format.

Note

Restart the cinder-volume service after modifying cinder.conf.
Sample minimum backend configuration
[DEFAULT]
enabled_backends = vpsa
[vpsa]
zadara_vpsa_host = 172.31.250.10
zadara_vpsa_port = 80
zadara_user = vpsauser
zadara_password = mysecretpassword
zadara_use_iser = false
zadara_vpsa_poolname = pool-00000001
volume_driver = cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
volume_backend_name = vpsa
This section contains the configuration options that are specific to the Zadara Storage VPSA driver.
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| zadara_default_snap_policy = False | (Boolean) VPSA - Attach snapshot policy for volumes |
| zadara_password = None | (String) VPSA - Password |
| zadara_use_iser = True | (Boolean) VPSA - Use ISER instead of iSCSI |
| zadara_user = None | (String) VPSA - Username |
| zadara_vol_encrypt = False | (Boolean) VPSA - Default encryption policy for volumes |
| zadara_vol_name_template = OS_%s | (String) VPSA - Default template for VPSA volume names |
| zadara_vpsa_host = None | (String) VPSA - Management Host name or IP address |
| zadara_vpsa_poolname = None | (String) VPSA - Storage Pool assigned for volumes |
| zadara_vpsa_port = None | (Port number) VPSA - Port number |
| zadara_vpsa_use_ssl = False | (Boolean) VPSA - Use SSL connection |
Note
By design, all volumes created within the VPSA are thin provisioned.
Oracle ZFS Storage Appliances (ZFSSAs) provide advanced software to protect data, speed tuning and troubleshooting, and deliver high performance and high availability. Through the Oracle ZFSSA iSCSI Driver, OpenStack Block Storage can use an Oracle ZFSSA as a block storage resource. The driver enables you to create iSCSI volumes that an OpenStack Block Storage server can allocate to any virtual machine running on a compute host.
The Oracle ZFSSA iSCSI Driver, version 1.0.0
and later, supports
ZFSSA software release 2013.1.2.0
and later.
Enable RESTful service on the ZFSSA Storage Appliance.
Create a new user on the appliance with the following authorizations:
scope=stmf - allow_configure=true
scope=nas - allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true
You can create a role with authorizations as follows:
zfssa:> configuration roles
zfssa:configuration roles> role OpenStackRole
zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack Cinder Driver"
zfssa:configuration roles OpenStackRole (uncommitted)> commit
zfssa:configuration roles> select OpenStackRole
zfssa:configuration roles OpenStackRole> authorizations create
zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=stmf
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
You can create a user with a specific role as follows:
zfssa:> configuration users
zfssa:configuration users> user cinder
zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver"
zfssa:configuration users cinder (uncommitted)> set initial_password=12345
zfssa:configuration users cinder (uncommitted)> commit
zfssa:configuration users> select cinder set roles=OpenStackRole
Note
You can also run this workflow to automate the above tasks.
Ensure that the ZFSSA iSCSI service is online. If the ZFSSA iSCSI service is not online, enable the service by using the BUI, CLI or REST API in the appliance.
zfssa:> configuration services iscsi
zfssa:configuration services iscsi> enable
zfssa:configuration services iscsi> show
Properties:
<status>= online
...
Define the following required properties in the cinder.conf
file:
volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
san_ip = myhost
san_login = username
san_password = password
zfssa_pool = mypool
zfssa_project = myproject
zfssa_initiator_group = default
zfssa_target_portal = w.x.y.z:3260
zfssa_target_interfaces = e1000g0
Optionally, you can define additional properties.
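For illustration, a backend section might tune LUN defaults with some of the optional properties listed in the driver option table below; the values shown are the documented defaults except where noted:

```ini
# Optional ZFSSA iSCSI driver tuning (documented defaults shown)
zfssa_lun_compression = off    # data compression for new LUNs
zfssa_lun_logbias = latency    # synchronous write bias
zfssa_lun_sparse = False       # thin provisioning disabled
zfssa_lun_volblocksize = 8k    # LUN block size
zfssa_rest_timeout = 60        # REST timeout in seconds (illustrative; unset by default)
```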
Target interfaces can be seen as follows in the CLI:
zfssa:> configuration net interfaces
zfssa:configuration net interfaces> show
Interfaces:
INTERFACE STATE CLASS LINKS ADDRS LABEL
e1000g0 up ip e1000g0 1.10.20.30/24 Untitled Interface
...
Note
Do not use management interfaces for zfssa_target_interfaces
.
The ZFSSA iSCSI driver supports storage-assisted volume migration starting in the Liberty release. This feature uses the remote replication feature on the ZFSSA. Volumes can be migrated not only between two backends configured on the same ZFSSA, but also between two separate ZFSSAs altogether.
The following conditions must be met in order to use ZFSSA assisted volume migration:

- Set zfssa_replication_ip in the cinder.conf file of the source backend to the IP address used to register the target ZFSSA in the remote replication service of the source ZFSSA.
- The name of the iSCSI target group (zfssa_target_group) on the source and the destination ZFSSA is the same.

If any of the above conditions are not met, the driver will proceed with generic volume migration.
The ZFSSA user on the source and target appliances will need to have
additional role authorizations for assisted volume migration to work. In
scope nas, set allow_rrtarget
and allow_rrsource
to true
.
zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=nas
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrtarget=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rrsource=true
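On the Block Storage side, the migration-related settings in the source backend section of cinder.conf can be sketched as follows; the IP address is a placeholder, and the group name must match the target group configured on both appliances:

```ini
# Source backend entry (sketch): options used for ZFSSA assisted migration
zfssa_replication_ip = 10.10.10.1  # placeholder: IP registering the target ZFSSA
zfssa_target_group = tgt-grp       # same iSCSI target group name on both ZFSSAs
```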
The local cache feature enables ZFSSA drivers to serve the usage of bootable volumes significantly better. With the feature, the first bootable volume created from an image is cached, so that subsequent volumes can be created directly from the cache, instead of having image data transferred over the network multiple times.
The following conditions must be met in order to use the ZFSSA local cache feature:

- The cinder.conf file needs to contain the necessary properties used to configure and set up the ZFSSA iSCSI driver, including the following new properties:
  - zfssa_enable_local_cache: (True/False) To enable or disable the feature.
  - zfssa_cache_project: The ZFSSA project name where cache volumes are stored.

Every cache volume has two additional properties stored as ZFSSA custom schema. It is important that the schema is not altered outside of Block Storage when the driver is in use:
- image_id: stores the image ID as in the Image service.
- updated_at: stores the most current timestamp of when the image is updated in the Image service.

Extra specs provide the OpenStack storage administrator the flexibility to create volumes with different characteristics from the ones specified in the cinder.conf file. The administrator specifies the volume properties as keys at volume type creation. When a user requests a volume of this volume type, the volume is created with the properties specified as extra specs.
The following extra specs scoped keys are supported by the driver:
zfssa:volblocksize
zfssa:sparse
zfssa:compression
zfssa:logbias
Volume types can be created using the cinder type-create command. Extra spec keys can be added using the cinder type-key command.
The Oracle ZFSSA iSCSI Driver supports these options:
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| zfssa_initiator = | (String) iSCSI initiator IQNs. (comma separated) |
| zfssa_initiator_config = | (String) iSCSI initiators configuration. |
| zfssa_initiator_group = | (String) iSCSI initiator group. |
| zfssa_initiator_password = | (String) Secret of the iSCSI initiator CHAP user. |
| zfssa_initiator_user = | (String) iSCSI initiator CHAP user (name). |
| zfssa_lun_compression = off | (String) Data compression. |
| zfssa_lun_logbias = latency | (String) Synchronous write bias. |
| zfssa_lun_sparse = False | (Boolean) Flag to enable sparse (thin-provisioned): True, False. |
| zfssa_lun_volblocksize = 8k | (String) Block size. |
| zfssa_pool = None | (String) Storage pool name. |
| zfssa_project = None | (String) Project name. |
| zfssa_replication_ip = | (String) IP address used for replication data. (may be the same as the data IP) |
| zfssa_rest_timeout = None | (Integer) REST connection timeout. (seconds) |
| zfssa_target_group = tgt-grp | (String) iSCSI target group name. |
| zfssa_target_interfaces = None | (String) Network interfaces of iSCSI targets. (comma separated) |
| zfssa_target_password = | (String) Secret of the iSCSI target CHAP user. |
| zfssa_target_portal = None | (String) iSCSI target portal (Data-IP:Port, w.x.y.z:3260). |
| zfssa_target_user = | (String) iSCSI target CHAP user (name). |
The Oracle ZFS Storage Appliance (ZFSSA) NFS driver enables the ZFSSA to be used seamlessly as a block storage resource. The driver enables you to create volumes on a ZFS share that is NFS mounted.
Oracle ZFS Storage Appliance Software version 2013.1.2.0
or later.
Appliance configuration using the command-line interface (CLI) is described below. To access the CLI, ensure SSH remote access is enabled, which is the default. You can also perform configuration using the browser user interface (BUI) or the RESTful API. Please refer to the Oracle ZFS Storage Appliance documentation for details on how to configure the Oracle ZFS Storage Appliance using the BUI, CLI, and RESTful API.
Log in to the Oracle ZFS Storage Appliance CLI and enable the REST service. REST service needs to stay online for this driver to function.
zfssa:>configuration services rest enable
Create a new storage pool on the appliance if you do not want to use an
existing one. This storage pool is named 'mypool'
for the sake of this
documentation.
Create a new project and share in the storage pool (mypool
) if you do
not want to use existing ones. This driver will create a project and share
by the names specified in the cinder.conf
file, if a project and share
by that name does not already exist in the storage pool (mypool
).
The project and share are named NFSProject
and nfs_share
in the sample cinder.conf
entries below.
To perform driver operations, create a role with the following authorizations:
scope=svc - allow_administer=true, allow_restart=true, allow_configure=true
scope=nas - pool=pool_name, project=project_name, share=share_name, allow_clone=true, allow_createProject=true, allow_createShare=true, allow_changeSpaceProps=true, allow_changeGeneralProps=true, allow_destroy=true, allow_rollback=true, allow_takeSnap=true
The following examples show how to create a role with authorizations.
zfssa:> configuration roles
zfssa:configuration roles> role OpenStackRole
zfssa:configuration roles OpenStackRole (uncommitted)> set description="OpenStack NFS Cinder Driver"
zfssa:configuration roles OpenStackRole (uncommitted)> commit
zfssa:configuration roles> select OpenStackRole
zfssa:configuration roles OpenStackRole> authorizations create
zfssa:configuration roles OpenStackRole auth (uncommitted)> set scope=svc
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_administer=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_restart=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_configure=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
zfssa:> configuration roles OpenStackRole authorizations> set scope=nas
The following properties need to be set when the scope of this role needs to
be limited to a pool (mypool
), a project (NFSProject
), and a share
(nfs_share
) created in the steps above. This prevents the user
assigned to this role from modifying other pools, projects, and
shares.
zfssa:configuration roles OpenStackRole auth (uncommitted)> set pool=mypool
zfssa:configuration roles OpenStackRole auth (uncommitted)> set project=NFSProject
zfssa:configuration roles OpenStackRole auth (uncommitted)> set share=nfs_share
The following properties only need to be set when a share and project have not been created in the steps above and you wish to allow the driver to create them for you.
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createProject=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_createShare=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_clone=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_changeSpaceProps=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_destroy=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_rollback=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> set allow_takeSnap=true
zfssa:configuration roles OpenStackRole auth (uncommitted)> commit
Create a new user or modify an existing one and assign the new role to the user.
The following example shows how to create a new user and assign the new role to the user.
zfssa:> configuration users
zfssa:configuration users> user cinder
zfssa:configuration users cinder (uncommitted)> set fullname="OpenStack Cinder Driver"
zfssa:configuration users cinder (uncommitted)> set initial_password=12345
zfssa:configuration users cinder (uncommitted)> commit
zfssa:configuration users> select cinder set roles=OpenStackRole
Ensure that NFS and HTTP services on the appliance are online. Note the
HTTPS port number for later entry in the cinder service configuration file
(cinder.conf
). This driver uses WebDAV over HTTPS to create snapshots
and clones of volumes, and therefore needs to have the HTTP service online.
The following example illustrates enabling the services and showing their properties.
zfssa:> configuration services nfs
zfssa:configuration services nfs> enable
zfssa:configuration services nfs> show
Properties:
<status>= online
...
zfssa:configuration services http> enable
zfssa:configuration services http> show
Properties:
<status>= online
require_login = true
protocols = http/https
listen_port = 80
https_port = 443
Create a network interface to be used exclusively for data. An existing network interface may also be used. The following example illustrates how to make a network interface for data traffic flow only.
Note
For better performance and reliability, it is recommended to configure a separate subnet exclusively for data traffic in your cloud environment.
zfssa:> configuration net interfaces
zfssa:configuration net interfaces> select igbx
zfssa:configuration net interfaces igbx> set admin=false
zfssa:configuration net interfaces igbx> commit
For clustered controller systems, the following verification is required in addition to the above steps. Skip this step if a standalone system is used.
zfssa:> configuration cluster resources list
Verify that both the newly created pool and the network interface are of
type singleton
and are not locked to the current controller. This
approach ensures that the pool and the interface used for data always belong
to the active controller, regardless of the current state of the cluster.
Verify that both the network interface used for management and data, and the
storage pool belong to the same head.
Note
There will be a short service interruption during failback/takeover, but once the process is complete, the driver should be able to access the ZFSSA for data as well as for management.
Define the following required properties in the cinder.conf
configuration file:
volume_driver = cinder.volume.drivers.zfssa.zfssanfs.ZFSSANFSDriver
san_ip = myhost
san_login = username
san_password = password
zfssa_data_ip = mydata
zfssa_nfs_pool = mypool
Note
Management interface san_ip
can be used instead of zfssa_data_ip
,
but it is not recommended.
You can also define the following additional properties in the
cinder.conf
configuration file:
zfssa_nfs_project = NFSProject
zfssa_nfs_share = nfs_share
zfssa_nfs_mount_options =
zfssa_nfs_share_compression = off
zfssa_nfs_share_logbias = latency
zfssa_https_port = 443
Note
The driver does not use the file specified in the nfs_shares_config
option.
The local cache feature enables ZFSSA drivers to serve the usage of bootable volumes significantly better. With the feature, the first bootable volume created from an image is cached, so that subsequent volumes can be created directly from the cache, instead of having image data transferred over the network multiple times.
The following conditions must be met in order to use the ZFSSA local cache feature:

- A storage pool needs to be configured.
- REST and NFS services need to be turned on.
- On an OpenStack controller, cinder.conf needs to contain the necessary properties used to configure and set up the ZFSSA NFS driver, including the following new properties:
  - zfssa_enable_local_cache: (True/False) To enable or disable the feature.
  - zfssa_cache_directory: The directory name inside zfssa_nfs_share where cache volumes are stored.

Every cache volume has two additional properties stored as WebDAV properties. It is important that they are not altered outside of Block Storage when the driver is in use:

- image_id: stores the image ID as in the Image service.
- updated_at: stores the most current timestamp of when the image is updated in the Image service.
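The cache-related options can be sketched in cinder.conf as follows; the values shown are the defaults from the option table below:

```ini
# ZFSSA NFS local cache settings (documented defaults shown)
zfssa_enable_local_cache = True          # enable the local cache feature
zfssa_cache_directory = os-cinder-cache  # directory inside zfssa_nfs_share for cache volumes
```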
The Oracle ZFS Storage Appliance NFS driver supports these options:
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| zfssa_cache_directory = os-cinder-cache | (String) Name of directory inside zfssa_nfs_share where cache volumes are stored. |
| zfssa_cache_project = os-cinder-cache | (String) Name of ZFSSA project where cache volumes are stored. |
| zfssa_data_ip = None | (String) Data path IP address |
| zfssa_enable_local_cache = True | (Boolean) Flag to enable local caching: True, False. |
| zfssa_https_port = 443 | (String) HTTPS port number |
| zfssa_manage_policy = loose | (String) Driver policy for volume manage. |
| zfssa_nfs_mount_options = | (String) Options to be passed while mounting share over nfs |
| zfssa_nfs_pool = | (String) Storage pool name. |
| zfssa_nfs_project = NFSProject | (String) Project name. |
| zfssa_nfs_share = nfs_share | (String) Share name. |
| zfssa_nfs_share_compression = off | (String) Data compression. |
| zfssa_nfs_share_logbias = latency | (String) Synchronous write bias-latency, throughput. |
| zfssa_rest_timeout = None | (Integer) REST connection timeout. (seconds) |
This driver shares additional NFS configuration options with the generic NFS driver. For a description of these, see Description of NFS storage configuration options.
The ZTE Cinder drivers allow ZTE KS3200 or KU5200 arrays to be used for Block Storage in OpenStack deployments.
To use the ZTE drivers, ensure the following prerequisites are met:
Verify that the array can be managed using an HTTPS connection. HTTP can
also be used if zte_api_protocol=http
is placed into the
appropriate sections of the cinder.conf
file.
Confirm that virtual pools A and B are present if you plan to use virtual pools for OpenStack storage.
Edit the cinder.conf
file to define a storage back-end entry for
each storage pool on the array that will be managed by OpenStack. Each
entry consists of a unique section name, surrounded by square brackets,
followed by options specified in key=value
format.
- The zte_backend_name value specifies the name of the storage pool on the array.
- The volume_backend_name option value can be a unique value, if you wish to be able to assign volumes to a specific storage pool on the array, or a name that is shared among multiple storage pools to let the volume scheduler choose where new volumes are allocated.
- The remaining options supply the array management IP address, the login credentials of a user with manage privileges, and the iSCSI IP addresses for the array if using the iSCSI transport protocol.

In the examples below, two back ends are defined, one for pool A and one
for pool B, with a common volume_backend_name, so that a
single volume type definition can be used to allocate volumes from both
pools.
Example: iSCSI back-end entries
[pool-a]
zte_backend_name = A
volume_backend_name = zte-array
volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
zte_iscsi_ips = 10.2.3.4,10.2.3.5
[pool-b]
zte_backend_name = B
volume_backend_name = zte-array
volume_driver = cinder.volume.drivers.zte.zte_iscsi.ZTEISCSIDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
zte_iscsi_ips = 10.2.3.4,10.2.3.5
Example: Fibre Channel back end entries
[pool-a]
zte_backend_name = A
volume_backend_name = zte-array
volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
[pool-b]
zte_backend_name = B
volume_backend_name = zte-array
volume_driver = cinder.volume.drivers.zte.zte_fc.ZTEFCDriver
san_ip = 10.1.2.3
san_login = manage
san_password = !manage
If HTTPS is not enabled in the array, include
zte_api_protocol = http
in each of the back-end definitions.
If HTTPS is enabled, you can enable certificate verification with the
option zte_verify_certificate=True
. You may also use the
zte_verify_certificate_path
parameter to specify the path to a
CA_BUNDLE
file containing CAs other than those in the default list.
Modify the [DEFAULT]
section of the cinder.conf
file to add an
enabled_backends
parameter specifying the back-end entries you added,
and a default_volume_type
parameter specifying the name of a volume
type that you will create in the next step.
Example: [DEFAULT] section changes
[DEFAULT]
...
enabled_backends = pool-a,pool-b
default_volume_type = zte
...
Create a new volume type for each distinct volume_backend_name
value
that you added to the cinder.conf
file. The example below
assumes that the same volume_backend_name=zte-array
option was specified in all of the
entries, and specifies that the volume type zte
can be used to
allocate volumes from any of them.
Example: Creating a volume type
$ cinder type-create zte
$ cinder type-key zte set volume_backend_name=zte-array
After modifying the cinder.conf
file,
restart the cinder-volume
service.
The following table contains the configuration options that are specific to the ZTE drivers.
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| zteAheadReadSize = 8 | (Integer) Cache readahead size. |
| zteCachePolicy = 1 | (Integer) Cache policy. 0, Write Back; 1, Write Through. |
| zteChunkSize = 4 | (Integer) Virtual block size of pool. Unit: KB. Valid values: 4, 8, 16, 32, 64, 128, 256, 512. |
| zteControllerIP0 = None | (IP) Main controller IP. |
| zteControllerIP1 = None | (IP) Slave controller IP. |
| zteLocalIP = None | (IP) Local IP. |
| ztePoolVoAllocatedPolicy = 0 | (Integer) Pool volume allocated policy. 0, Auto; 1, High Performance Tier First; 2, Performance Tier First; 3, Capacity Tier First. |
| ztePoolVolAlarmStopAllocatedFlag = 0 | (Integer) Pool volume alarm stop allocated flag. |
| ztePoolVolAlarmThreshold = 0 | (Integer) Pool volume alarm threshold. [0, 100] |
| ztePoolVolInitAllocatedCapacity = 0 | (Integer) Pool volume initial allocated capacity. Unit: KB. |
| ztePoolVolIsThin = False | (Integer) Whether it is a thin volume. |
| ztePoolVolMovePolicy = 0 | (Integer) Pool volume move policy. 0, Auto; 1, Highest Available; 2, Lowest Available; 3, No Relocation. |
| zteSSDCacheSwitch = 1 | (Integer) SSD cache switch. 0, OFF; 1, ON. |
| zteStoragePool = | (List) Pool name list. |
| zteUserName = None | (String) User name. |
| zteUserPassword = None | (String) User password. |
To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
The volume drivers are included in the Block Storage repository. To set a volume
driver, use the volume_driver
flag. The default is:
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
The Ceph backup driver backs up volumes of any type to a Ceph back-end store. The driver can also detect whether the volume to be backed up is a Ceph RBD volume, and if so, it tries to perform incremental and differential backups.
For source Ceph RBD volumes, you can perform backups within the same Ceph pool (not recommended). You can also perform backups between different Ceph pools and between different Ceph clusters.
At the time of writing, differential backup support in Ceph/librbd was quite new. This driver attempts a differential backup in the first instance. If the differential backup fails, the driver falls back to full backup/copy.
If incremental backups are used, multiple backups of the same volume are stored as snapshots so that minimal space is consumed in the backup store. It takes far less time to restore a volume than to take a full copy.
Note

Block Storage enables you to:

- Restore to a new volume, which is the default and recommended action.
- Restore to the original volume from which the backup was taken. The restore action takes a full copy because this is the safest action.
To enable the Ceph backup driver, include the following option in the
cinder.conf
file:
backup_driver = cinder.backup.drivers.ceph
The following configuration options are available for the Ceph backup driver.
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| backup_ceph_chunk_size = 134217728 | (Integer) The chunk size, in bytes, that a backup is broken into before transfer to the Ceph object store. |
| backup_ceph_conf = /etc/ceph/ceph.conf | (String) Ceph configuration file to use. |
| backup_ceph_pool = backups | (String) The Ceph pool where volume backups are stored. |
| backup_ceph_stripe_count = 0 | (Integer) RBD stripe count to use when creating a backup image. |
| backup_ceph_stripe_unit = 0 | (Integer) RBD stripe unit to use when creating a backup image. |
| backup_ceph_user = cinder | (String) The Ceph user to connect with. Default here is to use the same user as for Cinder volumes. If not using cephx this should be set to None. |
| restore_discard_excess_bytes = True | (Boolean) If True, always discard excess bytes when restoring volumes, i.e. pad with zeroes. |
This example shows the default options for the Ceph backup driver.
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
The GlusterFS backup driver backs up volumes of any type to GlusterFS.
To enable the GlusterFS backup driver, include the following option in the
cinder.conf
file:
backup_driver = cinder.backup.drivers.glusterfs
The following configuration options are available for the GlusterFS backup driver.
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| glusterfs_backup_mount_point = $state_path/backup_mount | (String) Base dir containing mount point for gluster share. |
| glusterfs_backup_share = None | (String) GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format. Eg: 1.2.3.4:backup_vol |
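A minimal sketch, assuming a Gluster volume named backup_vol exported from host 1.2.3.4 (the example address used in the table above):

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.glusterfs
glusterfs_backup_share = 1.2.3.4:backup_vol
# glusterfs_backup_mount_point defaults to $state_path/backup_mount
```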
The backup driver for the NFS back end backs up volumes of any type to an NFS exported backup repository.
To enable the NFS backup driver, include the following option in the
[DEFAULT]
section of the cinder.conf
file:
backup_driver = cinder.backup.drivers.nfs
The following configuration options are available for the NFS back-end backup driver.
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| backup_container = None | (String) Custom directory to use for backups. |
| backup_enable_progress_timer = True | (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer. |
| backup_file_size = 1999994880 | (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files. backup_file_size must be a multiple of backup_sha_block_size_bytes. |
| backup_mount_options = None | (String) Mount options passed to the NFS client. See NFS man page for details. |
| backup_mount_point_base = $state_path/backup_mount | (String) Base dir containing mount point for NFS share. |
| backup_sha_block_size_bytes = 32768 | (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size has to be a multiple of backup_sha_block_size_bytes. |
| backup_share = None | (String) NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format. |
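A minimal sketch, assuming an NFS export /backup on a host at the placeholder address 10.0.0.5:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.nfs
backup_share = 10.0.0.5:/backup   # placeholder NFS export
# backup_mount_options = vers=4   # optional NFS client mount options
```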
The POSIX file systems backup driver backs up volumes of any type to POSIX file systems.
To enable the POSIX file systems backup driver, include the following
option in the cinder.conf
file:
backup_driver = cinder.backup.drivers.posix
The following configuration options are available for the POSIX file systems backup driver.
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| backup_container = None | (String) Custom directory to use for backups. |
| backup_enable_progress_timer = True | (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the backend storage. The default value is True to enable the timer. |
| backup_file_size = 1999994880 | (Integer) The maximum size in bytes of the files used to hold backups. If the volume being backed up exceeds this size, then it will be backed up into multiple files. backup_file_size must be a multiple of backup_sha_block_size_bytes. |
| backup_posix_path = $state_path/backup | (String) Path specifying where to store backups. |
| backup_sha_block_size_bytes = 32768 | (Integer) The size in bytes that changes are tracked for incremental backups. backup_file_size has to be a multiple of backup_sha_block_size_bytes. |
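A minimal sketch using the documented defaults; the backup path can be changed to any writable directory:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.posix
backup_posix_path = $state_path/backup  # default backup location
```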
The backup driver for the swift back end performs a volume backup to an object storage system.
To enable the swift backup driver, include the following option in the
cinder.conf
file:
backup_driver = cinder.backup.drivers.swift
The following configuration options are available for the Swift back-end backup driver.
| Configuration option = Default value | Description |
|---|---|
| [DEFAULT] | |
| backup_swift_auth = per_user | (String) Swift authentication mechanism |
| backup_swift_auth_version = 1 | (String) Swift authentication version. Specify "1" for auth 1.0, "2" for auth 2.0, or "3" for auth 3.0 |
| backup_swift_block_size = 32768 | (Integer) The size in bytes that changes are tracked for incremental backups. backup_swift_object_size has to be a multiple of backup_swift_block_size. |
| backup_swift_ca_cert_file = None | (String) Location of the CA certificate file to use for swift client requests. |
| backup_swift_container = volumebackups | (String) The default Swift container to use |
| backup_swift_enable_progress_timer = True | (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the Swift backend storage. The default value is True to enable the timer. |
| backup_swift_key = None | (String) Swift key for authentication |
| backup_swift_object_size = 52428800 | (Integer) The size in bytes of Swift backup objects |
| backup_swift_project = None | (String) Swift project/account name. Required when connecting to an auth 3.0 system |
| backup_swift_project_domain = None | (String) Swift project domain name. Required when connecting to an auth 3.0 system |
| backup_swift_retry_attempts = 3 | (Integer) The number of retries to make for Swift operations |
| backup_swift_retry_backoff = 2 | (Integer) The backoff time in seconds between Swift retries |
| backup_swift_tenant = None | (String) Swift tenant/account name. Required when connecting to an auth 2.0 system |
| backup_swift_url = None | (String) The URL of the Swift endpoint |
| backup_swift_user = None | (String) Swift user name |
| backup_swift_user_domain = None | (String) Swift user domain name. Required when connecting to an auth 3.0 system |
| keystone_catalog_info = identity:Identity Service:publicURL | (String) Info to match when looking for keystone in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_auth_url is unset |
| swift_catalog_info = object-store:swift:publicURL | (String) Info to match when looking for swift in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset |
To use the swift backup driver with the 1.0, 2.0, or 3.0 authentication version, set backup_swift_auth_version to 1, 2, or 3 respectively. For example:
backup_swift_auth_version = 2
In addition, the 2.0 authentication system requires the definition of the
backup_swift_tenant
setting:
backup_swift_tenant = <None>
This example shows the default options for the Swift back-end backup driver.
backup_swift_url = http://localhost:8080/v1/AUTH_
backup_swift_auth_url = http://localhost:5000/v3
backup_swift_auth = per_user
backup_swift_auth_version = 1
backup_swift_user = <None>
backup_swift_user_domain = <None>
backup_swift_key = <None>
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_project = <None>
backup_swift_project_domain = <None>
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
backup_compression_algorithm = zlib
The Google Cloud Storage (GCS) backup driver backs up volumes of any type to Google Cloud Storage.
To enable the GCS backup driver, include the following option in the
cinder.conf
file:
backup_driver = cinder.backup.drivers.google
The following configuration options are available for the GCS backup driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
backup_gcs_block_size = 32768 | (Integer) The size in bytes that changes are tracked for incremental backups. backup_gcs_object_size has to be a multiple of backup_gcs_block_size. |
backup_gcs_bucket = None | (String) The GCS bucket to use. |
backup_gcs_bucket_location = US | (String) Location of GCS bucket. |
backup_gcs_credential_file = None | (String) Absolute path of GCS service account credential file. |
backup_gcs_enable_progress_timer = True | (Boolean) Enable or Disable the timer to send the periodic progress notifications to Ceilometer when backing up the volume to the GCS backend storage. The default value is True to enable the timer. |
backup_gcs_num_retries = 3 | (Integer) Number of times to retry. |
backup_gcs_object_size = 52428800 | (Integer) The size in bytes of GCS backup objects. |
backup_gcs_project_id = None | (String) Owner project id for GCS bucket. |
backup_gcs_proxy_url = None | (URI) URL for http proxy access. |
backup_gcs_reader_chunk_size = 2097152 | (Integer) GCS object will be downloaded in chunks of bytes. |
backup_gcs_retry_error_codes = 429 | (List) List of GCS error codes. |
backup_gcs_storage_class = NEARLINE | (String) Storage class of GCS bucket. |
backup_gcs_user_agent = gcscinder | (String) Http user-agent string for gcs api. |
backup_gcs_writer_chunk_size = 2097152 | (Integer) GCS object will be uploaded in chunks of bytes. Pass in a value of -1 if the file is to be uploaded as a single chunk. |
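Tying these options together, a cinder.conf sketch for the GCS driver might look as follows; the bucket name, project ID, and credential file path are placeholders, not defaults:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.google
# Placeholder values; substitute your own bucket, project, and credentials.
backup_gcs_bucket = cinder-backups-example
backup_gcs_project_id = example-project-id
backup_gcs_credential_file = /etc/cinder/gcs-credentials.json
backup_gcs_storage_class = NEARLINE
```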
The IBM Tivoli Storage Manager (TSM) backup driver enables performing volume backups to a TSM server.
The TSM client should be installed and configured on the machine running the cinder-backup service. See the IBM Tivoli Storage Manager Backup-Archive Client Installation and User’s Guide for details on installing the TSM client.
To enable the IBM TSM backup driver, include the following option in
cinder.conf
:
backup_driver = cinder.backup.drivers.tsm
The following configuration options are available for the TSM backup driver.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
backup_tsm_compression = True | (Boolean) Enable or Disable compression for backups |
backup_tsm_password = password | (String) TSM password for the running username |
backup_tsm_volume_prefix = backup | (String) Volume prefix for the backup id when backing up to TSM |
This example shows the default options for the TSM backup driver.
backup_tsm_volume_prefix = backup
backup_tsm_password = password
backup_tsm_compression = True
This section describes how to configure the cinder-backup service and its drivers.
The backup drivers are included in the Block Storage repository. To set a backup driver, use the backup_driver option. By default, no backup driver is enabled.
The Block Storage service uses the cinder-scheduler service to determine how to dispatch block storage requests.
For more information, see Cinder Scheduler Filters and Cinder Scheduler Weights.
The corresponding log file of each Block Storage service is stored in
the /var/log/cinder/
directory of the host on which each service
runs.
Log file | Service/interface (for CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise) | Service/interface (for Ubuntu and Debian) |
---|---|---|
api.log | openstack-cinder-api | cinder-api |
cinder-manage.log | cinder-manage | cinder-manage |
scheduler.log | openstack-cinder-scheduler | cinder-scheduler |
volume.log | openstack-cinder-volume | cinder-volume |
The Fibre Channel Zone Manager allows FC SAN Zone/Access control management in conjunction with Fibre Channel block storage. The configuration of Fibre Channel Zone Manager and various zone drivers are described in this section.
If Block Storage is configured to use a Fibre Channel volume driver that
supports Zone Manager, update cinder.conf
to add the following
configuration options to enable Fibre Channel Zone Manager.
Make the following changes in the /etc/cinder/cinder.conf
file.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
zoning_mode = None | (String) FC Zoning mode configured |
[fc-zone-manager] | |
enable_unsupported_driver = False | (Boolean) Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated and may be removed in the next release. |
fc_fabric_names = None | (String) Comma separated list of Fibre Channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric |
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService | (String) FC SAN Lookup Service |
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver | (String) FC Zone Driver responsible for zone management |
zoning_policy = initiator-target | (String) Zoning policy configured by user; valid values include “initiator-target” or “initiator” |
To use different Fibre Channel Zone Drivers, use the parameters described in this section.
Note
When a multi-backend configuration is used, provide the zoning_mode configuration option as part of the volume driver configuration, where the volume_driver option is specified.
Note
The default value of zoning_mode is None; it must be changed to fabric to allow fabric zoning.
Note
zoning_policy can be configured as initiator-target or initiator.
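Combining the notes above with the defaults from the table, a cinder.conf sketch that enables fabric zoning with the default Brocade driver could look like this; the fabric name is a placeholder:

```ini
[DEFAULT]
zoning_mode = fabric

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
zoning_policy = initiator-target
# Placeholder fabric name; a matching fabric group section must be defined.
fc_fabric_names = BRCD_FABRIC_EXAMPLE
```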
Brocade Fibre Channel Zone Driver performs zoning operations through HTTP, HTTPS, or SSH.
Set the following options in the cinder.conf
configuration file.
Configuration option = Default value | Description |
---|---|
[fc-zone-manager] | |
brcd_sb_connector = HTTP | (String) South bound connector for zoning operation |
Configure SAN fabric parameters in the form of fabric groups as described in the example below:
Configuration option = Default value | Description |
---|---|
[BRCD_FABRIC_EXAMPLE] | |
fc_fabric_address = | (String) Management IP of fabric. |
fc_fabric_password = | (String) Password for user. |
fc_fabric_port = 22 | (Port number) Connecting port |
fc_fabric_ssh_cert_path = | (String) Local SSH certificate Path. |
fc_fabric_user = | (String) Fabric user ID. |
fc_southbound_protocol = HTTP | (String) South bound connector for the fabric. |
fc_virtual_fabric_id = None | (String) Virtual Fabric ID. |
principal_switch_wwn = None | (String) DEPRECATED: Principal switch WWN of the fabric. This option is not used anymore. |
zone_activate = True | (Boolean) Overridden zoning activation state. |
zone_name_prefix = openstack | (String) Overridden zone name prefix. |
zoning_policy = initiator-target | (String) Overridden zoning policy. |
Note
Define a fabric group for each fabric, using the fabric names listed in the fc_fabric_names configuration option as group names.
Note
To define a fabric group for a switch that has Virtual Fabrics enabled, include the fc_virtual_fabric_id configuration option and set the fc_southbound_protocol configuration option to HTTP or HTTPS in the fabric group. Zoning on a VF-enabled fabric using the SSH southbound protocol is not supported.
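For example, a fabric group for a fabric listed in fc_fabric_names as BRCD_FABRIC_EXAMPLE might be sketched as follows; the address and credentials are placeholders:

```ini
[BRCD_FABRIC_EXAMPLE]
# Placeholder management address and credentials.
fc_fabric_address = 10.0.0.10
fc_fabric_user = zoneadminuser
fc_fabric_password = password
fc_southbound_protocol = HTTPS
zoning_policy = initiator-target
zone_name_prefix = openstack
zone_activate = True
```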
Brocade Fibre Channel Zone Driver requires firmware version FOS v6.4 or higher.
As a best practice for zone management, use a user account with the zoneadmin role. Users with the admin role (including the default admin user account) are limited to a maximum of two concurrent SSH sessions.
For information about how to manage Brocade Fibre Channel switches, see the Brocade Fabric OS user documentation.
Cisco Fibre Channel Zone Driver automates the zoning operations through SSH. Configure Cisco Zone Driver, Cisco Southbound connector, FC SAN lookup service and Fabric name.
Set the following options in the cinder.conf
configuration file.
[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver.CiscoFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.cisco.cisco_fc_san_lookup_service.CiscoFCSanLookupService
fc_fabric_names = CISCO_FABRIC_EXAMPLE
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
Configuration option = Default value | Description |
---|---|
[fc-zone-manager] | |
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI | (String) Southbound connector for zoning operation |
Configure SAN fabric parameters in the form of fabric groups as described in the example below:
Configuration option = Default value | Description |
---|---|
[CISCO_FABRIC_EXAMPLE] | |
cisco_fc_fabric_address = | (String) Management IP of fabric |
cisco_fc_fabric_password = | (String) Password for user |
cisco_fc_fabric_port = 22 | (Port number) Connecting port |
cisco_fc_fabric_user = | (String) Fabric user ID |
cisco_zone_activate = True | (Boolean) Overridden zoning activation state |
cisco_zone_name_prefix = None | (String) Overridden zone name prefix |
cisco_zoning_policy = initiator-target | (String) Overridden zoning policy |
cisco_zoning_vsan = None | (String) VSAN of the Fabric |
Note
Define a fabric group for each fabric, using the fabric names listed in the fc_fabric_names configuration option as group names.
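As an illustration, a fabric group for the CISCO_FABRIC_EXAMPLE fabric named in fc_fabric_names might be sketched as follows; the address, credentials, and VSAN number are placeholders:

```ini
[CISCO_FABRIC_EXAMPLE]
# Placeholder management address, credentials, and VSAN.
cisco_fc_fabric_address = 10.0.0.20
cisco_fc_fabric_user = admin
cisco_fc_fabric_password = password
cisco_fc_fabric_port = 22
cisco_zoning_policy = initiator-target
cisco_zoning_vsan = 100
```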
The Cisco Fibre Channel Zone Driver supports basic and enhanced zoning modes. The zoning VSAN must exist with an active zone set whose name is the same as the fc_fabric_names option.
The driver requires Cisco MDS 9000 Family switches running Cisco MDS NX-OS Release 6.2(9) or later.
For information about how to manage Cisco Fibre Channel switches, see the Cisco MDS 9000 user documentation.
Nested quota is a change in how OpenStack services (such as Block Storage and Compute) handle their quota resources by being hierarchy-aware. The main reason for this change is to fully support the hierarchical multi-tenancy concept, which was introduced in keystone in the Kilo release.
Once you have a project hierarchy created in keystone, nested quotas let you define how much of a project’s quota you want to give to its subprojects. In that way, hierarchical projects can have hierarchical quotas (also known as nested quotas).
Projects and subprojects have similar behaviors, but they differ from each other when it comes to default quota values. The default quota value for resources in a subproject is 0, so that when a subproject is created it will not consume all of its parent’s quota.
In order to keep track of how much of each quota was allocated to a
subproject, a column allocated
was added to the quotas table. This column
is updated after every delete and update quota operation.
This example shows you how to use nested quotas.
Note
Assume that you have created a project hierarchy in keystone, such as follows:
+-----------+
| |
| A |
| / \ |
| B C |
| / |
| D |
+-----------+
Get the quota for root projects.
Use the cinder quota-show command and specify:
The TENANT_ID
of the relevant project. In this case, the id of
project A.
$ cinder quota-show TENANT_ID
+-----------------------+-------+
| Property | Value |
+-----------------------+-------+
| backup_gigabytes | 1000 |
| backups | 10 |
| gigabytes | 1000 |
| gigabytes_lvmdriver-1 | -1 |
| per_volume_gigabytes | -1 |
| snapshots | 10 |
| snapshots_lvmdriver-1 | -1 |
| volumes | 10 |
| volumes_lvmdriver-1 | -1 |
+-----------------------+-------+
Note
This command returns the default values for resources. This is because the quotas for this project were not explicitly set.
Get the quota for subprojects.
In this case, use the same quota-show command and specify:
The TENANT_ID
of the relevant project. In this case the id of
project B, which is a child of A.
$ cinder quota-show TENANT_ID
+-----------------------+-------+
| Property | Value |
+-----------------------+-------+
| backup_gigabytes | 0 |
| backups | 0 |
| gigabytes | 0 |
| gigabytes_lvmdriver-1 | 0 |
| per_volume_gigabytes | 0 |
| snapshots | 0 |
| snapshots_lvmdriver-1 | 0 |
| volumes | 0 |
| volumes_lvmdriver-1 | 0 |
+-----------------------+-------+
Note
In this case, 0 was the value returned as the quota for all the resources. This is because project B is a subproject of A, thus, the default quota value is 0, so that it will not consume all the quota of its parent project.
Now that the projects have been created, assume that the admin of project B wants to use it. First, you need to set the quota limit of the project, because as a subproject it does not have quotas allocated by default.
In this example, when all of the parent project's quota is allocated to its subprojects, the user will not be able to create more resources in the parent project.
Update the quota of B.
Use the quota-update command and specify:
The TENANT_ID
of the relevant project.
In this case the id of project B.
The --volumes
option, followed by the number to which you wish to
increase the volumes quota.
$ cinder quota-update TENANT_ID --volumes 10
+-----------------------+-------+
| Property | Value |
+-----------------------+-------+
| backup_gigabytes | 0 |
| backups | 0 |
| gigabytes | 0 |
| gigabytes_lvmdriver-1 | 0 |
| per_volume_gigabytes | 0 |
| snapshots | 0 |
| snapshots_lvmdriver-1 | 0 |
| volumes | 10 |
| volumes_lvmdriver-1 | 0 |
+-----------------------+-------+
Note
The volumes resource quota is updated.
Try to create a volume in project A.
Use the create command and specify:
The SIZE
of the volume that will be created;
The NAME
of the volume.
$ cinder create --size SIZE NAME
VolumeLimitExceeded: Maximum number of volumes allowed (10) exceeded for quota 'volumes'. (HTTP 413) (Request-ID: req-f6f7cc89-998e-4a82-803d-c73c8ee2016c)
Note
As the entirety of project A’s volumes quota has been assigned to project B, it is treated as if all of the quota has been used. This is true even when project B has not created any volumes.
See cinder nested quota spec and hierarchical multi-tenancy spec for details.
We recommend the Key management service (barbican) for storing
encryption keys used by the OpenStack volume encryption feature. It can
be enabled by updating cinder.conf
and nova.conf
.
Configuration changes need to be made to any nodes running the cinder-api or nova-compute services.
Steps to update cinder-api
servers:
Edit the /etc/cinder/cinder.conf
file to use Key management service
as follows:
Look for the [key_manager]
section.
Enter a new line directly below [key_manager]
with the following:
api_class = cinder.key_manager.barbican.BarbicanKeyManager
Note
Use a ‘#’ prefix to comment out the line in this section that begins with ‘fixed_key’.
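After these edits, the [key_manager] section of /etc/cinder/cinder.conf would look similar to this sketch:

```ini
[key_manager]
api_class = cinder.key_manager.barbican.BarbicanKeyManager
# fixed_key = <commented out when barbican is used>
```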
Restart cinder-api
.
Update nova-compute
servers:
Install the cryptsetup
utility and the python-barbicanclient
Python package.
Set up the Key Manager service by editing /etc/nova/nova.conf
:
[key_manager]
api_class = nova.key_manager.barbican.BarbicanKeyManager
Restart nova-compute
.
Block Storage volume type assignment provides scheduling to a specific back-end, and can be used to specify actionable information for a back-end storage device.
This example creates a volume type called LUKS and provides configuration information for the storage system to encrypt or decrypt the volume.
Source your admin credentials:
$ . admin-openrc.sh
Create the volume type:
$ cinder type-create LUKS
+--------------------------------------+-------+
| ID | Name |
+--------------------------------------+-------+
| e64b35a4-a849-4c53-9cc7-2345d3c8fbde | LUKS |
+--------------------------------------+-------+
Mark the volume type as encrypted and provide the necessary details. Use
--control_location
to specify where encryption is performed:
front-end
(default) or back-end
.
$ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 \
--control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| Volume Type ID | Provider | Cipher | Key Size | Control Location |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| e64b35a4-a849-4c53-9cc7-2345d3c8fbde | nova.volume.encryptors.luks.LuksEncryptor | aes-xts-plain64 | 512 | front-end |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
The OpenStack dashboard (horizon) supports creating the encrypted volume type as of the Kilo release. For instructions, see Create an encrypted volume type.
Use the OpenStack dashboard (horizon), or the cinder
command to create volumes just as you normally would. For an encrypted volume,
pass the --volume-type LUKS
flag, which denotes that the volume will be of
encrypted type LUKS
. If that argument is left out, the default volume
type, unencrypted
, is used.
Source your admin credentials:
$ . admin-openrc.sh
Create an unencrypted 1 GB test volume:
$ cinder create --display-name 'unencrypted volume' 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-08-10T01:24:03.000000 |
| description | None |
| encrypted | False |
| id | 081700fd-2357-44ff-860d-2cd78ad9c568 |
| metadata | {} |
| name | unencrypted volume |
| os-vol-host-attr:host | controller |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 08fdea76c760475f82087a45dbe94918 |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 7cbc6b58b372439e8f70e2a9103f1332 |
| volume_type | None |
+--------------------------------+--------------------------------------+
Create an encrypted 1 GB test volume:
$ cinder create --display-name 'encrypted volume' --volume-type LUKS 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-08-10T01:24:24.000000 |
| description | None |
| encrypted | True |
| id | 86060306-6f43-4c92-9ab8-ddcd83acd973 |
| metadata | {} |
| name | encrypted volume |
| os-vol-host-attr:host | controller |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 08fdea76c760475f82087a45dbe94918 |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 7cbc6b58b372439e8f70e2a9103f1332 |
| volume_type | LUKS |
+--------------------------------+--------------------------------------+
Notice the encrypted parameter; it will show True
or False
.
The option volume_type
is also shown for easy review.
Note
Because some volume drivers do not set the encrypted flag, attaching an encrypted volume created on those back ends to a virtual guest will fail: the OpenStack Compute service will not run the encryption providers.
This is a simple test scenario to help validate your encryption. It assumes an LVM based Block Storage server.
Perform these steps after completing the volume encryption setup and creating the volume-type for LUKS as described in the preceding sections.
Create a VM:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-disk vm-test
Create two volumes, one encrypted and one not encrypted then attach them to your VM:
$ cinder create --display-name 'unencrypted volume' 1
$ cinder create --display-name 'encrypted volume' --volume-type LUKS 1
$ cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| 64b48a79-5686-4542-9b52-d649b51c10a2 | available | unencrypted volume | 1 | None | false | |
| db50b71c-bf97-47cb-a5cf-b4b43a0edab6 | available | encrypted volume | 1 | LUKS | false | |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
$ nova volume-attach vm-test 64b48a79-5686-4542-9b52-d649b51c10a2 /dev/vdb
$ nova volume-attach vm-test db50b71c-bf97-47cb-a5cf-b4b43a0edab6 /dev/vdc
On the VM, send some text to the newly attached volumes and synchronize them:
# echo "Hello, world (unencrypted /dev/vdb)" >> /dev/vdb
# echo "Hello, world (encrypted /dev/vdc)" >> /dev/vdc
# sync && sleep 2
# sync && sleep 2
On the system hosting cinder volume services, synchronize to flush the I/O cache then test to see if your strings can be found:
# sync && sleep 2
# sync && sleep 2
# strings /dev/stack-volumes/volume-* | grep "Hello"
Hello, world (unencrypted /dev/vdb)
In the above example you see that the search returns the string written to the unencrypted volume, but not the encrypted one.
These options can also be set in the cinder.conf
file.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
api_rate_limit = True | (Boolean) Enables or disables rate limit of the API. |
az_cache_duration = 3600 | (Integer) Cache volume availability zones in memory for the provided duration in seconds |
backend_host = None | (String) Backend override of host value. |
default_timeout = 31536000 | (Integer) Default timeout for CLI operations in minutes. For example, LUN migration is a typical long running operation, which depends on the LUN size and the load of the array. An upper bound in the specific deployment can be set to avoid unnecessary long wait. By default, it is 365 days long. |
enable_v1_api = True | (Boolean) DEPRECATED: Deploy v1 of the Cinder API. |
enable_v2_api = True | (Boolean) DEPRECATED: Deploy v2 of the Cinder API. |
enable_v3_api = True | (Boolean) Deploy v3 of the Cinder API. |
extra_capabilities = {} | (String) User defined capabilities, a JSON formatted string specifying key/value pairs. The key/value pairs can be used by the CapabilitiesFilter to select between backends when requests specify volume types. For example, specifying a service level or the geographical location of a backend, then creating a volume type to allow the user to select by these different properties. |
ignore_pool_full_threshold = False | (Boolean) Force LUN creation even if the full threshold of pool is reached. By default, the value is False. |
management_ips = | (String) List of Management IP addresses (separated by commas) |
message_ttl = 2592000 | (Integer) message minimum life in seconds. |
osapi_max_limit = 1000 | (Integer) The maximum number of items that a collection resource returns in a single response |
osapi_max_request_body_size = 114688 | (Integer) Max size for body of a request |
osapi_volume_base_URL = None | (String) Base URL that will be presented to users in links to the OpenStack Volume API |
osapi_volume_ext_list = | (List) Specify list of extensions to load when using osapi_volume_extension option with cinder.api.contrib.select_extensions |
osapi_volume_extension = ['cinder.api.contrib.standard_extensions'] | (Multi-valued) osapi volume extension to load |
osapi_volume_listen = 0.0.0.0 | (String) IP address on which OpenStack Volume API listens |
osapi_volume_listen_port = 8776 | (Port number) Port on which OpenStack Volume API listens |
osapi_volume_use_ssl = False | (Boolean) Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified. |
osapi_volume_workers = None | (Integer) Number of workers for OpenStack Volume API service. The default is equal to the number of CPUs available. |
per_volume_size_limit = -1 | (Integer) Max size allowed per volume, in gigabytes |
public_endpoint = None | (String) Public url to use for versions endpoint. The default is None, which will use the request’s host_url attribute to populate the URL base. If Cinder is operating behind a proxy, you will want to change this to represent the proxy’s URL. |
query_volume_filters = name, status, metadata, availability_zone, bootable, group_id | (List) Volume filter options which non-admin user could use to query volumes. Default values are: [‘name’, ‘status’, ‘metadata’, ‘availability_zone’, ‘bootable’, ‘group_id’] |
transfer_api_class = cinder.transfer.api.API | (String) The full class name of the volume transfer API class |
volume_api_class = cinder.volume.api.API | (String) The full class name of the volume API class to use |
volume_name_prefix = openstack- | (String) Prefix before volume name to differentiate DISCO volume created through openstack and the other ones |
volume_name_template = volume-%s | (String) Template string to be used to generate volume names |
volume_number_multiplier = -1.0 | (Floating point) Multiplier used for weighing volume number. Negative numbers mean to spread vs stack. |
volume_transfer_key_length = 16 | (Integer) The number of characters in the autogenerated auth key. |
volume_transfer_salt_length = 8 | (Integer) The number of characters in the salt. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False | (Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 | (Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto | (String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[oslo_versionedobjects] | |
fatal_exception_format_errors = False | (Boolean) Make exception message format errors fatal |
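As a sketch of how a few of these options combine, a deployment serving the v3 API from behind a proxy might set something like the following; the public URL and worker count are illustrative assumptions, not defaults:

```ini
[DEFAULT]
enable_v3_api = True
osapi_volume_listen = 0.0.0.0
osapi_volume_listen_port = 8776
# Illustrative values; tune for your deployment.
osapi_volume_workers = 4
public_endpoint = https://cinder.example.com:8776

[oslo_middleware]
enable_proxy_headers_parsing = True
```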
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
auth_strategy = keystone | (String) The strategy to use for auth. Supports noauth or keystone. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
backup_api_class = cinder.backup.api.API | (String) The full class name of the volume backup API class |
backup_compression_algorithm = zlib | (String) Compression algorithm (None to disable) |
backup_driver = cinder.backup.drivers.swift | (String) Driver to use for backups. |
backup_manager = cinder.backup.manager.BackupManager | (String) Full class name for the Manager for volume backup |
backup_metadata_version = 2 | (Integer) Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version. |
backup_name_template = backup-%s | (String) Template string to be used to generate backup names |
backup_object_number_per_notification = 10 | (Integer) The number of chunks or objects, for which one Ceilometer notification will be sent |
backup_service_inithost_offload = True | (Boolean) Offload pending backup delete during backup service startup. If false, the backup service will remain down until all pending backups are deleted. |
backup_timer_interval = 120 | (Integer) Interval, in seconds, between two progress notifications reporting the backup status |
backup_use_same_host = False | (Boolean) Backup services use same backend. |
backup_use_temp_snapshot = False | (Boolean) If this is set to True, the backup_use_temp_snapshot path will be used during the backup. Otherwise, it will use backup_use_temp_volume path. |
snapshot_check_timeout = 3600 | (Integer) How long we check whether a snapshot is finished before we give up |
snapshot_name_template = snapshot-%s | (String) Template string to be used to generate snapshot names |
snapshot_same_host = True | (Boolean) Create volume from snapshot at the host where snapshot resides |
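For instance, a sketch that keeps the default Swift driver but tunes progress reporting might look like this; the interval values are only illustrative:

```ini
[DEFAULT]
backup_driver = cinder.backup.drivers.swift
# Illustrative tuning of Ceilometer progress notifications.
backup_timer_interval = 120
backup_object_number_per_notification = 10
backup_name_template = backup-%s
```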
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
available_devices = | (List) List of all available devices |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allow_availability_zone_fallback = False |
(Boolean) If the requested Cinder availability zone is unavailable, fall back to the value of default_availability_zone, then storage_availability_zone, instead of failing. |
chap = disabled |
(String) CHAP authentication mode, effective only for iscsi (disabled|enabled) |
chap_password = |
(String) Password for specified CHAP account name. |
chap_username = |
(String) CHAP user name. |
chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf |
(String) Chiscsi (CXT) global defaults configuration file |
cinder_internal_tenant_project_id = None |
(String) ID of the project which will be used as the Cinder internal tenant. |
cinder_internal_tenant_user_id = None |
(String) ID of the user to be used in volume operations as the Cinder internal tenant. |
cluster = None |
(String) Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. Active-Active is not yet supported. |
compute_api_class = cinder.compute.nova.API |
(String) The full class name of the compute API class to use |
connection_type = iscsi |
(String) Connection type to the IBM Storage Array |
consistencygroup_api_class = cinder.consistencygroup.api.API |
(String) The full class name of the consistencygroup API class |
default_availability_zone = None |
(String) Default availability zone for new volumes. If not set, the storage_availability_zone option value is used as the default for new volumes. |
default_group_type = None |
(String) Default group type to use |
default_volume_type = None |
(String) Default volume type to use |
driver_client_cert = None |
(String) The path to the client certificate for verification, if the driver supports it. |
driver_client_cert_key = None |
(String) The path to the client certificate key for verification, if the driver supports it. |
driver_data_namespace = None |
(String) Namespace for driver private data values to be saved in. |
driver_ssl_cert_path = None |
(String) Can be used to specify a non default path to a CA_BUNDLE file or directory with certificates of trusted CAs, which will be used to validate the backend |
driver_ssl_cert_verify = False |
(Boolean) If set to True the http client will validate the SSL certificate of the backend endpoint. |
enable_force_upload = False |
(Boolean) Enables the Force option on upload_to_image. This enables running upload_volume on in-use volumes for backends that support it. |
enable_new_services = True |
(Boolean) Services to be added to the available pool on create |
enable_unsupported_driver = False |
(Boolean) Set this to True when you want to allow an unsupported driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated, and it may be removed in the next release. |
end_time = None |
(String) If this option is specified then the end time specified is used instead of the end time of the last completed audit period. |
enforce_multipath_for_image_xfer = False |
(Boolean) If this is set to True, attachment of volumes for image transfer will be aborted when multipathd is not running. Otherwise, it will fallback to single path. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal. |
group_api_class = cinder.group.api.API |
(String) The full class name of the group API class |
host = localhost |
(String) Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address. |
iet_conf = /etc/iet/ietd.conf |
(String) IET configuration file |
iscsi_secondary_ip_addresses = |
(List) The list of secondary IP addresses of the iSCSI daemon |
max_over_subscription_ratio = 20.0 |
(Floating point) Float representation of the over subscription ratio when thin provisioning is involved. The default ratio is 20.0, meaning provisioned capacity can be 20 times the total physical capacity. If the ratio is 10.5, provisioned capacity can be 10.5 times the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. The ratio must be a minimum of 1.0. |
monkey_patch = False |
(Boolean) Enable monkey patching |
monkey_patch_modules = |
(List) List of modules/decorators to monkey patch |
my_ip = 10.0.0.1 |
(String) IP address of this host |
no_snapshot_gb_quota = False |
(Boolean) Whether snapshots count against gigabyte quota |
num_shell_tries = 3 |
(Integer) Number of times to attempt to run flaky shell commands |
os_privileged_user_auth_url = None |
(String) Auth URL associated with the OpenStack privileged account. |
os_privileged_user_name = None |
(String) OpenStack privileged account username. Used for requests to other services (such as Nova) that require an account with special rights. |
os_privileged_user_password = None |
(String) Password associated with the OpenStack privileged account. |
os_privileged_user_tenant = None |
(String) Tenant name associated with the OpenStack privileged account. |
periodic_fuzzy_delay = 60 |
(Integer) Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) |
periodic_interval = 60 |
(Integer) Interval, in seconds, between running periodic tasks |
replication_api_class = cinder.replication.api.API |
(String) The full class name of the volume replication API class |
replication_device = None |
(Unknown) Multi opt of dictionaries to represent a replication target device. This option may be specified multiple times in a single config section to specify multiple replication target devices. Each entry takes the standard dict config form: replication_device = target_device_id:<required>,key1:value1,key2:value2... |
report_discard_supported = False |
(Boolean) Report to clients of Cinder that the backend supports discard (aka trim/unmap). This will not actually change the behavior of the backend or the client directly; it only advertises that the capability can be used. |
report_interval = 10 |
(Integer) Interval, in seconds, between nodes reporting state to datastore |
reserved_percentage = 0 |
(Integer) The percentage of backend capacity that is reserved |
rootwrap_config = /etc/cinder/rootwrap.conf |
(String) Path to the rootwrap configuration file to use for running commands as root |
send_actions = False |
(Boolean) Send the volume and snapshot create and delete notifications generated in the specified period. |
service_down_time = 60 |
(Integer) Maximum time since last check-in for a service to be considered up |
ssh_hosts_key_file = $state_path/ssh_known_hosts |
(String) File containing SSH host keys for the systems with which Cinder needs to communicate. OPTIONAL: Default=$state_path/ssh_known_hosts |
start_time = None |
(String) If this option is specified then the start time specified is used instead of the start time of the last completed audit period. |
state_path = /var/lib/cinder |
(String) Top-level directory for maintaining cinder’s state |
storage_availability_zone = nova |
(String) Availability zone of this node |
storage_protocol = iscsi |
(String) Protocol for transferring data between host and storage back-end. |
strict_ssh_host_key_policy = False |
(Boolean) Option to enable strict host key checking. When set to “True” Cinder will only connect to systems with a host key present in the configured “ssh_hosts_key_file”. When set to “False” the host key will be saved upon first connection and used for subsequent connections. Default=False |
suppress_requests_ssl_warnings = False |
(Boolean) Suppress requests library SSL certificate warnings. |
tcp_keepalive = True |
(Boolean) Sets the value of TCP_KEEPALIVE (True/False) for each server socket. |
tcp_keepalive_count = None |
(Integer) Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X. |
tcp_keepalive_interval = None |
(Integer) Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not supported on OS X. |
until_refresh = 0 |
(Integer) Count of reservations until usage is refreshed |
use_chap_auth = False |
(Boolean) Option to enable/disable CHAP authentication for targets. |
use_forwarded_for = False |
(Boolean) Treat X-Forwarded-For as the canonical remote address. Only enable this if you have a sanitizing proxy. |
[key_manager] | |
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager |
(String) The full class name of the key manager API class |
fixed_key = None |
(String) Fixed key returned by key manager, specified in hex |
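The replication_device option above is a multi opt: the same key is repeated once per replication target, each value in the dict form shown in its description. A minimal sketch of that format; the backend section name, device IDs, and key/value pairs are hypothetical placeholders, and valid keys are driver-specific:

```ini
[lvmdriver-1]
# One line per replication target device; target_device_id is required,
# the remaining key:value pairs depend on the driver.
replication_device = target_device_id:backend_id_1,san_ip:192.0.2.10,san_login:admin
replication_device = target_device_id:backend_id_2,san_ip:192.0.2.11,san_login:admin
```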
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nova_api_insecure = False |
(Boolean) Allow insecure SSL requests to nova |
nova_ca_certificates_file = None |
(String) Location of ca certificates file to use for nova client requests. |
nova_catalog_admin_info = compute:Compute Service:adminURL |
(String) Same as nova_catalog_info, but for admin endpoint. |
nova_catalog_info = compute:Compute Service:publicURL |
(String) Match this value when searching for nova in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> |
nova_endpoint_admin_template = None |
(String) Same as nova_endpoint_template, but for admin endpoint. |
nova_endpoint_template = None |
(String) Override service catalog lookup with template for nova endpoint e.g. http://localhost:8774/v2/%(project_id)s |
os_region_name = None |
(String) Region name of this node |
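The catalog and endpoint options above offer two ways to locate nova. A short sketch of both styles, using the `<service_type>:<service_name>:<endpoint_type>` format from the nova_catalog_info description; the host and port in the template are placeholders:

```ini
[DEFAULT]
# Look nova up in the service catalog...
nova_catalog_info = compute:Compute Service:publicURL
# ...or bypass the catalog entirely with an endpoint template.
nova_endpoint_template = http://localhost:8774/v2/%(project_id)s
```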
Configuration option = Default value | Description |
---|---|
[coordination] | |
backend_url = file://$state_path |
(String) The backend URL to use for distributed coordination. |
heartbeat = 1.0 |
(Floating point) Number of seconds between heartbeats for distributed coordination. |
initial_reconnect_backoff = 0.1 |
(Floating point) Initial number of seconds to wait after failed reconnection. |
max_reconnect_backoff = 60.0 |
(Floating point) Maximum number of seconds between sequential reconnection retries. |
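The [coordination] options above can be combined as follows; this sketch simply restates the defaults, where the file:// backend keeps coordination state on local disk (suitable for a single node, an assumption for this example):

```ini
[coordination]
# Local file-based coordination; a shared backend is needed for multi-node setups.
backend_url = file://$state_path
heartbeat = 1.0
initial_reconnect_backoff = 0.1
max_reconnect_backoff = 60.0
```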
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
trace_flags = None |
(List) List of options that control which trace info is written to the DEBUG log level to assist developers. Valid values are method and api. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
drbdmanage_devs_on_controller = True |
(Boolean) If set, the c-vol node will receive a usable /dev/drbdX device, even if the actual data is stored on other nodes only. This is useful for debugging, maintenance, and for doing the iSCSI export from the c-vol node. |
drbdmanage_disk_options = {"c-min-rate": "4M"} |
(String) Disk options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. |
drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"} |
(String) Net options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. |
drbdmanage_redundancy = 1 |
(Integer) Number of nodes that should replicate the data. |
drbdmanage_resize_plugin = drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize |
(String) Volume resize completion wait plugin. |
drbdmanage_resize_policy = {"timeout": "60"} |
(String) Volume resize completion wait policy. |
drbdmanage_resource_options = {"auto-promote-timeout": "300"} |
(String) Resource options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. |
drbdmanage_resource_plugin = drbdmanage.plugins.plugins.wait_for.WaitForResource |
(String) Resource deployment completion wait plugin. |
drbdmanage_resource_policy = {"ratio": "0.51", "timeout": "60"} |
(String) Resource deployment completion wait policy. |
drbdmanage_snapshot_plugin = drbdmanage.plugins.plugins.wait_for.WaitForSnapshot |
(String) Snapshot completion wait plugin. |
drbdmanage_snapshot_policy = {"count": "1", "timeout": "60"} |
(String) Snapshot completion wait policy. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
check_max_pool_luns_threshold = False |
(Boolean) Report free_capacity_gb as 0 when the limit on the maximum number of pool LUNs is reached. By default, the value is False. |
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml |
(String) Use this file for Cinder EMC plugin config data |
destroy_empty_storage_group = False |
(Boolean) Destroy the storage group when the last LUN is removed from it. By default, the value is False. |
force_delete_lun_in_storagegroup = False |
(Boolean) Delete a LUN even if it is in Storage Groups. By default, the value is False. |
initiator_auto_deregistration = False |
(Boolean) Automatically deregister initiators after the related storage group is destroyed. By default, the value is False. |
initiator_auto_registration = False |
(Boolean) Automatically register initiators. By default, the value is False. |
io_port_list = None |
(List) Comma-separated list of iSCSI or FC ports to be used in Nova or Cinder. |
iscsi_initiators = None |
(String) Mapping between hostname and its iSCSI initiator IP addresses. |
max_luns_per_storage_group = 255 |
(Integer) Default max number of LUNs in a storage group. By default, the value is 255. |
naviseccli_path = None |
(String) Naviseccli Path. |
storage_vnx_authentication_type = global |
(String) VNX authentication scope type. By default, the value is global. |
storage_vnx_pool_names = None |
(List) Comma-separated list of storage pool names to be used. |
storage_vnx_security_file_dir = None |
(String) Directory path that contains the VNX security file. Make sure the security file is generated first. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml |
(String) Config file for the Cinder eternus_dx volume driver |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
flashsystem_connection_protocol = FC |
(String) Connection protocol should be FC. (Default is FC.) |
flashsystem_iscsi_portid = 0 |
(Integer) Default iSCSI Port ID of FlashSystem. (Default port is 0.) |
flashsystem_multihostmap_enabled = True |
(Boolean) Allows vdisk to multi host mapping. (Default is True) |
flashsystem_multipath_enabled = False |
(Boolean) DEPRECATED: This option no longer has any effect. It is deprecated and will be removed in the next release. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hgst_net = Net 1 (IPv4) |
(String) Space network name to use for data transfer |
hgst_redundancy = 0 |
(String) Should spaces be redundantly stored (1/0) |
hgst_space_group = disk |
(String) Group to own created spaces |
hgst_space_mode = 0600 |
(String) UNIX mode for created spaces |
hgst_space_user = root |
(String) User to own created spaces |
hgst_storage_servers = os:gbd0 |
(String) Comma-separated list of Space storage servers:devices, for example: os1_stor:gbd0,os2_stor:gbd0 |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hpelefthand_api_url = None |
(String) HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos |
hpelefthand_clustername = None |
(String) HPE LeftHand cluster name |
hpelefthand_debug = False |
(Boolean) Enable HTTP debugging to LeftHand |
hpelefthand_iscsi_chap_enabled = False |
(Boolean) Configure CHAP authentication for iSCSI connections (Default: Disabled) |
hpelefthand_password = None |
(String) HPE LeftHand Super user password |
hpelefthand_ssh_port = 16022 |
(Port number) Port number of SSH service. |
hpelefthand_username = None |
(String) HPE LeftHand Super user username |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hpexp_async_copy_check_interval = 10 |
(Integer) Interval to check copy asynchronously |
hpexp_compute_target_ports = None |
(List) Target port names of compute node for host group or iSCSI target |
hpexp_copy_check_interval = 3 |
(Integer) Interval to check copy |
hpexp_copy_speed = 3 |
(Integer) Copy speed of storage system |
hpexp_default_copy_method = FULL |
(String) Default copy method of the storage system. There are two valid values: “FULL” specifies a full copy; “THIN” specifies a thin copy. Default value is “FULL” |
hpexp_group_request = False |
(Boolean) Request for creating host group or iSCSI target |
hpexp_horcm_add_conf = True |
(Boolean) Add to HORCM configuration |
hpexp_horcm_name_only_discovery = False |
(Boolean) Only discover a specific name of host group or iSCSI target |
hpexp_horcm_numbers = 200, 201 |
(List) Instance numbers for HORCM |
hpexp_horcm_resource_name = meta_resource |
(String) Resource group name of storage system for HORCM |
hpexp_horcm_user = None |
(String) Username of storage system for HORCM |
hpexp_ldev_range = None |
(String) Logical device range of storage system |
hpexp_pool = None |
(String) Pool of storage system |
hpexp_storage_cli = None |
(String) Type of storage command line interface |
hpexp_storage_id = None |
(String) ID of storage system |
hpexp_target_ports = None |
(List) Target port names for host group or iSCSI target |
hpexp_thin_pool = None |
(String) Thin pool of storage system |
hpexp_zoning_request = False |
(Boolean) Request for FC Zone creating host group |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml |
(String) The configuration file for the Cinder Huawei driver. |
hypermetro_devices = None |
(String) The remote device hypermetro will use. |
metro_domain_name = None |
(String) The remote metro device domain name. |
metro_san_address = None |
(String) The remote metro device request url. |
metro_san_password = None |
(String) The remote metro device san password. |
metro_san_user = None |
(String) The remote metro device san user. |
metro_storage_pools = None |
(String) The remote metro device pool names. |
Configuration option = Default value | Description |
---|---|
[hyperv] | |
force_volumeutils_v1 = False |
(Boolean) DEPRECATED: Force V1 volume utility class |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allowed_direct_url_schemes = |
(List) A list of url schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file]. |
glance_api_insecure = False |
(Boolean) Allow insecure SSL (https) requests to glance (https will be used but cert validation will not be performed). |
glance_api_servers = None |
(List) A list of the URLs of glance API servers available to cinder ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to http. |
glance_api_ssl_compression = False |
(Boolean) Enables or disables negotiation of SSL layer compression. In some cases disabling compression can improve data throughput, such as when high network bandwidth is available and you use compressed image formats like qcow2. |
glance_api_version = 1 |
(Integer) Version of the glance API to use |
glance_ca_certificates_file = None |
(String) Location of ca certificates file to use for glance client requests. |
glance_catalog_info = image:glance:publicURL |
(String) Info to match when looking for glance in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided. |
glance_core_properties = checksum, container_format, disk_format, image_name, image_id, min_disk, min_ram, name, size |
(List) Default core properties of image |
glance_num_retries = 0 |
(Integer) Number of retries when downloading an image from glance |
glance_request_timeout = None |
(Integer) http/https timeout value for glance operations. If no value (None) is supplied here, the glanceclient default value is used. |
image_conversion_dir = $state_path/conversion |
(String) Directory used for temporary storage during image conversion |
image_upload_use_cinder_backend = False |
(Boolean) If set to True, upload-to-image in raw format will create a cloned volume and register its location to the image service, instead of uploading the volume content. The cinder backend and locations support must be enabled in the image service, and glance_api_version must be set to 2. |
image_upload_use_internal_tenant = False |
(Boolean) If set to True, the image volume created by upload-to-image will be placed in the internal tenant. Otherwise, the image volume is created in the current context’s tenant. |
image_volume_cache_enabled = False |
(Boolean) Enable the image volume cache for this backend. |
image_volume_cache_max_count = 0 |
(Integer) Max number of entries allowed in the image volume cache. 0 => unlimited. |
image_volume_cache_max_size_gb = 0 |
(Integer) Max size of the image volume cache for this backend in GB. 0 => unlimited. |
use_multipath_for_image_xfer = False |
(Boolean) Attach/detach volumes in Cinder using multipath for volume-to-image and image-to-volume transfers. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
infortrend_cli_max_retries = 5 |
(Integer) Maximum number of CLI retries. Default is 5. |
infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar |
(String) The Infortrend CLI absolute path. By default, it is at /opt/bin/Infortrend/raidcmd_ESDS10.jar |
infortrend_cli_timeout = 30 |
(Integer) Default timeout for CLI copy operations in minutes. Supported operations: migrate volume, create cloned volume, and create volume from snapshot. By default, it is 30 minutes. |
infortrend_pools_name = |
(String) Comma-separated list of Infortrend raid pool names. |
infortrend_provisioning = full |
(String) Let the volume use a specific provisioning type. By default, full provisioning is used. The supported options are full and thin. |
infortrend_slots_a_channels_id = 0,1,2,3,4,5,6,7 |
(String) Comma-separated list of Infortrend raid channel IDs on Slot A for OpenStack usage. By default, channels 0-7 are used. |
infortrend_slots_b_channels_id = 0,1,2,3,4,5,6,7 |
(String) Comma-separated list of Infortrend raid channel IDs on Slot B for OpenStack usage. By default, channels 0-7 are used. |
infortrend_tiering = 0 |
(String) Let the volume use a specific tiering level. By default, it is level 0. The supported levels are 0, 2, 3, and 4. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nas_host = |
(String) IP address or Hostname of NAS system. |
nas_login = admin |
(String) User name to connect to NAS system. |
nas_mount_options = None |
(String) Options used to mount the storage backend file system where Cinder volumes are stored. |
nas_password = |
(String) Password to connect to NAS system. |
nas_private_key = |
(String) Filename of private key to use for SSH authentication. |
nas_secure_file_operations = auto |
(String) Allow network-attached storage systems to operate in a secure environment where root level access is not permitted. If set to False, access is as the root user and insecure. If set to True, access is not as root. If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. |
nas_secure_file_permissions = auto |
(String) Set more secure file permissions on network-attached storage volume files to restrict broad other/world access. If set to False, volumes are created with open permissions. If set to True, volumes are created with permissions for the cinder user and group (660). If set to auto, a check is done to determine if this is a new installation: True is used if so, otherwise False. Default is auto. |
nas_share_path = |
(String) Path to the share to use for storing Cinder volumes. For example: “/srv/export1” for an NFS server export available at 10.0.5.10:/srv/export1 . |
nas_ssh_port = 22 |
(Port number) SSH port to use to connect to NAS system. |
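The NAS options above fit together as in this sketch; the host address and share path are hypothetical placeholders:

```ini
[DEFAULT]
# Hypothetical NAS endpoint and export
nas_host = 192.0.2.20
nas_login = admin
nas_ssh_port = 22
nas_share_path = /srv/export1
# auto chooses True on a new installation, False otherwise
nas_secure_file_operations = auto
nas_secure_file_permissions = auto
```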
Configuration option = Default value | Description |
---|---|
[profiler] | |
connection_string = messaging:// |
(String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. |
enabled = False |
(Boolean) Enables the profiling for all services on this node. Default value is False (fully disables the profiling feature). |
hmac_keys = SECRET_KEY |
(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
trace_sqlalchemy = False |
(Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced). |
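As the hmac_keys description notes, both enabled and hmac_keys must be set for profiling to take effect. A minimal [profiler] sketch; the key value is a placeholder and should be a random secret shared across services so traces can span projects:

```ini
[profiler]
enabled = True
# Placeholder secret; use the same key in every service to get a full trace.
hmac_keys = SECRET_KEY
connection_string = messaging://
trace_sqlalchemy = False
```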
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
pure_api_token = None |
(String) REST API authorization token. |
pure_automatic_max_oversubscription_ratio = True |
(Boolean) Automatically determine an oversubscription ratio based on the current total data reduction values. If used this calculated value will override the max_over_subscription_ratio config option. |
pure_eradicate_on_delete = False |
(Boolean) When enabled, all Pure volumes, snapshots, and protection groups will be eradicated at the time of deletion in Cinder. Data will NOT be recoverable after a delete with this set to True! When disabled, volumes and snapshots will go into pending eradication state and can be recovered. |
pure_replica_interval_default = 900 |
(Integer) Snapshot replication interval in seconds. |
pure_replica_retention_long_term_default = 7 |
(Integer) Retain snapshots per day on target for this time (in days.) |
pure_replica_retention_long_term_per_day_default = 3 |
(Integer) Retain how many snapshots for each day. |
pure_replica_retention_short_term_default = 14400 |
(Integer) Retain all snapshots on target for this time (in seconds.) |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
max_age = 0 |
(Integer) Number of seconds between subsequent usage refreshes |
quota_backup_gigabytes = 1000 |
(Integer) Total amount of storage, in gigabytes, allowed for backups per project |
quota_backups = 10 |
(Integer) Number of volume backups allowed per project |
quota_consistencygroups = 10 |
(Integer) Number of consistencygroups allowed per project |
quota_driver = cinder.quota.DbQuotaDriver |
(String) Default driver to use for quota checks |
quota_gigabytes = 1000 |
(Integer) Total amount of storage, in gigabytes, allowed for volumes and snapshots per project |
quota_groups = 10 |
(Integer) Number of groups allowed per project |
quota_snapshots = 10 |
(Integer) Number of volume snapshots allowed per project |
quota_volumes = 10 |
(Integer) Number of volumes allowed per project |
reservation_expire = 86400 |
(Integer) Number of seconds until a reservation expires |
use_default_quota_class = True |
(Boolean) Enables or disables use of default quota class with default quota. |
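The quota options above are plain per-project limits in [DEFAULT]; this sketch overrides a few of them with illustrative values, not recommendations:

```ini
[DEFAULT]
# Per-project limits; values here are examples only
quota_volumes = 20
quota_snapshots = 20
quota_gigabytes = 2000
quota_backups = 10
quota_backup_gigabytes = 1000
```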
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
san_clustername = |
(String) Cluster name to use for creating volumes |
san_ip = |
(String) IP address of SAN controller |
san_is_local = False |
(Boolean) Execute commands locally instead of over SSH; use if the volume service is running on the SAN device |
san_login = admin |
(String) Username for SAN controller |
san_password = |
(String) Password for SAN controller |
san_private_key = |
(String) Filename of private key to use for SSH authentication |
san_ssh_port = 22 |
(Port number) SSH port to use with SAN |
san_thin_provision = True |
(Boolean) Use thin provisioning for SAN volumes? |
ssh_conn_timeout = 30 |
(Integer) SSH connection timeout in seconds |
ssh_max_pool_conn = 5 |
(Integer) Maximum ssh connections in the pool |
ssh_min_pool_conn = 1 |
(Integer) Minimum ssh connections in the pool |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
filter_function = None |
(String) String representation for an equation that will be used to filter hosts. Only used when the driver filter is set to be used by the Cinder scheduler. |
goodness_function = None |
(String) String representation for an equation that will be used to determine the goodness of a host. Only used when the goodness weigher is set to be used by the Cinder scheduler. |
scheduler_default_filters = AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter |
(List) Which filter class names to use for filtering hosts when not specified in the request. |
scheduler_default_weighers = CapacityWeigher |
(List) Which weigher class names to use for weighing hosts. |
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler |
(String) Default scheduler driver to use |
scheduler_host_manager = cinder.scheduler.host_manager.HostManager |
(String) The scheduler host manager class to use |
scheduler_json_config_location = |
(String) Absolute path to scheduler configuration JSON file. |
scheduler_manager = cinder.scheduler.manager.SchedulerManager |
(String) Full class name for the Manager for scheduler |
scheduler_max_attempts = 3 |
(Integer) Maximum number of attempts to schedule a volume |
scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler |
(String) Which handler to use for selecting the host/pool after weighing |
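The scheduler options above combine as in this sketch. The filter and weigher lists restate the defaults; the backend section name and the filter_function/goodness_function equations are hypothetical illustrations and only take effect when the driver filter and goodness weigher are enabled:

```ini
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter
scheduler_default_weighers = CapacityWeigher

# Hypothetical per-backend equations for the driver filter/goodness weigher
[lvmdriver-1]
filter_function = "volume.size < 500"
goodness_function = "(capabilities.total_volumes < 250) * 100"
```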
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
scst_target_driver = iscsi |
(String) The SCST target implementation can choose from multiple SCST target drivers. |
scst_target_iqn_name = None |
(String) Certain iSCSI targets have predefined target names; the SCST target driver uses this name. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allocated_capacity_weight_multiplier = -1.0 |
(Floating point) Multiplier used for weighing allocated capacity. Positive numbers favor stacking over spreading. |
capacity_weight_multiplier = 1.0 |
(Floating point) Multiplier used for weighing free capacity. Negative numbers favor stacking over spreading. |
enabled_backends = None |
(List) A list of backend names to use. These backend names should be backed by a unique [CONFIG] group with its options |
iscsi_helper = tgtadm |
(String) iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise Target, iscsictl for Chelsio iSCSI Target or fake for testing. |
iscsi_iotype = fileio |
(String) Sets the behavior of the iSCSI target to perform either blockio or fileio. Optionally, auto can be set and Cinder will autodetect the type of backing device |
iscsi_ip_address = $my_ip |
(String) The IP address that the iSCSI daemon is listening on |
iscsi_port = 3260 |
(Port number) The port that the iSCSI daemon is listening on |
iscsi_protocol = iscsi |
(String) Determines the iSCSI protocol for new iSCSI volumes, created with tgtadm or lioadm target helpers. In order to enable RDMA, this parameter should be set with the value “iser”. The supported iSCSI protocol values are “iscsi” and “iser”. |
iscsi_target_flags = |
(String) Sets the target-specific flags for the iSCSI target. Only used for tgtadm to specify backing device flags using bsoflags option. The specified string is passed as is to the underlying tool. |
iscsi_target_prefix = iqn.2010-10.org.openstack: |
(String) Prefix for iSCSI volumes |
iscsi_write_cache = on |
(String) Sets the behavior of the iSCSI target to perform either write-back (on) or write-through (off). This parameter is valid if iscsi_helper is set to tgtadm. |
iser_helper = tgtadm |
(String) The name of the iSER target user-land tool to use |
iser_ip_address = $my_ip |
(String) The IP address that the iSER daemon is listening on |
iser_port = 3260 |
(Port number) The port that the iSER daemon is listening on |
iser_target_prefix = iqn.2010-10.org.openstack: |
(String) Prefix for iSER volumes |
migration_create_volume_timeout_secs = 300 |
(Integer) Timeout for creating the volume to migrate to when performing volume migration (seconds) |
num_iser_scan_tries = 3 |
(Integer) The maximum number of times to rescan iSER target to find volume
num_volume_device_scan_tries = 3 |
(Integer) The maximum number of times to rescan targets to find volume |
volume_backend_name = None |
(String) The backend name for a given driver implementation |
volume_clear = zero |
(String) Method used to wipe old volumes |
volume_clear_ionice = None |
(String) The flag to pass to ionice to alter the i/o priority of the process used to zero a volume after deletion, for example “-c3” for idle only priority. |
volume_clear_size = 0 |
(Integer) Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 => all
volume_copy_blkio_cgroup_name = cinder-volume-copy |
(String) The blkio cgroup name to be used to limit bandwidth of volume copy |
volume_copy_bps_limit = 0 |
(Integer) The upper limit of bandwidth of volume copy. 0 => unlimited |
volume_dd_blocksize = 1M |
(String) The default block size used when copying/clearing volumes |
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver |
(String) Driver to use for volume creation |
volume_manager = cinder.volume.manager.VolumeManager |
(String) Full class name for the Manager for volume |
volume_service_inithost_offload = False |
(Boolean) Offload pending volume delete during volume service startup |
volume_usage_audit_period = month |
(String) Time period for which to generate volume usages. The options are hour, day, month, or year. |
volumes_dir = $state_path/volumes |
(String) Volume configuration file storage directory |
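As an illustration, several of the options above can be combined in the `[DEFAULT]` section of `cinder.conf`. This is a minimal sketch; the IP address and sizes below are placeholder values, not recommendations:

```ini
[DEFAULT]
# Use the LIO user-land tool instead of the default tgtadm
iscsi_helper = lioadm
# Address and port the iSCSI daemon listens on (placeholder IP)
iscsi_ip_address = 192.0.2.10
iscsi_port = 3260
# Wipe only the first 100 MiB of deleted volumes instead of the whole device
volume_clear = zero
volume_clear_size = 100
```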
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
tegile_default_pool = None |
(String) Create volumes in this pool |
tegile_default_project = None |
(String) Create volumes in this project |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cloned_volume_same_az = True |
(Boolean) Ensure that the new volumes are the same AZ as snapshot or source volume |
All of the files in this section can be found in the /etc/cinder directory.
The cinder.conf file is installed in /etc/cinder by default.
When you manually install the Block Storage service, the options in the
cinder.conf file are set to default values.
The cinder.conf file contains most of the options needed to configure
the Block Storage service. You can generate the latest configuration file
by using the tox environment provided by the Block Storage service. Here is a
sample configuration file:
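(To regenerate such a sample from a checkout of the cinder source tree, the project provides a config-generator tox environment; the environment name below is an assumption, so check the project's tox.ini.)

```console
$ tox -e genconfig
```

The generated file typically appears under etc/cinder/ in the source tree.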
[DEFAULT]
#
# From cinder
#
# Backup metadata version to be used when backing up volume metadata. If this
# number is bumped, make sure the service doing the restore supports the new
# version. (integer value)
#backup_metadata_version = 2
# The number of chunks or objects, for which one Ceilometer notification will
# be sent (integer value)
#backup_object_number_per_notification = 10
# Interval, in seconds, between two progress notifications reporting the backup
# status (integer value)
#backup_timer_interval = 120
# Name of this cluster. Used to group volume hosts that share the same backend
# configurations to work in HA Active-Active mode. Active-Active is not yet
# supported. (string value)
#cluster = <None>
# Management IP address of HNAS. This can be any IP in the admin address on
# HNAS or the SMU IP. (IP address value)
#hnas_mgmt_ip0 = <None>
# Command to communicate to HNAS. (string value)
#hnas_ssc_cmd = ssc
# HNAS username. (string value)
#hnas_username = <None>
# HNAS password. (string value)
#hnas_password = <None>
# Port to be used for SSH authentication. (port value)
# Minimum value: 0
# Maximum value: 65535
#hnas_ssh_port = 22
# Path to the SSH private key used to authenticate in HNAS SMU. (string value)
#hnas_ssh_private_key = <None>
# The IP of the HNAS cluster admin. Required only for HNAS multi-cluster
# setups. (string value)
#hnas_cluster_admin_ip0 = <None>
# Service 0 volume type (string value)
#hnas_svc0_volume_type = <None>
# Service 0 HDP (string value)
#hnas_svc0_hdp = <None>
# Service 1 volume type (string value)
#hnas_svc1_volume_type = <None>
# Service 1 HDP (string value)
#hnas_svc1_hdp = <None>
# Service 2 volume type (string value)
#hnas_svc2_volume_type = <None>
# Service 2 HDP (string value)
#hnas_svc2_hdp = <None>
# Service 3 volume type (string value)
#hnas_svc3_volume_type = <None>
# Service 3 HDP (string value)
#hnas_svc3_hdp = <None>
# The maximum number of items that a collection resource returns in a single
# response (integer value)
#osapi_max_limit = 1000
# Base URL that will be presented to users in links to the OpenStack Volume API
# (string value)
# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
#osapi_volume_base_URL = <None>
# Volume filter options which non-admin user could use to query volumes.
# Default values are: ['name', 'status', 'metadata', 'availability_zone'
# ,'bootable', 'group_id'] (list value)
#query_volume_filters = name,status,metadata,availability_zone,bootable,group_id
# Ceph configuration file to use. (string value)
#backup_ceph_conf = /etc/ceph/ceph.conf
# The Ceph user to connect with. Default here is to use the same user as for
# Cinder volumes. If not using cephx this should be set to None. (string value)
#backup_ceph_user = cinder
# The chunk size, in bytes, that a backup is broken into before transfer to the
# Ceph object store. (integer value)
#backup_ceph_chunk_size = 134217728
# The Ceph pool where volume backups are stored. (string value)
#backup_ceph_pool = backups
# RBD stripe unit to use when creating a backup image. (integer value)
#backup_ceph_stripe_unit = 0
# RBD stripe count to use when creating a backup image. (integer value)
#backup_ceph_stripe_count = 0
# If True, always discard excess bytes when restoring volumes i.e. pad with
# zeroes. (boolean value)
#restore_discard_excess_bytes = true
# File with the list of available smbfs shares. (string value)
#smbfs_shares_config = /etc/cinder/smbfs_shares
# The path of the automatically generated file containing information about
# volume disk space allocation. (string value)
#smbfs_allocation_info_file_path = $state_path/allocation_data
# Default format that will be used when creating volumes if no volume format is
# specified. (string value)
# Allowed values: raw, qcow2, vhd, vhdx
#smbfs_default_volume_format = qcow2
# Create volumes as sparsed files which take no space rather than regular files
# when using raw format, in which case volume creation takes a lot of time.
# (boolean value)
#smbfs_sparsed_volumes = true
# Percent of ACTUAL usage of the underlying volume before no new volumes can be
# allocated to the volume destination. (floating point value)
#smbfs_used_ratio = 0.95
# This will compare the allocated to available space on the volume destination.
# If the ratio exceeds this number, the destination will no longer be valid.
# (floating point value)
#smbfs_oversub_ratio = 1.0
# Base dir containing mount points for smbfs shares. (string value)
#smbfs_mount_point_base = $state_path/mnt
# Mount options passed to the smbfs client. See mount.cifs man page for
# details. (string value)
#smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775
# Compression algorithm (None to disable) (string value)
#backup_compression_algorithm = zlib
# Use thin provisioning for SAN volumes? (boolean value)
#san_thin_provision = true
# IP address of SAN controller (string value)
#san_ip =
# Username for SAN controller (string value)
#san_login = admin
# Password for SAN controller (string value)
#san_password =
# Filename of private key to use for SSH authentication (string value)
#san_private_key =
# Cluster name to use for creating volumes (string value)
#san_clustername =
# SSH port to use with SAN (port value)
# Minimum value: 0
# Maximum value: 65535
#san_ssh_port = 22
# Execute commands locally instead of over SSH; use if the volume service is
# running on the SAN device (boolean value)
#san_is_local = false
# SSH connection timeout in seconds (integer value)
#ssh_conn_timeout = 30
# Minimum ssh connections in the pool (integer value)
#ssh_min_pool_conn = 1
# Maximum ssh connections in the pool (integer value)
#ssh_max_pool_conn = 5
# DEPRECATED: Legacy configuration file for HNAS NFS Cinder plugin. This is not
# needed if you set all configuration options in cinder.conf (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs_conf.xml
# Sets the value of TCP_KEEPALIVE (True/False) for each server socket. (boolean
# value)
#tcp_keepalive = true
# Sets the value of TCP_KEEPINTVL in seconds for each server socket. Not
# supported on OS X. (integer value)
#tcp_keepalive_interval = <None>
# Sets the value of TCP_KEEPCNT for each server socket. Not supported on OS X.
# (integer value)
#tcp_keepalive_count = <None>
# Option to enable strict host key checking. When set to "True" Cinder will
# only connect to systems with a host key present in the configured
# "ssh_hosts_key_file". When set to "False" the host key will be saved upon
# first connection and used for subsequent connections. Default=False (boolean
# value)
#strict_ssh_host_key_policy = false
# File containing SSH host keys for the systems with which Cinder needs to
# communicate. OPTIONAL: Default=$state_path/ssh_known_hosts (string value)
#ssh_hosts_key_file = $state_path/ssh_known_hosts
# The storage family type used on the storage system; valid values are
# ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using
# clustered Data ONTAP, or eseries for using E-Series. (string value)
# Allowed values: ontap_7mode, ontap_cluster, eseries
#netapp_storage_family = ontap_cluster
# The storage protocol to be used on the data path with the storage system.
# (string value)
# Allowed values: iscsi, fc, nfs
#netapp_storage_protocol = <None>
# The hostname (or IP address) for the storage system or proxy server. (string
# value)
#netapp_server_hostname = <None>
# The TCP port to use for communication with the storage system or proxy
# server. If not specified, Data ONTAP drivers will use 80 for HTTP and 443 for
# HTTPS; E-Series will use 8080 for HTTP and 8443 for HTTPS. (integer value)
#netapp_server_port = <None>
# The transport protocol used when communicating with the storage system or
# proxy server. (string value)
# Allowed values: http, https
#netapp_transport_type = http
# Administrative user account name used to access the storage system or proxy
# server. (string value)
#netapp_login = <None>
# Password for the administrative user account specified in the netapp_login
# option. (string value)
#netapp_password = <None>
# This option specifies the virtual storage server (Vserver) name on the
# storage cluster on which provisioning of block storage volumes should occur.
# (string value)
#netapp_vserver = <None>
# The vFiler unit on which provisioning of block storage volumes will be done.
# This option is only used by the driver when connecting to an instance with a
# storage family of Data ONTAP operating in 7-Mode. Only use this option when
# utilizing the MultiStore feature on the NetApp storage system. (string value)
#netapp_vfiler = <None>
# The name of the config.conf stanza for a Data ONTAP (7-mode) HA partner.
# This option is only used by the driver when connecting to an instance with a
# storage family of Data ONTAP operating in 7-Mode, and it is required if the
# storage protocol selected is FC. (string value)
#netapp_partner_backend_name = <None>
# The quantity to be multiplied by the requested volume size to ensure enough
# space is available on the virtual storage server (Vserver) to fulfill the
# volume creation request. Note: this option is deprecated and will be removed
# in favor of "reserved_percentage" in the Mitaka release. (floating point
# value)
#netapp_size_multiplier = 1.2
# This option determines if storage space is reserved for LUN allocation. If
# enabled, LUNs are thick provisioned. If space reservation is disabled,
# storage space is allocated on demand. (string value)
# Allowed values: enabled, disabled
#netapp_lun_space_reservation = enabled
# If the percentage of available space for an NFS share has dropped below the
# value specified by this option, the NFS image cache will be cleaned. (integer
# value)
#thres_avl_size_perc_start = 20
# When the percentage of available space on an NFS share has reached the
# percentage specified by this option, the driver will stop clearing files from
# the NFS image cache that have not been accessed in the last M minutes, where
# M is the value of the expiry_thres_minutes configuration option. (integer
# value)
#thres_avl_size_perc_stop = 60
# This option specifies the threshold for last access time for images in the
# NFS image cache. When a cache cleaning cycle begins, images in the cache that
# have not been accessed in the last M minutes, where M is the value of this
# parameter, will be deleted from the cache to create free space on the NFS
# share. (integer value)
#expiry_thres_minutes = 720
# This option is used to specify the path to the E-Series proxy application on
# a proxy server. The value is combined with the value of the
# netapp_transport_type, netapp_server_hostname, and netapp_server_port options
# to create the URL used by the driver to connect to the proxy application.
# (string value)
#netapp_webservice_path = /devmgr/v2
# This option is only utilized when the storage family is configured to
# eseries. This option is used to restrict provisioning to the specified
# controllers. Specify the value of this option to be a comma separated list of
# controller hostnames or IP addresses to be used for provisioning. (string
# value)
#netapp_controller_ips = <None>
# Password for the NetApp E-Series storage array. (string value)
#netapp_sa_password = <None>
# This option specifies whether the driver should allow operations that require
# multiple attachments to a volume. An example would be live migration of
# servers that have volumes attached. When enabled, this backend is limited to
# 256 total volumes in order to guarantee volumes can be accessed by more than
# one host. (boolean value)
#netapp_enable_multiattach = false
# This option specifies the path of the NetApp copy offload tool binary. Ensure
# that the binary has execute permissions set which allow the effective user of
# the cinder-volume process to execute the file. (string value)
#netapp_copyoffload_tool_path = <None>
# This option defines the type of operating system that will access a LUN
# exported from Data ONTAP; it is assigned to the LUN at the time it is
# created. (string value)
#netapp_lun_ostype = <None>
# This option defines the type of operating system for all initiators that can
# access a LUN. This information is used when mapping LUNs to individual hosts
# or groups of hosts. (string value)
# Deprecated group/name - [DEFAULT]/netapp_eseries_host_type
#netapp_host_type = <None>
# This option is used to restrict provisioning to the specified pools. Specify
# the value of this option to be a regular expression which will be applied to
# the names of objects from the storage backend which represent pools in
# Cinder. This option is only utilized when the storage protocol is configured
# to use iSCSI or FC. (string value)
# Deprecated group/name - [DEFAULT]/netapp_volume_list
# Deprecated group/name - [DEFAULT]/netapp_storage_pools
#netapp_pool_name_search_pattern = (.+)
# Multi opt of dictionaries to represent the aggregate mapping between source
# and destination back ends when using whole back end replication. For every
# source aggregate associated with a cinder pool (NetApp FlexVol), you would
# need to specify the destination aggregate on the replication target device. A
# replication target device is configured with the configuration option
# replication_device. Specify this option as many times as you have replication
# devices. Each entry takes the standard dict config form:
# netapp_replication_aggregate_map =
# backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,...
# (dict value)
#netapp_replication_aggregate_map = <None>
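# For example, assuming a replication_device section named
# "replication_target_1" (hypothetical name) and two source aggregates, the
# mapping could look like:
# netapp_replication_aggregate_map = backend_id:replication_target_1,src_aggr_1:dest_aggr_1,src_aggr_2:dest_aggr_2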
# The maximum time in seconds to wait for existing SnapMirror transfers to
# complete before aborting during a failover. (integer value)
# Minimum value: 0
#netapp_snapmirror_quiesce_timeout = 3600
# Configure CHAP authentication for iSCSI connections (Default: Enabled)
# (boolean value)
#storwize_svc_iscsi_chap_enabled = true
# Base dir containing mount point for gluster share. (string value)
#glusterfs_backup_mount_point = $state_path/backup_mount
# GlusterFS share in <hostname|ipv4addr|ipv6addr>:<gluster_vol_name> format.
# Eg: 1.2.3.4:backup_vol (string value)
#glusterfs_backup_share = <None>
# Rest Gateway IP or FQDN for Scaleio (string value)
#coprhd_scaleio_rest_gateway_host = None
# Rest Gateway Port for Scaleio (port value)
# Minimum value: 0
# Maximum value: 65535
#coprhd_scaleio_rest_gateway_port = 4984
# Username for Rest Gateway (string value)
#coprhd_scaleio_rest_server_username = <None>
# Rest Gateway Password (string value)
#coprhd_scaleio_rest_server_password = <None>
# verify server certificate (boolean value)
#scaleio_verify_server_certificate = false
# Server certificate path (string value)
#scaleio_server_certificate_path = <None>
# Volume prefix for the backup id when backing up to TSM (string value)
#backup_tsm_volume_prefix = backup
# TSM password for the running username (string value)
#backup_tsm_password = password
# Enable or Disable compression for backups (boolean value)
#backup_tsm_compression = true
# config file for cinder eternus_dx volume driver (string value)
#cinder_eternus_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml
# Specifies the path of the GPFS directory where Block Storage volume and
# snapshot files are stored. (string value)
#gpfs_mount_point_base = <None>
# Specifies the path of the Image service repository in GPFS. Leave undefined
# if not storing images in GPFS. (string value)
#gpfs_images_dir = <None>
# Specifies the type of image copy to be used. Set this when the Image service
# repository also uses GPFS so that image files can be transferred efficiently
# from the Image service to the Block Storage service. There are two valid
# values: "copy" specifies that a full copy of the image is made;
# "copy_on_write" specifies that copy-on-write optimization strategy is used
# and unmodified blocks of the image file are shared efficiently. (string
# value)
# Allowed values: copy, copy_on_write, <None>
#gpfs_images_share_mode = <None>
# Specifies an upper limit on the number of indirections required to reach a
# specific block due to snapshots or clones. A lengthy chain of copy-on-write
# snapshots or clones can have a negative impact on performance, but improves
# space utilization. 0 indicates unlimited clone depth. (integer value)
#gpfs_max_clone_depth = 0
# Specifies that volumes are created as sparse files which initially consume no
# space. If set to False, the volume is created as a fully allocated file, in
# which case, creation may take a significantly longer time. (boolean value)
#gpfs_sparse_volumes = true
# Specifies the storage pool that volumes are assigned to. By default, the
# system storage pool is used. (string value)
#gpfs_storage_pool = system
# Main controller IP. (IP address value)
#zteControllerIP0 = <None>
# Slave controller IP. (IP address value)
#zteControllerIP1 = <None>
# Local IP. (IP address value)
#zteLocalIP = <None>
# User name. (string value)
#zteUserName = <None>
# User password. (string value)
#zteUserPassword = <None>
# Virtual block size of pool. Unit : KB. Valid value : 4, 8, 16, 32, 64, 128,
# 256, 512. (integer value)
#zteChunkSize = 4
# Cache readahead size. (integer value)
#zteAheadReadSize = 8
# Cache policy. 0, Write Back; 1, Write Through. (integer value)
#zteCachePolicy = 1
# SSD cache switch. 0, OFF; 1, ON. (integer value)
#zteSSDCacheSwitch = 1
# Pool name list. (list value)
#zteStoragePool =
# Pool volume allocated policy. 0, Auto; 1, High Performance Tier First; 2,
# Performance Tier First; 3, Capacity Tier First. (integer value)
#ztePoolVoAllocatedPolicy = 0
# Pool volume move policy. 0, Auto; 1, Highest Available; 2, Lowest Available;
# 3, No Relocation. (integer value)
#ztePoolVolMovePolicy = 0
# Whether it is a thin volume. (integer value)
#ztePoolVolIsThin = False
# Pool volume init allocated capacity. Unit: KB. (integer value)
#ztePoolVolInitAllocatedCapacity = 0
# Pool volume alarm threshold. [0, 100] (integer value)
#ztePoolVolAlarmThreshold = 0
# Pool volume alarm stop allocated flag. (integer value)
#ztePoolVolAlarmStopAllocatedFlag = 0
# Global backend request timeout, in seconds. (integer value)
#violin_request_timeout = 300
# Storage pools to be used to setup dedup luns only. (Comma separated list)
# (list value)
#violin_dedup_only_pools =
# Storage pools capable of dedup and other luns. (Comma separated list) (list
# value)
#violin_dedup_capable_pools =
# Method of choosing a storage pool for a lun. (string value)
# Allowed values: random, largest, smallest
#violin_pool_allocation_method = random
# Target iSCSI addresses to use. (Comma separated list) (list value)
#violin_iscsi_target_ips =
# IP address of Nexenta SA (string value)
#nexenta_host =
# HTTP port to connect to Nexenta REST API server (integer value)
#nexenta_rest_port = 8080
# Use http or https for REST connection (default auto) (string value)
# Allowed values: http, https, auto
#nexenta_rest_protocol = auto
# User name to connect to Nexenta SA (string value)
#nexenta_user = admin
# Password to connect to Nexenta SA (string value)
#nexenta_password = nexenta
# Nexenta target portal port (integer value)
#nexenta_iscsi_target_portal_port = 3260
# SA Pool that holds all volumes (string value)
#nexenta_volume = cinder
# IQN prefix for iSCSI targets (string value)
#nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder-
# Prefix for iSCSI target groups on SA (string value)
#nexenta_target_group_prefix = cinder/
# Volume group for ns5 (string value)
#nexenta_volume_group = iscsi
# Compression value for new ZFS folders. (string value)
# Allowed values: on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8, gzip-9, lzjb, zle, lz4
#nexenta_dataset_compression = on
# Deduplication value for new ZFS folders. (string value)
# Allowed values: on, off, sha256, verify, sha256,verify
#nexenta_dataset_dedup = off
# Human-readable description for the folder. (string value)
#nexenta_dataset_description =
# Block size for datasets (integer value)
#nexenta_blocksize = 4096
# Block size for datasets (integer value)
#nexenta_ns5_blocksize = 32
# Enables or disables the creation of sparse datasets (boolean value)
#nexenta_sparse = false
# File with the list of available nfs shares (string value)
#nexenta_shares_config = /etc/cinder/nfs_shares
# Base directory that contains NFS share mount points (string value)
#nexenta_mount_point_base = $state_path/mnt
# Enables or disables the creation of volumes as sparsed files that take no
# space. If disabled (False), volume is created as a regular file, which takes
# a long time. (boolean value)
#nexenta_sparsed_volumes = true
# If set to True, cache the NexentaStor appliance volroot option value.
# (boolean value)
#nexenta_nms_cache_volroot = true
# Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best
# compression. (integer value)
#nexenta_rrmgr_compression = 0
# TCP Buffer size in KiloBytes. (integer value)
#nexenta_rrmgr_tcp_buf_size = 4096
# Number of TCP connections. (integer value)
#nexenta_rrmgr_connections = 2
# NexentaEdge logical path of directory to store symbolic links to NBDs (string
# value)
#nexenta_nbd_symlinks_dir = /dev/disk/by-path
# IP address of NexentaEdge management REST API endpoint (string value)
#nexenta_rest_address =
# User name to connect to NexentaEdge (string value)
#nexenta_rest_user = admin
# Password to connect to NexentaEdge (string value)
#nexenta_rest_password = nexenta
# NexentaEdge logical path of bucket for LUNs (string value)
#nexenta_lun_container =
# NexentaEdge iSCSI service name (string value)
#nexenta_iscsi_service =
# NexentaEdge iSCSI Gateway client address for non-VIP service (string value)
#nexenta_client_address =
# NexentaEdge iSCSI LUN object chunk size (integer value)
#nexenta_chunksize = 32768
# Make exception message format errors fatal. (boolean value)
#fatal_exception_format_errors = false
# IP address of this host (string value)
#my_ip = 10.0.2.15
# A list of the URLs of glance API servers available to cinder
# ([http[s]://][hostname|ip]:port). If protocol is not specified it defaults to
# http. (list value)
#glance_api_servers = <None>
# Version of the glance API to use (integer value)
#glance_api_version = 1
# Number of retries when downloading an image from glance (integer value)
# Minimum value: 0
#glance_num_retries = 0
# Allow to perform insecure SSL (https) requests to glance (https will be used
# but cert validation will not be performed). (boolean value)
#glance_api_insecure = false
# Enables or disables negotiation of SSL layer compression. In some cases
# disabling compression can improve data throughput, such as when high network
# bandwidth is available and you use compressed image formats like qcow2.
# (boolean value)
#glance_api_ssl_compression = false
# Location of ca certificates file to use for glance client requests. (string
# value)
#glance_ca_certificates_file = <None>
# http/https timeout value for glance operations. If no value (None) is
# supplied here, the glanceclient default value is used. (integer value)
#glance_request_timeout = <None>
# DEPRECATED: Deploy v1 of the Cinder API. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#enable_v1_api = true
# DEPRECATED: Deploy v2 of the Cinder API. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#enable_v2_api = true
# Deploy v3 of the Cinder API. (boolean value)
#enable_v3_api = true
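# For example, to serve only the v3 API and turn off the deprecated v1 and v2
# APIs, you could set (illustrative):
# enable_v1_api = false
# enable_v2_api = false
# enable_v3_api = true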
# Enables or disables rate limit of the API. (boolean value)
#api_rate_limit = true
# Specify list of extensions to load when using osapi_volume_extension option
# with cinder.api.contrib.select_extensions (list value)
#osapi_volume_ext_list =
# osapi volume extension to load (multi valued)
#osapi_volume_extension = cinder.api.contrib.standard_extensions
# Full class name for the Manager for volume (string value)
#volume_manager = cinder.volume.manager.VolumeManager
# Full class name for the Manager for volume backup (string value)
#backup_manager = cinder.backup.manager.BackupManager
# Full class name for the Manager for scheduler (string value)
#scheduler_manager = cinder.scheduler.manager.SchedulerManager
# Name of this node. This can be an opaque identifier. It is not necessarily a
# host name, FQDN, or IP address. (string value)
#host = openstack-VirtualBox
# Availability zone of this node (string value)
#storage_availability_zone = nova
# Default availability zone for new volumes. If not set, the
# storage_availability_zone option value is used as the default for new
# volumes. (string value)
#default_availability_zone = <None>
# If the requested Cinder availability zone is unavailable, fall back to the
# value of default_availability_zone, then storage_availability_zone, instead
# of failing. (boolean value)
#allow_availability_zone_fallback = false
# Default volume type to use (string value)
#default_volume_type = <None>
# Default group type to use (string value)
#default_group_type = <None>
# Time period for which to generate volume usages. The options are hour, day,
# month, or year. (string value)
#volume_usage_audit_period = month
# Path to the rootwrap configuration file to use for running commands as root
# (string value)
#rootwrap_config = /etc/cinder/rootwrap.conf
# Enable monkey patching (boolean value)
#monkey_patch = false
# List of modules/decorators to monkey patch (list value)
#monkey_patch_modules =
# Maximum time since last check-in for a service to be considered up (integer
# value)
#service_down_time = 60
# The full class name of the volume API class to use (string value)
#volume_api_class = cinder.volume.api.API
# The full class name of the volume backup API class (string value)
#backup_api_class = cinder.backup.api.API
# The strategy to use for auth. Supports noauth or keystone. (string value)
# Allowed values: noauth, keystone
#auth_strategy = keystone
# A list of backend names to use. These backend names should be backed by a
# unique [CONFIG] group with its options (list value)
#enabled_backends = <None>
# Whether snapshots count against gigabyte quota (boolean value)
#no_snapshot_gb_quota = false
# The full class name of the volume transfer API class (string value)
#transfer_api_class = cinder.transfer.api.API
# The full class name of the volume replication API class (string value)
#replication_api_class = cinder.replication.api.API
# The full class name of the consistencygroup API class (string value)
#consistencygroup_api_class = cinder.consistencygroup.api.API
# The full class name of the group API class (string value)
#group_api_class = cinder.group.api.API
# OpenStack privileged account username. Used for requests to other services
# (such as Nova) that require an account with special rights. (string value)
#os_privileged_user_name = <None>
# Password associated with the OpenStack privileged account. (string value)
#os_privileged_user_password = <None>
# Tenant name associated with the OpenStack privileged account. (string value)
#os_privileged_user_tenant = <None>
# Auth URL associated with the OpenStack privileged account. (string value)
#os_privileged_user_auth_url = <None>
# Multiplier used for weighing free capacity. Negative numbers mean to stack vs
# spread. (floating point value)
#capacity_weight_multiplier = 1.0
# Multiplier used for weighing allocated capacity. Positive numbers mean to
# stack vs spread. (floating point value)
#allocated_capacity_weight_multiplier = -1.0
# IP address of sheep daemon. (string value)
#sheepdog_store_address = 127.0.0.1
# Port of sheep daemon. (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000
# Max size for body of a request (integer value)
#osapi_max_request_body_size = 114688
# Set 512 byte emulation on volume creation. (boolean value)
#sf_emulate_512 = true
# Allow tenants to specify QOS on create (boolean value)
#sf_allow_tenant_qos = false
# Create SolidFire accounts with this prefix. Any string can be used here, but
# the string "hostname" is special and will create a prefix using the cinder
# node hostname (previous default behavior). The default is NO prefix. (string
# value)
#sf_account_prefix = <None>
# Create SolidFire volumes with this prefix. Volume names are of the form
# <sf_volume_prefix><cinder-volume-id>. The default is to use a prefix of
# 'UUID-'. (string value)
#sf_volume_prefix = UUID-
# Account name on the SolidFire Cluster to use as owner of template/cache
# volumes (created if does not exist). (string value)
#sf_template_account_name = openstack-vtemplate
# Create an internal cache of copies of images when a bootable volume is created
# to eliminate fetch from glance and qemu-conversion on subsequent calls.
# (boolean value)
#sf_allow_template_caching = true
# Overrides default cluster SVIP with the one specified. This is required for
# deployments that have implemented the use of VLANs for iSCSI networks in
# their cloud. (string value)
#sf_svip = <None>
# Create an internal mapping of volume IDs and account. Optimizes lookups and
# performance at the expense of memory; very large deployments may want to
# consider setting this to False. (boolean value)
#sf_enable_volume_mapping = true
# SolidFire API port. Useful if the device api is behind a proxy on a different
# port. (port value)
# Minimum value: 0
# Maximum value: 65535
#sf_api_port = 443
# Utilize volume access groups on a per-tenant basis. (boolean value)
#sf_enable_vag = false
# Hostname for the CoprHD Instance (string value)
#coprhd_hostname = <None>
# Port for the CoprHD Instance (port value)
# Minimum value: 0
# Maximum value: 65535
#coprhd_port = 4443
# Username for accessing the CoprHD Instance (string value)
#coprhd_username = <None>
# Password for accessing the CoprHD Instance (string value)
#coprhd_password = <None>
# Tenant to utilize within the CoprHD Instance (string value)
#coprhd_tenant = <None>
# Project to utilize within the CoprHD Instance (string value)
#coprhd_project = <None>
# Virtual Array to utilize within the CoprHD Instance (string value)
#coprhd_varray = <None>
# True | False to indicate if the storage array in CoprHD is VMAX or VPLEX
# (boolean value)
#coprhd_emulate_snapshot = false
# The URL of the Swift endpoint (string value)
#backup_swift_url = <None>
# The URL of the Keystone endpoint (string value)
#backup_swift_auth_url = <None>
# Info to match when looking for swift in the service catalog. Format is
# separated values of the form <service_type>:<service_name>:<endpoint_type>.
# Only used if backup_swift_url is unset. (string value)
#swift_catalog_info = object-store:swift:publicURL
# Info to match when looking for keystone in the service catalog. Format is
# separated values of the form <service_type>:<service_name>:<endpoint_type>.
# Only used if backup_swift_auth_url is unset. (string value)
#keystone_catalog_info = identity:Identity Service:publicURL
# Swift authentication mechanism (string value)
#backup_swift_auth = per_user
# Swift authentication version. Specify "1" for auth 1.0, "2" for auth 2.0,
# or "3" for auth 3.0. (string value)
#backup_swift_auth_version = 1
# Swift tenant/account name. Required when connecting to an auth 2.0 system
# (string value)
#backup_swift_tenant = <None>
# Swift user domain name. Required when connecting to an auth 3.0 system
# (string value)
#backup_swift_user_domain = <None>
# Swift project domain name. Required when connecting to an auth 3.0 system
# (string value)
#backup_swift_project_domain = <None>
# Swift project/account name. Required when connecting to an auth 3.0 system
# (string value)
#backup_swift_project = <None>
# Swift user name (string value)
#backup_swift_user = <None>
# Swift key for authentication (string value)
#backup_swift_key = <None>
# The default Swift container to use (string value)
#backup_swift_container = volumebackups
# The size in bytes of Swift backup objects (integer value)
#backup_swift_object_size = 52428800
# The size in bytes that changes are tracked for incremental backups.
# backup_swift_object_size has to be a multiple of backup_swift_block_size.
# (integer value)
#backup_swift_block_size = 32768
# The number of retries to make for Swift operations (integer value)
#backup_swift_retry_attempts = 3
# The backoff time in seconds between Swift retries (integer value)
#backup_swift_retry_backoff = 2
# Enable or Disable the timer to send the periodic progress notifications to
# Ceilometer when backing up the volume to the Swift backend storage. The
# default value is True to enable the timer. (boolean value)
#backup_swift_enable_progress_timer = true
# Location of the CA certificate file to use for swift client requests. (string
# value)
#backup_swift_ca_cert_file = <None>
# Bypass verification of server certificate when making SSL connection to
# Swift. (boolean value)
#backup_swift_auth_insecure = false
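The Swift backup options above combine as in the following minimal sketch. The endpoint, credential, and container values are placeholders; substitute your own, and note that backup_swift_object_size must remain a multiple of backup_swift_block_size:

```ini
[DEFAULT]
# Auth 3.0 (Keystone v3) credentials -- all values are placeholders.
backup_swift_auth = per_user
backup_swift_auth_version = 3
backup_swift_auth_url = http://controller:5000/v3
backup_swift_user = backup_user
backup_swift_key = BACKUP_PASS
backup_swift_user_domain = default
backup_swift_project = service
backup_swift_project_domain = default
backup_swift_container = volumebackups
# 52428800 is a multiple of 32768, as required.
backup_swift_object_size = 52428800
backup_swift_block_size = 32768
```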
# These values will be used for CloudByte storage's addQos API call. (dict
# value)
#cb_add_qosgroup = graceallowed:false,iops:10,iopscontrol:true,latency:15,memlimit:0,networkspeed:0,throughput:0,tpcontrol:false
# These values will be used for CloudByte storage's createVolume API call.
# (dict value)
#cb_create_volume = blocklength:512B,compression:off,deduplication:off,protocoltype:ISCSI,recordsize:16k,sync:always
# Driver will use this API key to authenticate against the CloudByte storage's
# management interface. (string value)
#cb_apikey = <None>
# CloudByte storage specific account name. This maps to a project name in
# OpenStack. (string value)
#cb_account_name = <None>
# This corresponds to the name of Tenant Storage Machine (TSM) in CloudByte
# storage. A volume will be created in this TSM. (string value)
#cb_tsm_name = <None>
# A retry value in seconds. Will be used by the driver to check if volume
# creation was successful in CloudByte storage. (integer value)
#cb_confirm_volume_create_retry_interval = 5
# Will confirm a successful volume creation in CloudByte storage by making
# this many attempts. (integer value)
#cb_confirm_volume_create_retries = 3
# A retry value in seconds. Will be used by the driver to check if volume
# deletion was successful in CloudByte storage. (integer value)
#cb_confirm_volume_delete_retry_interval = 5
# Will confirm a successful volume deletion in CloudByte storage by making
# this many attempts. (integer value)
#cb_confirm_volume_delete_retries = 3
# This corresponds to the discovery authentication group in CloudByte storage.
# Chap users are added to this group. Driver uses the first user found for this
# group. Default value is None. (string value)
#cb_auth_group = <None>
# These values will be used for CloudByte storage's updateQosGroup API call.
# (list value)
#cb_update_qos_group = iops,latency,graceallowed
# These values will be used for CloudByte storage's updateFileSystem API call.
# (list value)
#cb_update_file_system = compression,sync,noofcopies,readonly
# Interval, in seconds, between nodes reporting state to datastore (integer
# value)
#report_interval = 10
# Interval, in seconds, between running periodic tasks (integer value)
#periodic_interval = 60
# Range, in seconds, to randomly delay when starting the periodic task
# scheduler to reduce stampeding. (Disable by setting to 0) (integer value)
#periodic_fuzzy_delay = 60
# IP address on which OpenStack Volume API listens (string value)
#osapi_volume_listen = 0.0.0.0
# Port on which OpenStack Volume API listens (port value)
# Minimum value: 0
# Maximum value: 65535
#osapi_volume_listen_port = 8776
# Number of workers for OpenStack Volume API service. The default is equal to
# the number of CPUs available. (integer value)
#osapi_volume_workers = <None>
# Wraps the socket in an SSL context if set to True. A certificate file and
# key file must be specified. (boolean value)
#osapi_volume_use_ssl = false
# The full class name of the compute API class to use (string value)
#compute_api_class = cinder.compute.nova.API
# Number of nodes that should replicate the data. (integer value)
#drbdmanage_redundancy = 1
# Resource deployment completion wait policy. (string value)
#drbdmanage_resource_policy = {"ratio": "0.51", "timeout": "60"}
# Disk options to set on new resources. See http://www.drbd.org/en/doc/users-
# guide-90/re-drbdconf for all the details. (string value)
#drbdmanage_disk_options = {"c-min-rate": "4M"}
# Net options to set on new resources. See http://www.drbd.org/en/doc/users-
# guide-90/re-drbdconf for all the details. (string value)
#drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"}
# Resource options to set on new resources. See http://www.drbd.org/en/doc
# /users-guide-90/re-drbdconf for all the details. (string value)
#drbdmanage_resource_options = {"auto-promote-timeout": "300"}
# Snapshot completion wait policy. (string value)
#drbdmanage_snapshot_policy = {"count": "1", "timeout": "60"}
# Volume resize completion wait policy. (string value)
#drbdmanage_resize_policy = {"timeout": "60"}
# Resource deployment completion wait plugin. (string value)
#drbdmanage_resource_plugin = drbdmanage.plugins.plugins.wait_for.WaitForResource
# Snapshot completion wait plugin. (string value)
#drbdmanage_snapshot_plugin = drbdmanage.plugins.plugins.wait_for.WaitForSnapshot
# Volume resize completion wait plugin. (string value)
#drbdmanage_resize_plugin = drbdmanage.plugins.plugins.wait_for.WaitForVolumeSize
# If set, the c-vol node will receive a usable /dev/drbdX device, even if the
# actual data is stored on other nodes only. This is useful for debugging,
# maintenance, and to be able to do the iSCSI export from the c-vol node.
# (boolean value)
#drbdmanage_devs_on_controller = true
# Pool or Vdisk name to use for volume creation. (string value)
#dothill_backend_name = A
# linear (for Vdisk) or virtual (for Pool). (string value)
# Allowed values: linear, virtual
#dothill_backend_type = virtual
# DotHill API interface protocol. (string value)
# Allowed values: http, https
#dothill_api_protocol = https
# Whether to verify DotHill array SSL certificate. (boolean value)
#dothill_verify_certificate = false
# DotHill array SSL certificate path. (string value)
#dothill_verify_certificate_path = <None>
# List of comma-separated target iSCSI IP addresses. (list value)
#dothill_iscsi_ips =
# File with the list of available gluster shares (string value)
#glusterfs_shares_config = /etc/cinder/glusterfs_shares
# Base dir containing mount points for gluster shares. (string value)
#glusterfs_mount_point_base = $state_path/mnt
# REST API authorization token. (string value)
#pure_api_token = <None>
# Automatically determine an oversubscription ratio based on the current total
# data reduction values. If used this calculated value will override the
# max_over_subscription_ratio config option. (boolean value)
#pure_automatic_max_oversubscription_ratio = true
# Snapshot replication interval in seconds. (integer value)
#pure_replica_interval_default = 900
# Retain all snapshots on target for this time (in seconds). (integer value)
#pure_replica_retention_short_term_default = 14400
# Retain how many snapshots for each day. (integer value)
#pure_replica_retention_long_term_per_day_default = 3
# Retain snapshots per day on target for this time (in days). (integer value)
#pure_replica_retention_long_term_default = 7
# When enabled, all Pure volumes, snapshots, and protection groups will be
# eradicated at the time of deletion in Cinder. Data will NOT be recoverable
# after a delete with this set to True! When disabled, volumes and snapshots
# will go into pending eradication state and can be recovered. (boolean value)
#pure_eradicate_on_delete = false
# ID of the project which will be used as the Cinder internal tenant. (string
# value)
#cinder_internal_tenant_project_id = <None>
# ID of the user to be used in volume operations as the Cinder internal tenant.
# (string value)
#cinder_internal_tenant_user_id = <None>
# The scheduler host manager class to use (string value)
#scheduler_host_manager = cinder.scheduler.host_manager.HostManager
# Maximum number of attempts to schedule a volume (integer value)
#scheduler_max_attempts = 3
# Proxy driver that connects to the IBM Storage Array (string value)
#proxy = storage.proxy.IBMStorageProxy
# Connection type to the IBM Storage Array (string value)
# Allowed values: fibre_channel, iscsi
#connection_type = iscsi
# CHAP authentication mode, effective only for iscsi (disabled|enabled) (string
# value)
# Allowed values: disabled, enabled
#chap = disabled
# List of Management IP addresses (separated by commas) (string value)
#management_ips =
# IP address for connecting to VMware vCenter server. (string value)
#vmware_host_ip = <None>
# Port number for connecting to VMware vCenter server. (port value)
# Minimum value: 0
# Maximum value: 65535
#vmware_host_port = 443
# Username for authenticating with VMware vCenter server. (string value)
#vmware_host_username = <None>
# Password for authenticating with VMware vCenter server. (string value)
#vmware_host_password = <None>
# Optional VIM service WSDL location, e.g. http://<server>/vimService.wsdl.
# An optional override of the default location for bug workarounds. (string
# value)
#vmware_wsdl_location = <None>
# Number of times VMware vCenter server API must be retried upon connection
# related issues. (integer value)
#vmware_api_retry_count = 10
# The interval (in seconds) for polling remote tasks invoked on VMware vCenter
# server. (floating point value)
#vmware_task_poll_interval = 2.0
# Name of the vCenter inventory folder that will contain Cinder volumes. This
# folder will be created under "OpenStack/<project_folder>", where
# project_folder is of format "Project (<volume_project_id>)". (string value)
#vmware_volume_folder = Volumes
# Timeout in seconds for VMDK volume transfer between Cinder and Glance.
# (integer value)
#vmware_image_transfer_timeout_secs = 7200
# Max number of objects to be retrieved per batch. Query results will be
# obtained in batches from the server and not in one shot. The server may
# still limit the count to less than the configured value. (integer value)
#vmware_max_objects_retrieval = 100
# Optional string specifying the VMware vCenter server version. The driver
# attempts to retrieve the version from VMware vCenter server. Set this
# configuration only if you want to override the vCenter server version.
# (string value)
#vmware_host_version = <None>
# Directory where virtual disks are stored during volume backup and restore.
# (string value)
#vmware_tmp_dir = /tmp
# CA bundle file to use in verifying the vCenter server certificate. (string
# value)
#vmware_ca_file = <None>
# If true, the vCenter server certificate is not verified. If false, then the
# default CA truststore is used for verification. This option is ignored if
# "vmware_ca_file" is set. (boolean value)
#vmware_insecure = false
# Name of a vCenter compute cluster where volumes should be created. (multi
# valued)
#vmware_cluster_name =
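As a sketch, a VMDK backend section might combine the vCenter options above as follows. The section name, hostname, and credentials are placeholders:

```ini
[vmdk]
# Placeholder hostname and credentials for the vCenter server.
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com
vmware_host_username = administrator@vsphere.local
vmware_host_password = VMWARE_PASS
# Verify the vCenter certificate against a CA bundle instead of
# skipping verification.
vmware_insecure = false
vmware_ca_file = /etc/ssl/certs/vcenter-ca.pem
```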
# Pool or Vdisk name to use for volume creation. (string value)
#lenovo_backend_name = A
# linear (for VDisk) or virtual (for Pool). (string value)
# Allowed values: linear, virtual
#lenovo_backend_type = virtual
# Lenovo api interface protocol. (string value)
# Allowed values: http, https
#lenovo_api_protocol = https
# Whether to verify Lenovo array SSL certificate. (boolean value)
#lenovo_verify_certificate = false
# Lenovo array SSL certificate path. (string value)
#lenovo_verify_certificate_path = <None>
# List of comma-separated target iSCSI IP addresses. (list value)
#lenovo_iscsi_ips =
# The maximum size in bytes of the files used to hold backups. If the volume
# being backed up exceeds this size, then it will be backed up into multiple
# files. backup_file_size must be a multiple of backup_sha_block_size_bytes.
# (integer value)
#backup_file_size = 1999994880
# The size in bytes that changes are tracked for incremental backups.
# backup_file_size has to be a multiple of backup_sha_block_size_bytes.
# (integer value)
#backup_sha_block_size_bytes = 32768
# Enable or Disable the timer to send the periodic progress notifications to
# Ceilometer when backing up the volume to the backend storage. The default
# value is True to enable the timer. (boolean value)
#backup_enable_progress_timer = true
# Path specifying where to store backups. (string value)
#backup_posix_path = $state_path/backup
# Custom directory to use for backups. (string value)
#backup_container = <None>
# REST server port. (string value)
#sio_rest_server_port = 443
# Verify server certificate. (boolean value)
#sio_verify_server_certificate = false
# Server certificate path. (string value)
#sio_server_certificate_path = <None>
# Round up volume capacity. (boolean value)
#sio_round_volume_capacity = true
# Unmap volume before deletion. (boolean value)
#sio_unmap_volume_before_deletion = false
# Protection Domain ID. (string value)
#sio_protection_domain_id = <None>
# Protection Domain name. (string value)
#sio_protection_domain_name = <None>
# Storage Pools. (string value)
#sio_storage_pools = <None>
# Storage Pool name. (string value)
#sio_storage_pool_name = <None>
# Storage Pool ID. (string value)
#sio_storage_pool_id = <None>
# max_over_subscription_ratio setting for the ScaleIO driver. This replaces
# the general max_over_subscription_ratio, which has no effect in this driver.
# The maximum value allowed for ScaleIO is 10.0. (floating point value)
#sio_max_over_subscription_ratio = 10.0
# Driver to use for database access (string value)
#db_driver = cinder.db
# Group name to use for creating volumes. Defaults to "group-0". (string value)
#eqlx_group_name = group-0
# Timeout for the Group Manager cli command execution. Default is 30. Note that
# this option is deprecated in favour of "ssh_conn_timeout" as specified in
# cinder/volume/drivers/san/san.py and will be removed in M release. (integer
# value)
#eqlx_cli_timeout = 30
# Maximum retry count for reconnection. Default is 5. (integer value)
# Minimum value: 0
#eqlx_cli_max_retries = 5
# Use CHAP authentication for targets. Note that this option is deprecated in
# favour of "use_chap_auth" as specified in cinder/volume/driver.py and will be
# removed in next release. (boolean value)
#eqlx_use_chap = false
# Existing CHAP account name. Note that this option is deprecated in favour of
# "chap_username" as specified in cinder/volume/driver.py and will be removed
# in next release. (string value)
#eqlx_chap_login = admin
# Password for specified CHAP account name. Note that this option is deprecated
# in favour of "chap_password" as specified in cinder/volume/driver.py and will
# be removed in the next release (string value)
#eqlx_chap_password = password
# Pool in which volumes will be created. Defaults to "default". (string value)
#eqlx_pool = default
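Since the eqlx_* CHAP options above are deprecated, a sketch of migrating to the generic replacements (the credentials shown are placeholders):

```ini
[DEFAULT]
# Deprecated form, to be removed:
#eqlx_use_chap = true
#eqlx_chap_login = admin
#eqlx_chap_password = CHAP_PASS

# Preferred replacements from cinder/volume/driver.py:
use_chap_auth = true
chap_username = admin
chap_password = CHAP_PASS
```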
# The number of characters in the salt. (integer value)
#volume_transfer_salt_length = 8
# The number of characters in the autogenerated auth key. (integer value)
#volume_transfer_key_length = 16
# Services to be added to the available pool on create (boolean value)
#enable_new_services = true
# Template string to be used to generate volume names (string value)
#volume_name_template = volume-%s
# Template string to be used to generate snapshot names (string value)
#snapshot_name_template = snapshot-%s
# Template string to be used to generate backup names (string value)
#backup_name_template = backup-%s
# Multiplier used for weighing volume number. Negative numbers mean to spread
# vs stack. (floating point value)
#volume_number_multiplier = -1.0
# RPC port to connect to Coho Data MicroArray (integer value)
#coho_rpc_port = 2049
# Path or URL to Scality SOFS configuration file (string value)
#scality_sofs_config = <None>
# Base dir where Scality SOFS shall be mounted (string value)
#scality_sofs_mount_point = $state_path/scality
# Path from Scality SOFS root to volume dir (string value)
#scality_sofs_volume_dir = cinder/volumes
# Default storage pool for volumes. (integer value)
#ise_storage_pool = 1
# Raid level for ISE volumes. (integer value)
#ise_raid = 1
# Number of retries (per port) when establishing connection to ISE management
# port. (integer value)
#ise_connection_retries = 5
# Interval (secs) between retries. (integer value)
#ise_retry_interval = 1
# Number of retries to get completion status after issuing a command to ISE.
# (integer value)
#ise_completion_retries = 30
# Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
# (boolean value)
#storwize_svc_multipath_enabled = false
# FSS pool id in which FalconStor volumes are stored. (integer value)
#fss_pool =
# Enable HTTP debugging to FSS (boolean value)
#fss_debug = false
# FSS additional retry list, separate by ; (string value)
#additional_retry_list =
# Storage pool name. (string value)
#zfssa_pool = <None>
# Project name. (string value)
#zfssa_project = <None>
# Block size. (string value)
# Allowed values: 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k
#zfssa_lun_volblocksize = 8k
# Flag to enable sparse (thin-provisioned): True, False. (boolean value)
#zfssa_lun_sparse = false
# Data compression. (string value)
# Allowed values: off, lzjb, gzip-2, gzip, gzip-9
#zfssa_lun_compression = off
# Synchronous write bias. (string value)
# Allowed values: latency, throughput
#zfssa_lun_logbias = latency
# iSCSI initiator group. (string value)
#zfssa_initiator_group =
# iSCSI initiator IQNs. (comma separated) (string value)
#zfssa_initiator =
# iSCSI initiator CHAP user (name). (string value)
#zfssa_initiator_user =
# Secret of the iSCSI initiator CHAP user. (string value)
#zfssa_initiator_password =
# iSCSI initiators configuration. (string value)
#zfssa_initiator_config =
# iSCSI target group name. (string value)
#zfssa_target_group = tgt-grp
# iSCSI target CHAP user (name). (string value)
#zfssa_target_user =
# Secret of the iSCSI target CHAP user. (string value)
#zfssa_target_password =
# iSCSI target portal (Data-IP:Port, w.x.y.z:3260). (string value)
#zfssa_target_portal = <None>
# Network interfaces of iSCSI targets. (comma separated) (string value)
#zfssa_target_interfaces = <None>
# REST connection timeout. (seconds) (integer value)
#zfssa_rest_timeout = <None>
# IP address used for replication data. (may be the same as the data IP)
# (string value)
#zfssa_replication_ip =
# Flag to enable local caching: True, False. (boolean value)
#zfssa_enable_local_cache = true
# Name of ZFSSA project where cache volumes are stored. (string value)
#zfssa_cache_project = os-cinder-cache
# Driver policy for volume manage. (string value)
# Allowed values: loose, strict
#zfssa_manage_policy = loose
# Number of times to attempt to run flakey shell commands (integer value)
#num_shell_tries = 3
# The percentage of backend capacity is reserved (integer value)
# Minimum value: 0
# Maximum value: 100
#reserved_percentage = 0
# Prefix for iSCSI volumes (string value)
#iscsi_target_prefix = iqn.2010-10.org.openstack:
# The IP address that the iSCSI daemon is listening on (string value)
#iscsi_ip_address = $my_ip
# The list of secondary IP addresses of the iSCSI daemon (list value)
#iscsi_secondary_ip_addresses =
# The port that the iSCSI daemon is listening on (port value)
# Minimum value: 0
# Maximum value: 65535
#iscsi_port = 3260
# The maximum number of times to rescan targets to find volume (integer value)
#num_volume_device_scan_tries = 3
# The backend name for a given driver implementation (string value)
#volume_backend_name = <None>
# Whether to attach/detach volumes in cinder using multipath for
# volume-to-image and image-to-volume transfers. (boolean value)
#use_multipath_for_image_xfer = false
# If this is set to True, attachment of volumes for image transfer will be
# aborted when multipathd is not running. Otherwise, it will fallback to single
# path. (boolean value)
#enforce_multipath_for_image_xfer = false
# Method used to wipe old volumes (string value)
# Allowed values: none, zero, shred
#volume_clear = zero
# Size in MiB to wipe at start of old volumes. 1024 MiB at max. 0 => all
# (integer value)
# Maximum value: 1024
#volume_clear_size = 0
# The flag to pass to ionice to alter the i/o priority of the process used to
# zero a volume after deletion, for example "-c3" for idle only priority.
# (string value)
#volume_clear_ionice = <None>
# iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO
# iSCSI support, scstadmin for SCST target support, ietadm for iSCSI Enterprise
# Target, iscsictl for Chelsio iSCSI Target or fake for testing. (string value)
# Allowed values: tgtadm, lioadm, scstadmin, iscsictl, ietadm, fake
#iscsi_helper = tgtadm
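For example, to use LIO rather than the default tgtadm helper, only the helper and the listening address/port need to change from the defaults shown above:

```ini
[DEFAULT]
# Use the LIO target via lioadm instead of tgtadm.
iscsi_helper = lioadm
iscsi_ip_address = $my_ip
iscsi_port = 3260
```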
# Volume configuration file storage directory (string value)
#volumes_dir = $state_path/volumes
# IET configuration file (string value)
#iet_conf = /etc/iet/ietd.conf
# Chiscsi (CXT) global defaults configuration file (string value)
#chiscsi_conf = /etc/chelsio-iscsi/chiscsi.conf
# Sets the behavior of the iSCSI target to perform either blockio or fileio.
# Optionally, auto can be set, and Cinder will autodetect the type of backing
# device. (string value)
# Allowed values: blockio, fileio, auto
#iscsi_iotype = fileio
# The default block size used when copying/clearing volumes (string value)
#volume_dd_blocksize = 1M
# The blkio cgroup name to be used to limit bandwidth of volume copy (string
# value)
#volume_copy_blkio_cgroup_name = cinder-volume-copy
# The upper limit of bandwidth of volume copy. 0 => unlimited (integer value)
#volume_copy_bps_limit = 0
# Sets the behavior of the iSCSI target to perform either write-back (on) or
# write-through (off). This parameter is valid if iscsi_helper is set to
# tgtadm. (string value)
# Allowed values: on, off
#iscsi_write_cache = on
# Sets the target-specific flags for the iSCSI target. Only used for tgtadm to
# specify backing device flags using bsoflags option. The specified string is
# passed as is to the underlying tool. (string value)
#iscsi_target_flags =
# Determines the iSCSI protocol for new iSCSI volumes, created with tgtadm or
# lioadm target helpers. In order to enable RDMA, this parameter should be set
# with the value "iser". The supported iSCSI protocol values are "iscsi" and
# "iser". (string value)
# Allowed values: iscsi, iser
#iscsi_protocol = iscsi
# The path to the client certificate key for verification, if the driver
# supports it. (string value)
#driver_client_cert_key = <None>
# The path to the client certificate for verification, if the driver supports
# it. (string value)
#driver_client_cert = <None>
# Tell driver to use SSL for connection to backend storage if the driver
# supports it. (boolean value)
#driver_use_ssl = false
# Float representation of the over subscription ratio when thin provisioning
# is involved. The default ratio is 20.0, meaning provisioned capacity can be
# 20 times the total physical capacity. If the ratio is 10.5, provisioned
# capacity can be 10.5 times the total physical capacity. A ratio of 1.0 means
# provisioned capacity cannot exceed the total physical capacity. The ratio
# has to be a minimum of 1.0. (floating point value)
#max_over_subscription_ratio = 20.0
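As a worked example of the ratio above: a backend with 10 TiB of physical capacity and the default ratio of 20.0 may accept up to 10 TiB x 20.0 = 200 TiB of thin-provisioned capacity, while a ratio of 1.0 would cap provisioning at the physical 10 TiB:

```ini
[DEFAULT]
# 10 TiB physical * 20.0 => up to 200 TiB provisioned (thin).
max_over_subscription_ratio = 20.0
```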
# Certain iSCSI targets have predefined target names; the SCST target driver
# uses this name. (string value)
#scst_target_iqn_name = <None>
# SCST target implementation can choose from multiple SCST target drivers.
# (string value)
#scst_target_driver = iscsi
# Option to enable/disable CHAP authentication for targets. (boolean value)
# Deprecated group/name - [DEFAULT]/eqlx_use_chap
#use_chap_auth = false
# CHAP user name. (string value)
# Deprecated group/name - [DEFAULT]/eqlx_chap_login
#chap_username =
# Password for specified CHAP account name. (string value)
# Deprecated group/name - [DEFAULT]/eqlx_chap_password
#chap_password =
# Namespace for driver private data values to be saved in. (string value)
#driver_data_namespace = <None>
# String representation for an equation that will be used to filter hosts. Only
# used when the driver filter is set to be used by the Cinder scheduler.
# (string value)
#filter_function = <None>
# String representation for an equation that will be used to determine the
# goodness of a host. Only used when the goodness weigher is set to be used by
# the Cinder scheduler. (string value)
#goodness_function = <None>
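A sketch of filter and goodness expressions for these two options. The variables follow the volume.* / capabilities.* naming used by the Cinder driver filter and weigher; treat the specific thresholds as placeholders:

```ini
[DEFAULT]
# Only consider this backend for volumes smaller than 10 GB.
filter_function = "volume.size < 10"
# Score hosts 0-100: small volumes score 100 here, others 0.
goodness_function = "(volume.size < 5) * 100"
```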
# If set to True the http client will validate the SSL certificate of the
# backend endpoint. (boolean value)
#driver_ssl_cert_verify = false
# Can be used to specify a non default path to a CA_BUNDLE file or directory
# with certificates of trusted CAs, which will be used to validate the backend
# (string value)
#driver_ssl_cert_path = <None>
# List of options that control which trace info is written to the DEBUG log
# level to assist developers. Valid values are method and api. (list value)
#trace_flags = <None>
# Multi opt of dictionaries to represent a replication target device. This
# option may be specified multiple times in a single config section to specify
# multiple replication target devices. Each entry takes the standard dict
# config form: replication_device =
# target_device_id:<required>,key1:value1,key2:value2... (dict value)
#replication_device = <None>
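Because replication_device is a multi opt, the line may be repeated once per target. A sketch with two hypothetical targets; only target_device_id is required, and any further key:value pairs (such as the san_ip shown) are driver-specific placeholders:

```ini
[DEFAULT]
# One line per replication target; IDs and addresses are placeholders.
replication_device = target_device_id:array-1,san_ip:10.0.0.1
replication_device = target_device_id:array-2,san_ip:10.0.0.2
```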
# If set to True, upload-to-image in raw format will create a cloned volume and
# register its location to the image service, instead of uploading the volume
# content. The cinder backend and locations support must be enabled in the
# image service, and glance_api_version must be set to 2. (boolean value)
#image_upload_use_cinder_backend = false
# If set to True, the image volume created by upload-to-image will be placed in
# the internal tenant. Otherwise, the image volume is created in the current
# context's tenant. (boolean value)
#image_upload_use_internal_tenant = false
# Enable the image volume cache for this backend. (boolean value)
#image_volume_cache_enabled = false
# Max size of the image volume cache for this backend in GB. 0 => unlimited.
# (integer value)
#image_volume_cache_max_size_gb = 0
# Max number of entries allowed in the image volume cache. 0 => unlimited.
# (integer value)
#image_volume_cache_max_count = 0
# Report to clients of Cinder that the backend supports discard (aka.
# trim/unmap). This will not actually change the behavior of the backend or the
# client directly, it will only notify that it can be used. (boolean value)
#report_discard_supported = false
# Protocol for transferring data between host and storage back-end. (string
# value)
# Allowed values: iscsi, fc
#storage_protocol = iscsi
# If this is set to True, the backup_use_temp_snapshot path will be used
# during the backup. Otherwise, the backup_use_temp_volume path will be used.
# (boolean value)
#backup_use_temp_snapshot = false
# Set this to True when you want to allow an unsupported driver to start.
# Drivers that haven't maintained a working CI system and testing are marked
# as unsupported until CI is working again. This also marks a driver as
# deprecated, and it may be removed in the next release. (boolean value)
#enable_unsupported_driver = false
# The maximum number of times to rescan the iSER target to find volume
# (integer value)
#num_iser_scan_tries = 3
# Prefix for iSER volumes (string value)
#iser_target_prefix = iqn.2010-10.org.openstack:
# The IP address that the iSER daemon is listening on (string value)
#iser_ip_address = $my_ip
# The port that the iSER daemon is listening on (port value)
# Minimum value: 0
# Maximum value: 65535
#iser_port = 3260
# The name of the iSER target user-land tool to use (string value)
#iser_helper = tgtadm
# Public url to use for versions endpoint. The default is None, which will use
# the request's host_url attribute to populate the URL base. If Cinder is
# operating behind a proxy, you will want to change this to represent the
# proxy's URL. (string value)
#public_endpoint = <None>
# Nimble Controller pool name (string value)
#nimble_pool_name = default
# Nimble Subnet Label (string value)
#nimble_subnet_label = *
# Path to store VHD backed volumes (string value)
#windows_iscsi_lun_path = C:\iSCSIVirtualDisks
# VNX authentication scope type. By default, the value is global. (string
# value)
#storage_vnx_authentication_type = global
# Directory path that contains the VNX security file. Make sure the security
# file is generated first. (string value)
#storage_vnx_security_file_dir = <None>
# Naviseccli Path. (string value)
#naviseccli_path = <None>
# Comma-separated list of storage pool names to be used. (list value)
#storage_vnx_pool_names = <None>
# Default timeout for CLI operations in minutes. For example, LUN migration is
# a typical long running operation, which depends on the LUN size and the load
# of the array. An upper bound in the specific deployment can be set to avoid
# unnecessary long wait. By default, it is 365 days long. (integer value)
#default_timeout = 31536000
# Default max number of LUNs in a storage group. By default, the value is 255.
# (integer value)
#max_luns_per_storage_group = 255
# Destroy the storage group when the last LUN is removed from it. By default,
# the value is False. (boolean value)
#destroy_empty_storage_group = false
# Mapping between hostname and its iSCSI initiator IP addresses. (string value)
#iscsi_initiators = <None>
# Comma-separated iSCSI or FC ports to be used in Nova or Cinder. (list value)
#io_port_list = <None>
# Automatically register initiators. By default, the value is False. (boolean
# value)
#initiator_auto_registration = false
# Automatically deregister initiators after the related storage group is
# destroyed. By default, the value is False. (boolean value)
#initiator_auto_deregistration = false
# Report free_capacity_gb as 0 when the limit to maximum number of pool LUNs is
# reached. By default, the value is False. (boolean value)
#check_max_pool_luns_threshold = false
# Delete a LUN even if it is in Storage Groups. By default, the value is False.
# (boolean value)
#force_delete_lun_in_storagegroup = false
# Force LUN creation even if the full threshold of pool is reached. By default,
# the value is False. (boolean value)
#ignore_pool_full_threshold = false
# Pool or Vdisk name to use for volume creation. (string value)
#hpmsa_backend_name = A
# linear (for Vdisk) or virtual (for Pool). (string value)
# Allowed values: linear, virtual
#hpmsa_backend_type = virtual
# HPMSA API interface protocol. (string value)
# Allowed values: http, https
#hpmsa_api_protocol = https
# Whether to verify HPMSA array SSL certificate. (boolean value)
#hpmsa_verify_certificate = false
# HPMSA array SSL certificate path. (string value)
#hpmsa_verify_certificate_path = <None>
# List of comma-separated target iSCSI IP addresses. (list value)
#hpmsa_iscsi_ips =
# A list of url schemes that can be downloaded directly via the direct_url.
# Currently supported schemes: [file]. (list value)
#allowed_direct_url_schemes =
# Info to match when looking for glance in the service catalog. Format is
# colon-separated values of the form <service_type>:<service_name>:<endpoint_type>.
# Only used if glance_api_servers are not provided. (string value)
#glance_catalog_info = image:glance:publicURL
# Default core properties of image (list value)
#glance_core_properties = checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size
# HPE LeftHand WSAPI Server Url like https://<LeftHand ip>:8081/lhos (string
# value)
# Deprecated group/name - [DEFAULT]/hplefthand_api_url
#hpelefthand_api_url = <None>
# HPE LeftHand Super user username (string value)
# Deprecated group/name - [DEFAULT]/hplefthand_username
#hpelefthand_username = <None>
# HPE LeftHand Super user password (string value)
# Deprecated group/name - [DEFAULT]/hplefthand_password
#hpelefthand_password = <None>
# HPE LeftHand cluster name (string value)
# Deprecated group/name - [DEFAULT]/hplefthand_clustername
#hpelefthand_clustername = <None>
# Configure CHAP authentication for iSCSI connections (Default: Disabled)
# (boolean value)
# Deprecated group/name - [DEFAULT]/hplefthand_iscsi_chap_enabled
#hpelefthand_iscsi_chap_enabled = false
# Enable HTTP debugging to LeftHand (boolean value)
# Deprecated group/name - [DEFAULT]/hplefthand_debug
#hpelefthand_debug = false
# Port number of SSH service. (port value)
# Minimum value: 0
# Maximum value: 65535
#hpelefthand_ssh_port = 16022
# Name for the VG that will contain exported volumes (string value)
#volume_group = cinder-volumes
# If >0, create LVs with multiple mirrors. Note that this requires lvm_mirrors
# + 2 PVs with available space (integer value)
#lvm_mirrors = 0
# Type of LVM volumes to deploy; (default, thin, or auto). Auto defaults to
# thin if thin is supported. (string value)
# Allowed values: default, thin, auto
#lvm_type = default
# LVM conf file to use for the LVM driver in Cinder; this setting is ignored if
# the specified file does not exist (You can also specify 'None' to not use a
# conf file even if one exists). (string value)
#lvm_conf_file = /etc/cinder/lvm.conf
# max_over_subscription_ratio setting for the LVM driver. If set, this takes
# precedence over the general max_over_subscription_ratio option. If None, the
# general option is used. (floating point value)
#lvm_max_over_subscription_ratio = 1.0
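# Example (illustrative; "cinder-volumes-ssd" is a hypothetical VG name): a
# thin-provisioned LVM backend with a modest over-subscription allowance:
#volume_group = cinder-volumes-ssd
#lvm_type = thin
#lvm_max_over_subscription_ratio = 2.0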
# Use this file for cinder EMC plugin config data. (string value)
#cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
# IP address or Hostname of NAS system. (string value)
# Deprecated group/name - [DEFAULT]/nas_ip
#nas_host =
# User name to connect to NAS system. (string value)
#nas_login = admin
# Password to connect to NAS system. (string value)
#nas_password =
# SSH port to use to connect to NAS system. (port value)
# Minimum value: 0
# Maximum value: 65535
#nas_ssh_port = 22
# Filename of private key to use for SSH authentication. (string value)
#nas_private_key =
# Allow network-attached storage systems to operate in a secure environment
# where root level access is not permitted. If set to False, access is as the
# root user and insecure. If set to True, access is not as root. If set to
# auto, a check is done to determine if this is a new installation: True is
# used if so, otherwise False. Default is auto. (string value)
#nas_secure_file_operations = auto
# Set more secure file permissions on network-attached storage volume files to
# restrict broad other/world access. If set to False, volumes are created with
# open permissions. If set to True, volumes are created with permissions for
# the cinder user and group (660). If set to auto, a check is done to determine
# if this is a new installation: True is used if so, otherwise False. Default
# is auto. (string value)
#nas_secure_file_permissions = auto
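# Example (illustrative, not the defaults): to force secure operation on a NAS
# backend instead of relying on the auto-detection described above:
#nas_secure_file_operations = true
#nas_secure_file_permissions = true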
# Path to the share to use for storing Cinder volumes. For example:
# "/srv/export1" for an NFS server export available at 10.0.5.10:/srv/export1 .
# (string value)
#nas_share_path =
# Options used to mount the storage backend file system where Cinder volumes
# are stored. (string value)
#nas_mount_options = <None>
# Provisioning type that will be used when creating volumes. (string value)
# Allowed values: thin, thick
# Deprecated group/name - [DEFAULT]/glusterfs_sparsed_volumes
# Deprecated group/name - [DEFAULT]/glusterfs_qcow2_volumes
#nas_volume_prov_type = thin
# XMS cluster id in multi-cluster environment (string value)
#xtremio_cluster_name =
# Number of retries in case array is busy (integer value)
#xtremio_array_busy_retry_count = 5
# Interval between retries in case array is busy (integer value)
#xtremio_array_busy_retry_interval = 5
# Number of volumes created from each cached glance image (integer value)
#xtremio_volumes_per_glance_cache = 100
# The GCS bucket to use. (string value)
#backup_gcs_bucket = <None>
# The size in bytes of GCS backup objects. (integer value)
#backup_gcs_object_size = 52428800
# The size in bytes that changes are tracked for incremental backups.
# backup_gcs_object_size has to be a multiple of backup_gcs_block_size.
# (integer value)
#backup_gcs_block_size = 32768
# GCS objects will be downloaded in chunks of this many bytes. (integer value)
#backup_gcs_reader_chunk_size = 2097152
# GCS objects will be uploaded in chunks of this many bytes. Pass in a value of
# -1 if the file is to be uploaded as a single chunk. (integer value)
# Number of times to retry. (integer value)
#backup_gcs_num_retries = 3
# List of GCS error codes. (list value)
#backup_gcs_retry_error_codes = 429
# Location of GCS bucket. (string value)
#backup_gcs_bucket_location = US
# Storage class of GCS bucket. (string value)
#backup_gcs_storage_class = NEARLINE
# Absolute path of GCS service account credential file. (string value)
#backup_gcs_credential_file = <None>
# Owner project id for GCS bucket. (string value)
#backup_gcs_project_id = <None>
# HTTP user-agent string for the GCS API. (string value)
#backup_gcs_user_agent = gcscinder
# Enable or Disable the timer to send the periodic progress notifications to
# Ceilometer when backing up the volume to the GCS backend storage. The default
# value is True to enable the timer. (boolean value)
#backup_gcs_enable_progress_timer = true
# URL for http proxy access. (uri value)
#backup_gcs_proxy_url = <None>
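# Example (bucket and project names are placeholders): backing up volumes to
# GCS with a service-account credential file:
#backup_driver = cinder.backup.drivers.google
#backup_gcs_bucket = my-cinder-backups
#backup_gcs_project_id = my-gcp-project
#backup_gcs_credential_file = /etc/cinder/gcs-credentials.json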
# Treat X-Forwarded-For as the canonical remote address. Only enable this if
# you have a sanitizing proxy. (boolean value)
#use_forwarded_for = false
# Serial number of storage system (string value)
#hitachi_serial_number = <None>
# Name of an array unit (string value)
#hitachi_unit_name = <None>
# Pool ID of storage system (integer value)
#hitachi_pool_id = <None>
# Thin pool ID of storage system (integer value)
#hitachi_thin_pool_id = <None>
# Range of logical device of storage system (string value)
#hitachi_ldev_range = <None>
# Default copy method of storage system (string value)
#hitachi_default_copy_method = FULL
# Copy speed of storage system (integer value)
#hitachi_copy_speed = 3
# Interval to check copy (integer value)
#hitachi_copy_check_interval = 3
# Interval to check copy asynchronously (integer value)
#hitachi_async_copy_check_interval = 10
# Control port names for HostGroup or iSCSI Target (string value)
#hitachi_target_ports = <None>
# Range of group number (string value)
#hitachi_group_range = <None>
# Request for creating HostGroup or iSCSI Target (boolean value)
#hitachi_group_request = false
# Comma-separated list of Infortrend raid pool names. (string value)
#infortrend_pools_name =
# The Infortrend CLI absolute path. By default, it is at
# /opt/bin/Infortrend/raidcmd_ESDS10.jar (string value)
#infortrend_cli_path = /opt/bin/Infortrend/raidcmd_ESDS10.jar
# Maximum retry time for cli. Default is 5. (integer value)
#infortrend_cli_max_retries = 5
# Default timeout for CLI copy operations in minutes. Support: migrate volume,
# create cloned volume and create volume from snapshot. By Default, it is 30
# minutes. (integer value)
#infortrend_cli_timeout = 30
# Comma-separated list of Infortrend raid channel IDs on Slot A for OpenStack
# usage. By default, it is channels 0~7. (string value)
#infortrend_slots_a_channels_id = 0,1,2,3,4,5,6,7
# Comma-separated list of Infortrend raid channel IDs on Slot B for OpenStack
# usage. By default, it is channels 0~7. (string value)
#infortrend_slots_b_channels_id = 0,1,2,3,4,5,6,7
# Provisioning type that volumes will use. The supported options are full or
# thin. By default, it is full provisioning. (string value)
#infortrend_provisioning = full
# Tiering level that volumes will use. The supported levels are 0, 2, 3 and 4.
# By default, it is level 0. (string value)
#infortrend_tiering = 0
# DEPRECATED: Legacy configuration file for HNAS iSCSI Cinder plugin. This is
# not needed if you fill all configuration on cinder.conf (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi_conf.xml
# Whether CHAP authentication is enabled in the iSCSI target. (boolean value)
#hnas_chap_enabled = true
# Service 0 iSCSI IP (IP address value)
#hnas_svc0_iscsi_ip = <None>
# Service 1 iSCSI IP (IP address value)
#hnas_svc1_iscsi_ip = <None>
# Service 2 iSCSI IP (IP address value)
#hnas_svc2_iscsi_ip = <None>
# Service 3 iSCSI IP (IP address value)
#hnas_svc3_iscsi_ip = <None>
# The name of ceph cluster (string value)
#rbd_cluster_name = ceph
# The RADOS pool where rbd volumes are stored (string value)
#rbd_pool = rbd
# The RADOS client name for accessing rbd volumes - only set when using cephx
# authentication (string value)
#rbd_user = <None>
# Path to the ceph configuration file (string value)
#rbd_ceph_conf =
# Flatten volumes created from snapshots to remove dependency from volume to
# snapshot (boolean value)
#rbd_flatten_volume_from_snapshot = false
# The libvirt uuid of the secret for the rbd_user volumes (string value)
#rbd_secret_uuid = <None>
# Directory where temporary image files are stored when the volume driver does
# not write them directly to the volume. Warning: this option is now
# deprecated, please use image_conversion_dir instead. (string value)
#volume_tmp_dir = <None>
# Maximum number of nested volume clones that are taken before a flatten
# occurs. Set to 0 to disable cloning. (integer value)
#rbd_max_clone_depth = 5
# Volumes will be chunked into objects of this size (in megabytes). (integer
# value)
#rbd_store_chunk_size = 4
# Timeout value (in seconds) used when connecting to ceph cluster. If value <
# 0, no timeout is set and default librados value is used. (integer value)
#rados_connect_timeout = -1
# Number of retries if connection to ceph cluster failed. (integer value)
#rados_connection_retries = 3
# Interval value (in seconds) between connection retries to ceph cluster.
# (integer value)
#rados_connection_interval = 5
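# Example (pool name and secret UUID are placeholders): a Ceph RBD backend
# using cephx authentication:
#volume_driver = cinder.volume.drivers.rbd.RBDDriver
#rbd_pool = volumes
#rbd_user = cinder
#rbd_ceph_conf = /etc/ceph/ceph.conf
#rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337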
# The hostname (or IP address) for the storage system (string value)
#tintri_server_hostname = <None>
# User name for the storage system (string value)
#tintri_server_username = <None>
# Password for the storage system (string value)
#tintri_server_password = <None>
# API version for the storage system (string value)
#tintri_api_version = v310
# Delete unused image snapshots older than mentioned days (integer value)
#tintri_image_cache_expiry_days = 30
# Path to image nfs shares file (string value)
#tintri_image_shares_config = <None>
# Backup services use same backend. (boolean value)
#backup_use_same_host = false
# Instance numbers for HORCM (string value)
#hitachi_horcm_numbers = 200,201
# Username of storage system for HORCM (string value)
#hitachi_horcm_user = <None>
# Password of storage system for HORCM (string value)
#hitachi_horcm_password = <None>
# Add to HORCM configuration (boolean value)
#hitachi_horcm_add_conf = true
# Timeout until a resource lock is released, in seconds. The value must be
# between 0 and 7200. (integer value)
#hitachi_horcm_resource_lock_timeout = 600
# Driver to use for backups. (string value)
#backup_driver = cinder.backup.drivers.swift
# Offload pending backup delete during backup service startup. If false, the
# backup service will remain down until all pending backups are deleted.
# (boolean value)
#backup_service_inithost_offload = true
# Comma-separated list of storage system storage pools for volumes. (list
# value)
#storwize_svc_volpool_name = volpool
# Storage system space-efficiency parameter for volumes (percentage) (integer
# value)
# Minimum value: -1
# Maximum value: 100
#storwize_svc_vol_rsize = 2
# Storage system threshold for volume capacity warnings (percentage) (integer
# value)
# Minimum value: -1
# Maximum value: 100
#storwize_svc_vol_warning = 0
# Storage system autoexpand parameter for volumes (True/False) (boolean value)
#storwize_svc_vol_autoexpand = true
# Storage system grain size parameter for volumes (32/64/128/256) (integer
# value)
#storwize_svc_vol_grainsize = 256
# Storage system compression option for volumes (boolean value)
#storwize_svc_vol_compression = false
# Enable Easy Tier for volumes (boolean value)
#storwize_svc_vol_easytier = true
# The I/O group in which to allocate volumes (integer value)
#storwize_svc_vol_iogrp = 0
# Maximum number of seconds to wait for FlashCopy to be prepared. (integer
# value)
# Minimum value: 1
# Maximum value: 600
#storwize_svc_flashcopy_timeout = 120
# DEPRECATED: This option no longer has any effect. It is deprecated and will
# be removed in the next release. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#storwize_svc_multihostmap_enabled = true
# Allow tenants to specify QOS on create (boolean value)
#storwize_svc_allow_tenant_qos = false
# If operating in stretched cluster mode, specify the name of the pool in which
# mirrored copies are stored. Example: "pool2" (string value)
#storwize_svc_stretched_cluster_partner = <None>
# Specifies secondary management IP or hostname to be used if san_ip is invalid
# or becomes inaccessible. (string value)
#storwize_san_secondary_ip = <None>
# Specifies that the volume not be formatted during creation. (boolean value)
#storwize_svc_vol_nofmtdisk = false
# Specifies the Storwize FlashCopy copy rate to be used when creating a full
# volume copy. The default rate is 50, and the valid rates are 1-100.
# (integer value)
# Minimum value: 1
# Maximum value: 100
#storwize_svc_flashcopy_rate = 50
# Request for FC Zone creating HostGroup (boolean value)
#hitachi_zoning_request = false
# Number of volumes allowed per project (integer value)
#quota_volumes = 10
# Number of volume snapshots allowed per project (integer value)
#quota_snapshots = 10
# Number of consistencygroups allowed per project (integer value)
#quota_consistencygroups = 10
# Number of groups allowed per project (integer value)
#quota_groups = 10
# Total amount of storage, in gigabytes, allowed for volumes and snapshots per
# project (integer value)
#quota_gigabytes = 1000
# Number of volume backups allowed per project (integer value)
#quota_backups = 10
# Total amount of storage, in gigabytes, allowed for backups per project
# (integer value)
#quota_backup_gigabytes = 1000
# Number of seconds until a reservation expires (integer value)
#reservation_expire = 86400
# Count of reservations until usage is refreshed (integer value)
#until_refresh = 0
# Number of seconds between subsequent usage refreshes (integer value)
#max_age = 0
# Default driver to use for quota checks (string value)
#quota_driver = cinder.quota.DbQuotaDriver
# Enables or disables use of default quota class with default quota. (boolean
# value)
#use_default_quota_class = true
# Max size allowed per volume, in gigabytes (integer value)
#per_volume_size_limit = -1
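# Example (illustrative limits): tighter per-project quotas with a 500 GB cap
# on individual volumes:
#quota_volumes = 20
#quota_gigabytes = 2000
#per_volume_size_limit = 500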
# The configuration file for the Cinder Huawei driver. (string value)
#cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
# The remote device that hypermetro will use. (string value)
#hypermetro_devices = <None>
# The remote metro device san user. (string value)
#metro_san_user = <None>
# The remote metro device san password. (string value)
#metro_san_password = <None>
# The remote metro device domain name. (string value)
#metro_domain_name = <None>
# The remote metro device request url. (string value)
#metro_san_address = <None>
# The remote metro device pool names. (string value)
#metro_storage_pools = <None>
# Volume on Synology storage to be used for creating LUNs. (string value)
#synology_pool_name =
# Management port for Synology storage. (port value)
# Minimum value: 0
# Maximum value: 65535
#synology_admin_port = 5000
# Administrator of Synology storage. (string value)
#synology_username = admin
# Administrator password for logging in to Synology storage. (string value)
#synology_password =
# Whether to do certificate validation if $driver_use_ssl is True (boolean
# value)
#synology_ssl_verify = true
# One-time password of the administrator for logging in to Synology storage if
# OTP is enabled. (string value)
#synology_one_time_pass = <None>
# Device ID used to skip the one-time password check when logging in to
# Synology storage if OTP is enabled. (string value)
#synology_device_id = <None>
# Storage Center System Serial Number (integer value)
#dell_sc_ssn = 64702
# Dell API port (port value)
# Minimum value: 0
# Maximum value: 65535
#dell_sc_api_port = 3033
# Name of the server folder to use on the Storage Center (string value)
#dell_sc_server_folder = openstack
# Name of the volume folder to use on the Storage Center (string value)
#dell_sc_volume_folder = openstack
# Enable HTTPS SC certificate verification (boolean value)
#dell_sc_verify_cert = false
# IP address of secondary DSM controller (string value)
#secondary_san_ip =
# Secondary DSM user name (string value)
#secondary_san_login = Admin
# Secondary DSM user password (string value)
#secondary_san_password =
# Secondary Dell API port (port value)
# Minimum value: 0
# Maximum value: 65535
#secondary_sc_api_port = 3033
# Domain IP to be excluded from iSCSI returns. (IP address value)
#excluded_domain_ip = <None>
# Server OS type to use when creating a new server on the Storage Center.
# (string value)
#dell_server_os = Red Hat Linux 6.x
# Which filter class names to use for filtering hosts when not specified in the
# request. (list value)
#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
# Which weigher class names to use for weighing hosts. (list value)
#scheduler_default_weighers = CapacityWeigher
# Which handler to use for selecting the host/pool after weighing (string
# value)
#scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler
# Default scheduler driver to use (string value)
#scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
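# Example (illustrative): to schedule purely on capacity, trim the filter and
# weigher lists; filters not listed are simply not applied:
#scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter
#scheduler_default_weighers = CapacityWeigher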
# Base dir containing mount point for NFS share. (string value)
#backup_mount_point_base = $state_path/backup_mount
# NFS share in hostname:path, ipv4addr:path, or "[ipv6addr]:path" format.
# (string value)
#backup_share = <None>
# Mount options passed to the NFS client. See NFS man page for details. (string
# value)
#backup_mount_options = <None>
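# Example (server address and export path are placeholders): backing up to an
# NFS share:
#backup_driver = cinder.backup.drivers.nfs
#backup_share = 10.0.5.10:/srv/cinder-backup
#backup_mount_options = vers=4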
# IP address/hostname of Blockbridge API. (string value)
#blockbridge_api_host = <None>
# Override HTTPS port to connect to Blockbridge API server. (integer value)
#blockbridge_api_port = <None>
# Blockbridge API authentication scheme (token or password) (string value)
# Allowed values: token, password
#blockbridge_auth_scheme = token
# Blockbridge API token (for auth scheme 'token') (string value)
#blockbridge_auth_token = <None>
# Blockbridge API user (for auth scheme 'password') (string value)
#blockbridge_auth_user = <None>
# Blockbridge API password (for auth scheme 'password') (string value)
#blockbridge_auth_password = <None>
# Defines the set of exposed pools and their associated backend query strings
# (dict value)
#blockbridge_pools = OpenStack:+openstack
# Default pool name if unspecified. (string value)
#blockbridge_default_pool = <None>
# Absolute path to scheduler configuration JSON file. (string value)
#scheduler_json_config_location =
# Data path IP address (string value)
#zfssa_data_ip = <None>
# HTTPS port number (string value)
#zfssa_https_port = 443
# Options to be passed while mounting share over nfs (string value)
#zfssa_nfs_mount_options =
# Storage pool name. (string value)
#zfssa_nfs_pool =
# Project name. (string value)
#zfssa_nfs_project = NFSProject
# Share name. (string value)
#zfssa_nfs_share = nfs_share
# Data compression. (string value)
# Allowed values: off, lzjb, gzip-2, gzip, gzip-9
#zfssa_nfs_share_compression = off
# Synchronous write bias-latency, throughput. (string value)
# Allowed values: latency, throughput
#zfssa_nfs_share_logbias = latency
# Name of directory inside zfssa_nfs_share where cache volumes are stored.
# (string value)
#zfssa_cache_directory = os-cinder-cache
# Whether to use thin storage allocation. (boolean value)
#dsware_isthin = false
# FusionStorage manager IP address for cinder-volume. (string value)
#dsware_manager =
# FusionStorage agent IP address range. (string value)
#fusionstorageagent =
# Pool type, like sata-2copy. (string value)
#pool_type = default
# Pool id permit to use. (list value)
#pool_id_filter =
# Create clone volume timeout. (integer value)
#clone_volume_timeout = 680
# DEPRECATED: If the volume-type name contains this substring, a nodedup volume
# will be created; otherwise, a dedup volume will be created. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option is deprecated in favour of 'kaminario:thin_prov_type' in
# extra-specs and will be removed in the next release.
#kaminario_nodedup_substring = K2-nodedup
# The IP of DMS client socket server (IP address value)
#disco_client = 127.0.0.1
# The port to connect DMS client socket server (port value)
# Minimum value: 0
# Maximum value: 65535
#disco_client_port = 9898
# Path to the wsdl file to communicate with DISCO request manager (string
# value)
#disco_wsdl_path = /etc/cinder/DISCOService.wsdl
# Prefix before volume name to differentiate DISCO volume created through
# openstack and the other ones (string value)
#volume_name_prefix = openstack-
# How long to check whether a snapshot is finished before giving up (integer
# value)
#snapshot_check_timeout = 3600
# How long to check whether a restore is finished before giving up (integer
# value)
#restore_check_timeout = 3600
# How long to check whether a clone is finished before giving up (integer
# value)
#clone_check_timeout = 3600
# How long to wait before retrying to get an item's details (integer value)
#retry_interval = 1
# Space network name to use for data transfer (string value)
#hgst_net = Net 1 (IPv4)
# Comma-separated list of Space storage servers:devices. Example:
# os1_stor:gbd0,os2_stor:gbd0 (string value)
#hgst_storage_servers = os:gbd0
# Should spaces be redundantly stored (1/0) (string value)
#hgst_redundancy = 0
# User to own created spaces (string value)
#hgst_space_user = root
# Group to own created spaces (string value)
#hgst_space_group = disk
# UNIX mode for created spaces (string value)
#hgst_space_mode = 0600
# Minimum message lifetime, in seconds. (integer value)
#message_ttl = 2592000
# Directory used for temporary storage during image conversion (string value)
#image_conversion_dir = $state_path/conversion
# Match this value when searching for nova in the service catalog. Format is
# colon-separated values of the form <service_type>:<service_name>:<endpoint_type>.
# (string value)
#nova_catalog_info = compute:Compute Service:publicURL
# Same as nova_catalog_info, but for admin endpoint. (string value)
#nova_catalog_admin_info = compute:Compute Service:adminURL
# Override service catalog lookup with template for nova endpoint e.g.
# http://localhost:8774/v2/%(project_id)s (string value)
#nova_endpoint_template = <None>
# Same as nova_endpoint_template, but for admin endpoint. (string value)
#nova_endpoint_admin_template = <None>
# Region name of this node (string value)
#os_region_name = <None>
# Location of ca certificates file to use for nova client requests. (string
# value)
#nova_ca_certificates_file = <None>
# Allow to perform insecure SSL requests to nova (boolean value)
#nova_api_insecure = false
# DEPRECATED: This option no longer has any effect. It is deprecated and will
# be removed in the next release. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#flashsystem_multipath_enabled = false
# DPL pool uuid in which DPL volumes are stored. (string value)
#dpl_pool =
# DPL port number. (port value)
# Minimum value: 0
# Maximum value: 65535
#dpl_port = 8357
# Request for FC Zone creating host group (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_zoning_request
#hpexp_zoning_request = false
# Type of storage command line interface (string value)
# Deprecated group/name - [DEFAULT]/hpxp_storage_cli
#hpexp_storage_cli = <None>
# ID of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_storage_id
#hpexp_storage_id = <None>
# Pool of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_pool
#hpexp_pool = <None>
# Thin pool of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_thin_pool
#hpexp_thin_pool = <None>
# Logical device range of storage system (string value)
# Deprecated group/name - [DEFAULT]/hpxp_ldev_range
#hpexp_ldev_range = <None>
# Default copy method of storage system. There are two valid values: "FULL"
# specifies a full copy; "THIN" specifies a thin copy. The default value is
# "FULL". (string value)
# Deprecated group/name - [DEFAULT]/hpxp_default_copy_method
#hpexp_default_copy_method = FULL
# Copy speed of storage system (integer value)
# Deprecated group/name - [DEFAULT]/hpxp_copy_speed
#hpexp_copy_speed = 3
# Interval to check copy (integer value)
# Deprecated group/name - [DEFAULT]/hpxp_copy_check_interval
#hpexp_copy_check_interval = 3
# Interval to check copy asynchronously (integer value)
# Deprecated group/name - [DEFAULT]/hpxp_async_copy_check_interval
#hpexp_async_copy_check_interval = 10
# Target port names for host group or iSCSI target (list value)
# Deprecated group/name - [DEFAULT]/hpxp_target_ports
#hpexp_target_ports = <None>
# Target port names of compute node for host group or iSCSI target (list value)
# Deprecated group/name - [DEFAULT]/hpxp_compute_target_ports
#hpexp_compute_target_ports = <None>
# Request for creating host group or iSCSI target (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_group_request
#hpexp_group_request = false
# Instance numbers for HORCM (list value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_numbers
#hpexp_horcm_numbers = 200,201
# Username of storage system for HORCM (string value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_user
#hpexp_horcm_user = <None>
# Add to HORCM configuration (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_add_conf
#hpexp_horcm_add_conf = true
# Resource group name of storage system for HORCM (string value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_resource_name
#hpexp_horcm_resource_name = meta_resource
# Only discover a specific name of host group or iSCSI target (boolean value)
# Deprecated group/name - [DEFAULT]/hpxp_horcm_name_only_discovery
#hpexp_horcm_name_only_discovery = false
# Add CHAP user (boolean value)
#hitachi_add_chap_user = false
# iSCSI authentication method (string value)
#hitachi_auth_method = <None>
# iSCSI authentication username (string value)
#hitachi_auth_user = HBSD-CHAP-user
# iSCSI authentication password (string value)
#hitachi_auth_password = HBSD-CHAP-password
# Driver to use for volume creation (string value)
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Timeout for creating the volume to migrate to when performing volume
# migration (seconds) (integer value)
#migration_create_volume_timeout_secs = 300
# Offload pending volume delete during volume service startup (boolean value)
#volume_service_inithost_offload = false
# FC Zoning mode configured (string value)
#zoning_mode = <None>
# User defined capabilities, a JSON formatted string specifying key/value
# pairs. The key/value pairs can be used by the CapabilitiesFilter to select
# between backends when requests specify volume types. For example, specify a
# service level or the geographical location of a backend, then create a
# volume type that allows users to select by these properties.
# (string value)
#extra_capabilities = {}
# Suppress requests library SSL certificate warnings. (boolean value)
#suppress_requests_ssl_warnings = false
# Default iSCSI Port ID of FlashSystem. (Default port is 0.) (integer value)
#flashsystem_iscsi_portid = 0
# Create volumes in this pool (string value)
#tegile_default_pool = <None>
# Create volumes in this project (string value)
#tegile_default_project = <None>
# Connection protocol should be FC. (Default is FC.) (string value)
#flashsystem_connection_protocol = FC
# Allows vdisk to multi host mapping. (Default is True) (boolean value)
#flashsystem_multihostmap_enabled = true
# Enables the Force option on upload_to_image. This enables running
# upload_volume on in-use volumes for backends that support it. (boolean value)
#enable_force_upload = false
# Create volume from snapshot at the host where snapshot resides (boolean
# value)
#snapshot_same_host = true
# Ensure that the new volumes are the same AZ as snapshot or source volume
# (boolean value)
#cloned_volume_same_az = true
# Cache volume availability zones in memory for the provided duration in
# seconds (integer value)
#az_cache_duration = 3600
# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1 (string value)
# Deprecated group/name - [DEFAULT]/hp3par_api_url
#hpe3par_api_url =
# 3PAR username with the 'edit' role (string value)
# Deprecated group/name - [DEFAULT]/hp3par_username
#hpe3par_username =
# 3PAR password for the user specified in hpe3par_username (string value)
# Deprecated group/name - [DEFAULT]/hp3par_password
#hpe3par_password =
# List of the CPG(s) to use for volume creation (list value)
# Deprecated group/name - [DEFAULT]/hp3par_cpg
#hpe3par_cpg = OpenStack
# The CPG to use for snapshots of volumes. If empty, the userCPG will be
# used. (string value)
# Deprecated group/name - [DEFAULT]/hp3par_cpg_snap
#hpe3par_cpg_snap =
# The time in hours to retain a snapshot. You can't delete it before this
# expires. (string value)
# Deprecated group/name - [DEFAULT]/hp3par_snapshot_retention
#hpe3par_snapshot_retention =
# The time in hours when a snapshot expires and is deleted. This must be
# larger than the retention time set in hpe3par_snapshot_retention (string
# value)
# Deprecated group/name - [DEFAULT]/hp3par_snapshot_expiration
#hpe3par_snapshot_expiration =
# Enable HTTP debugging to 3PAR (boolean value)
# Deprecated group/name - [DEFAULT]/hp3par_debug
#hpe3par_debug = false
# List of target iSCSI addresses to use. (list value)
# Deprecated group/name - [DEFAULT]/hp3par_iscsi_ips
#hpe3par_iscsi_ips =
# Enable CHAP authentication for iSCSI connections. (boolean value)
# Deprecated group/name - [DEFAULT]/hp3par_iscsi_chap_enabled
#hpe3par_iscsi_chap_enabled = false
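#
# Example (illustrative only, not generated from code): a minimal HPE 3PAR
# iSCSI backend using the options above. The URL, credentials, CPG name, and
# iSCSI IPs are placeholders; replace them with values for your array.
#
# hpe3par_api_url = https://3par.example.com:8080/api/v1
# hpe3par_username = 3paruser
# hpe3par_password = 3parpass
# hpe3par_cpg = OpenStackCPG
# hpe3par_iscsi_ips = 10.0.0.11,10.0.0.12
# hpe3par_iscsi_chap_enabled = true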
# Datera API port. (string value)
#datera_api_port = 7717
# Datera API version. (string value)
#datera_api_version = 2
# DEPRECATED: Number of replicas to create of an inode. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#datera_num_replicas = 3
# Timeout for HTTP 503 retry messages (integer value)
#datera_503_timeout = 120
# Interval between 503 retries (integer value)
#datera_503_interval = 5
# Set to True to log function arguments and return values (boolean value)
#datera_debug = false
# DEPRECATED: Set to True to apply the 'allow_all' ACL to created volumes
# (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#datera_acl_allow_all = false
# ONLY FOR DEBUG/TESTING PURPOSES: set to True to set replica_count to 1
# (boolean value)
#datera_debug_replica_count_override = false
# VPSA - Use ISER instead of iSCSI (boolean value)
#zadara_use_iser = true
# VPSA - Management Host name or IP address (string value)
#zadara_vpsa_host = <None>
# VPSA - Port number (port value)
# Minimum value: 0
# Maximum value: 65535
#zadara_vpsa_port = <None>
# VPSA - Use SSL connection (boolean value)
#zadara_vpsa_use_ssl = false
# VPSA - Username (string value)
#zadara_user = <None>
# VPSA - Password (string value)
#zadara_password = <None>
# VPSA - Storage Pool assigned for volumes (string value)
#zadara_vpsa_poolname = <None>
# VPSA - Default encryption policy for volumes (boolean value)
#zadara_vol_encrypt = false
# VPSA - Default template for VPSA volume names (string value)
#zadara_vol_name_template = OS_%s
# VPSA - Attach snapshot policy for volumes (boolean value)
#zadara_default_snap_policy = false
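#
# Example (illustrative only): connecting to a Zadara VPSA over SSL with the
# options above. Host, credentials, and pool name are placeholders.
#
# zadara_vpsa_host = vpsa.example.com
# zadara_vpsa_port = 443
# zadara_vpsa_use_ssl = true
# zadara_user = openstack
# zadara_password = ZADARA_PASS
# zadara_vpsa_poolname = pool-00010001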
# List of all available devices (list value)
#available_devices =
# URL to the Quobyte volume, e.g. quobyte://<DIR host>/<volume name> (string
# value)
#quobyte_volume_url = <None>
# Path to a Quobyte Client configuration file. (string value)
#quobyte_client_cfg = <None>
# Create volumes as sparse files which take no space. If set to False, the
# volume is created as a regular file; in that case volume creation takes a
# lot of time. (boolean value)
#quobyte_sparsed_volumes = true
# Create volumes as QCOW2 files rather than raw files. (boolean value)
#quobyte_qcow2_volumes = true
# Base dir containing the mount point for the Quobyte volume. (string value)
#quobyte_mount_point_base = $state_path/mnt
# File with the list of available vzstorage shares. (string value)
#vzstorage_shares_config = /etc/cinder/vzstorage_shares
# Create volumes as sparse files which take no space, rather than regular
# files, when using raw format; with regular files, volume creation takes a
# lot of time. (boolean value)
#vzstorage_sparsed_volumes = true
# Percent of ACTUAL usage of the underlying volume before no new volumes can be
# allocated to the volume destination. (floating point value)
#vzstorage_used_ratio = 0.95
# Base dir containing mount points for vzstorage shares. (string value)
#vzstorage_mount_point_base = $state_path/mnt
# Mount options passed to the vzstorage client. See section of the pstorage-
# mount man page for details. (list value)
#vzstorage_mount_options = <None>
# Default format that will be used when creating volumes if no volume format is
# specified. (string value)
#vzstorage_default_volume_format = raw
# File with the list of available NFS shares (string value)
#nfs_shares_config = /etc/cinder/nfs_shares
# Create volumes as sparse files which take no space. If set to False, the
# volume is created as a regular file; in that case volume creation takes a
# lot of time. (boolean value)
#nfs_sparsed_volumes = true
# Base dir containing mount points for NFS shares. (string value)
#nfs_mount_point_base = $state_path/mnt
# Mount options passed to the NFS client. See section of the NFS man page for
# details. (string value)
#nfs_mount_options = <None>
# The number of attempts to mount NFS shares before raising an error. At least
# one attempt will be made to mount an NFS share, regardless of the value
# specified. (integer value)
#nfs_mount_attempts = 3
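#
# Example (illustrative only): a basic NFS backend. The shares file must list
# one share per line, e.g. "nfs.example.com:/export/cinder"; the mount
# options shown are placeholders for your environment.
#
# nfs_shares_config = /etc/cinder/nfs_shares
# nfs_sparsed_volumes = true
# nfs_mount_options = vers=4,minorversion=1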
#
# From oslo.config
#
# Path to a config file to use. Multiple config files can be specified, with
# values in later files taking precedence. Defaults to %(default)s. (unknown
# value)
#config_file = ~/.project/project.conf,~/project.conf,/etc/project/project.conf,/etc/project.conf
# Path to a config directory to pull *.conf files from. This file set is
# sorted, so as to provide a predictable parse order if individual options
# are overridden. The set is parsed after the file(s) specified via previous
# --config-file arguments, hence overridden options in the directory take
# precedence. (list value)
#config_dir = <None>
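#
# Example (illustrative only): combining both options. Files in the directory
# are parsed after the named file, so their values win on conflict. The paths
# are placeholders.
#
# config_file = /etc/cinder/cinder.conf
# config_dir = /etc/cinder/cinder.conf.d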
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Use a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler opens a new log file with the specified
# path instantaneously. It makes sense only if the log_file option is
# specified and the Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
#
# From oslo.messaging
#
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2
# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit
# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack
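#
# Example (illustrative only): pointing the RPC layer at a RabbitMQ broker
# via transport_url instead of the deprecated rpc_backend option. Host and
# credentials are placeholders.
#
# transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/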
#
# From oslo.service.periodic_task
#
# Some periodic tasks can be run in a separate process. Should we run them
# here? (boolean value)
#run_external_periodic_tasks = true
#
# From oslo.service.service
#
# Enable eventlet backdoor. Acceptable values are 0, <port>, and
# <start>:<end>, where 0 results in listening on a random tcp port number;
# <port> results in listening on the specified port number (and not enabling
# backdoor if that port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range of port numbers.
# The chosen port is displayed in the service's log file. (string value)
#backdoor_port = <None>
# Enable eventlet backdoor, using the provided path as a unix socket that can
# receive connections. This option is mutually exclusive with 'backdoor_port'
# in that only one should be provided. If both are provided then the existence
# of this option overrides the usage of that option. (string value)
#backdoor_socket = <None>
# Enables or disables logging values of all registered options when starting a
# service (at DEBUG level). (boolean value)
#log_options = true
# Specify a timeout after which a gracefully shutdown server will exit. Zero
# value means endless wait. (integer value)
#graceful_shutdown_timeout = 60
#
# From oslo.service.wsgi
#
# File name for the paste.deploy config for api service (string value)
#api_paste_config = api-paste.ini
# A python format string that is used as the template to generate log lines.
# The following values can be formatted into it: client_ip, date_time,
# request_line, status_code, body_length, wall_seconds. (string value)
#wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
# Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not
# supported on OS X. (integer value)
#tcp_keepidle = 600
# Size of the pool of greenthreads used by wsgi (integer value)
#wsgi_default_pool_size = 100
# Maximum line size of message headers to be accepted. max_header_line may need
# to be increased when using large tokens (typically those generated when
# keystone is configured to use PKI tokens with big service catalogs). (integer
# value)
#max_header_line = 16384
# If False, closes the client socket connection explicitly. (boolean value)
#wsgi_keep_alive = true
# Timeout for client connections' socket operations. If an incoming connection
# is idle for this number of seconds it will be closed. A value of '0' means
# wait forever. (integer value)
#client_socket_timeout = 900
[BACKEND]
#
# From cinder
#
# Backend override of host value. (string value)
# Deprecated group/name - [BACKEND]/host
#backend_host = <None>
[BRCD_FABRIC_EXAMPLE]
#
# From cinder
#
# South bound connector for the fabric. (string value)
# Allowed values: SSH, HTTP, HTTPS
#fc_southbound_protocol = HTTP
# Management IP of fabric. (string value)
#fc_fabric_address =
# Fabric user ID. (string value)
#fc_fabric_user =
# Password for user. (string value)
#fc_fabric_password =
# Connecting port (port value)
# Minimum value: 0
# Maximum value: 65535
#fc_fabric_port = 22
# Local SSH certificate Path. (string value)
#fc_fabric_ssh_cert_path =
# Overridden zoning policy. (string value)
#zoning_policy = initiator-target
# Overridden zoning activation state. (boolean value)
#zone_activate = true
# Overridden zone name prefix. (string value)
#zone_name_prefix = openstack
# Virtual Fabric ID. (string value)
#fc_virtual_fabric_id = <None>
# DEPRECATED: Principal switch WWN of the fabric. This option is not used
# anymore. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#principal_switch_wwn = <None>
[CISCO_FABRIC_EXAMPLE]
#
# From cinder
#
# Management IP of fabric (string value)
#cisco_fc_fabric_address =
# Fabric user ID (string value)
#cisco_fc_fabric_user =
# Password for user (string value)
#cisco_fc_fabric_password =
# Connecting port (port value)
# Minimum value: 0
# Maximum value: 65535
#cisco_fc_fabric_port = 22
# overridden zoning policy (string value)
#cisco_zoning_policy = initiator-target
# overridden zoning activation state (boolean value)
#cisco_zone_activate = true
# overridden zone name prefix (string value)
#cisco_zone_name_prefix = <None>
# VSAN of the Fabric (string value)
#cisco_zoning_vsan = <None>
[COORDINATION]
#
# From cinder
#
# The backend URL to use for distributed coordination. (string value)
#backend_url = file://$state_path
# Number of seconds between heartbeats for distributed coordination. (floating
# point value)
#heartbeat = 1.0
# Initial number of seconds to wait after failed reconnection. (floating point
# value)
#initial_reconnect_backoff = 0.1
# Maximum number of seconds between sequential reconnection retries. (floating
# point value)
#max_reconnect_backoff = 60.0
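#
# Example (illustrative only): using a shared Tooz backend instead of the
# default file backend, so multiple volume services can coordinate. Redis is
# one supported Tooz backend; the host shown is a placeholder.
#
# backend_url = redis://coordinator.example.com:6379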
[FC-ZONE-MANAGER]
#
# From cinder
#
# South bound connector for zoning operation (string value)
#brcd_sb_connector = HTTP
# FC Zone Driver responsible for zone management (string value)
#zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
# Zoning policy configured by user; valid values include "initiator-target" or
# "initiator" (string value)
#zoning_policy = initiator-target
# Comma separated list of Fibre Channel fabric names. This list of names is
# used to retrieve other SAN credentials for connecting to each SAN fabric
# (string value)
#fc_fabric_names = <None>
# FC SAN Lookup Service (string value)
#fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
# Set this to True when you want to allow an unsupported zone manager driver
# to start. Drivers that haven't maintained a working CI system and testing
# are marked as unsupported until CI is working again. This also marks a
# driver as deprecated, and it may be removed in the next release. (boolean
# value)
#enable_unsupported_driver = false
# Southbound connector for zoning operation (string value)
#cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
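#
# Example (illustrative only): enabling the Brocade zone driver and tying it
# to a fabric configuration section. The fabric name must match a section
# heading in this file, such as [BRCD_FABRIC_EXAMPLE] above.
#
# zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
# fc_fabric_names = BRCD_FABRIC_EXAMPLE
# zoning_policy = initiator-target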
[KEY_MANAGER]
#
# From cinder
#
# Fixed key returned by key manager, specified in hex (string value)
# Deprecated group/name - [keymgr]/fixed_key
#fixed_key = <None>
[barbican]
#
# From castellan.config
#
# Use this endpoint to connect to Barbican, for example:
# "http://localhost:9311/" (string value)
#barbican_endpoint = <None>
# Version of the Barbican API, for example: "v1" (string value)
#barbican_api_version = <None>
# Use this endpoint to connect to Keystone (string value)
#auth_endpoint = http://localhost:5000/v3
# Number of seconds to wait before retrying poll for key creation completion
# (integer value)
#retry_delay = 1
# Number of times to retry poll for key creation completion (integer value)
#number_of_retries = 60
[cors]
#
# From oslo.middleware
#
# Indicate whether this resource may be shared with the domain received in
# the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-API-Version
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH,HEAD
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID,X-Trace-Info,X-Trace-HMAC,OpenStack-API-Version
[cors.subdomain]
#
# From oslo.middleware
#
# Indicate whether this resource may be shared with the domain received in
# the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-API-Version
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH,HEAD
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID,X-Trace-Info,X-Trace-HMAC,OpenStack-API-Version
[database]
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
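#
# Example (illustrative only): a typical MySQL connection string, matching
# the format shown in the conventions section of this guide. Credentials,
# host, and pool size are placeholders.
#
# connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
# max_pool_size = 10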
[key_manager]
#
# From castellan.config
#
# The full class name of the key manager API class (string value)
#api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager
# The type of authentication credential to create. Possible values are 'token',
# 'password', 'keystone_token', and 'keystone_password'. Required if no context
# is passed to the credential factory. (string value)
#auth_type = <None>
# Token for authentication. Required for 'token' and 'keystone_token' auth_type
# if no context is passed to the credential factory. (string value)
#token = <None>
# Username for authentication. Required for 'password' auth_type. Optional for
# the 'keystone_password' auth_type. (string value)
#username = <None>
# Password for authentication. Required for 'password' and 'keystone_password'
# auth_type. (string value)
#password = <None>
# User ID for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_id = <None>
# User's domain ID for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_domain_id = <None>
# User's domain name for authentication. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#user_domain_name = <None>
# Trust ID for trust scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#trust_id = <None>
# Domain ID for domain scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#domain_id = <None>
# Domain name for domain scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#domain_name = <None>
# Project ID for project scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_id = <None>
# Project name for project scoping. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_name = <None>
# Project's domain ID for project. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_domain_id = <None>
# Project's domain name for project. Optional for 'keystone_token' and
# 'keystone_password' auth_type. (string value)
#project_domain_name = <None>
# Allow fetching a new token if the current one is going to expire. Optional
# for 'keystone_token' and 'keystone_password' auth_type. (boolean value)
#reauthenticate = true
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. (string
# value)
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# How many times to retry reconnecting when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3
# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if identity server requires client certificate (string value)
#certfile = <None>
# Required if identity server requires client certificate (string value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# If true, do not verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300
# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance. Only
# valid for PKI tokens. (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool only works under Python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it
# if not. "strict" like "permissive" but if the bind type is unknown the token
# will be rejected. "required" any form of token binding is needed to be
# allowed. Finally the name of a binding method that must be present in tokens.
# (string value)
#enforce_token_bind = permissive
# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false
# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
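As a sketch of how these options fit together (all values below are site-specific placeholders, and options such as ``username`` and ``project_name`` come from the ``password`` auth plugin selected by ``auth_type``, not from the list above), a minimal working ``[keystone_authtoken]`` configuration might look like:

```ini
[keystone_authtoken]
# Unversioned public Identity endpoint that unauthenticated clients
# are redirected to.
auth_uri = http://controller:5000
# Identity endpoint the service itself uses to validate tokens
# (an option of the "password" auth plugin, placeholder host).
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
```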
[matchmaker_redis]
#
# From oslo.messaging
#
# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1
# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379
# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =
# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000
[oslo_concurrency]
#
# From oslo.concurrency
#
# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false
# Directory to use for lock files. For security, the specified directory
# should only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1
# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2
# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30
# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10
# The deadline for an rpc reply message delivery. Only used when caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30
# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30
# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30
# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy' - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic' - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic
# Address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# Address prefix used when sending to any server in the group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc
# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify
# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast
# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast
# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast
# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>
# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200
# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100
# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
[oslo_messaging_notifications]
#
# From oslo.messaging
#
# The driver(s) to handle sending notifications. Possible values are:
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
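For example, to emit notifications through the same messaging backend used for RPC using the 2.0 message format, a deployment might set (a sketch; topic names vary per site):

```ini
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
```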
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
# be used. This option may not be available in future versions. (string value)
#kombu_compression = <None>
# How long to wait for a missing client before abandoning the attempt to send
# its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost
# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672
# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest
# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
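The deprecated single-node connection options above (``rabbit_host``, ``rabbit_port``, ``rabbit_userid``, ``rabbit_password``, and ``rabbit_virtual_host``) are replaced by a single URL in the ``[DEFAULT]`` section. A sketch, with placeholder credentials and host:

```ini
[DEFAULT]
# Equivalent to rabbit_userid=openstack, rabbit_password=RABBIT_PASS,
# rabbit_host=controller, rabbit_port=5672, rabbit_virtual_host=/
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```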
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30
# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0
# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800
# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0
# Number of seconds after which the RabbitMQ broker is considered down if the
# heartbeat keep-alive fails (0 disables the heartbeat). EXPERIMENTAL (integer
# value)
#heartbeat_timeout_threshold = 60
# How many times the heartbeat is checked during the
# heartbeat_timeout_threshold. (integer value)
#heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
# Maximum number of channels to allow (integer value)
#channel_max = <None>
# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>
# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3
# Enable SSL (boolean value)
#ssl = <None>
# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>
# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25
# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
# value)
#tcp_user_timeout = 0.25
# Set the delay for reconnecting to a host that has a connection error.
# (floating point value)
#host_connection_reconnect_delay = 0.25
# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single
# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30
# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0
# Default number of seconds to wait for a connection to become available
# (integer value)
#pool_timeout = 30
# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600
# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60
# Persist notification messages. (boolean value)
#notification_persistence = false
# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification
# Maximum number of unacknowledged messages that RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25
# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60
# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc
# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending
# reply. -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25
# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual retry count is not 0, the
# RPC request could be processed more than once (integer value)
#default_rpc_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25
[oslo_messaging_zmq]
#
# From oslo.messaging
#
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about an existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
[oslo_middleware]
#
# From oslo.middleware
#
# The maximum body size for each request, in bytes. (integer value)
# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
# Deprecated group/name - [DEFAULT]/max_request_body_size
#max_request_body_size = 114688
# DEPRECATED: The HTTP Header that will be used to determine what the original
# request protocol scheme was, even if it was hidden by a SSL termination
# proxy. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#secure_proxy_ssl_header = X-Forwarded-Proto
# Whether the application is behind a proxy or not. This determines if the
# middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
[oslo_reports]
#
# From oslo.reports
#
# Path to a log directory where to create a file (string value)
#log_dir = <None>
# The path to a file to watch for changes to trigger the reports, instead of
# signals. Setting this option disables the signal trigger for the reports. If
# application is running as a WSGI application it is recommended to use this
# instead of signals. (string value)
#file_event_handler = <None>
# How many seconds to wait between polls when file_event_handler is set
# (integer value)
#file_event_handler_interval = 1
[oslo_versionedobjects]
#
# From oslo.versionedobjects
#
# Make exception message format errors fatal (boolean value)
#fatal_exception_format_errors = false
[ssl]
#
# From oslo.service.sslutils
#
# CA certificate file to use to verify connecting clients. (string value)
# Deprecated group/name - [DEFAULT]/ssl_ca_file
#ca_file = <None>
# Certificate file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_cert_file
#cert_file = <None>
# Private key file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_key_file
#key_file = <None>
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
#version = <None>
# Sets the list of available ciphers. The value should be a string in the
# OpenSSL cipher list format. (string value)
#ciphers = <None>
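As a sketch (the file paths are illustrative), enabling TLS for the service listener requires at least a certificate and its private key:

```ini
[ssl]
cert_file = /etc/cinder/ssl/cert.pem
key_file = /etc/cinder/ssl/key.pem
# Optionally require and verify client certificates:
# ca_file = /etc/cinder/ssl/ca.pem
```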
Use the api-paste.ini file to configure the Block Storage API service.
#############
# OpenStack #
#############
[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2
/v3: openstack_volume_api_v3
[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv1
keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1
keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv1
[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv2
keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv2
keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv2
[composite:openstack_volume_api_v3]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv3
keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv3
keystone_nolimit = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv3
[filter:request_id]
paste.filter_factory = oslo_middleware.request_id:RequestId.factory
[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = cinder
[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory
[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory
[filter:sizelimit]
paste.filter_factory = oslo_middleware.sizelimit:RequestBodySizeLimiter.factory
[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory
[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory
[app:apiv3]
paste.app_factory = cinder.api.v3.router:APIRouter.factory
[pipeline:apiversions]
pipeline = cors http_proxy_to_wsgi faultwrap osvolumeversionapp
[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
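The ``pipeline_factory`` chooses among the ``noauth``, ``keystone``, and ``keystone_nolimit`` pipelines based on the Block Storage service configuration; in a typical deployment, Identity-based authentication is selected in ``cinder.conf``:

```ini
[DEFAULT]
auth_strategy = keystone
```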
The policy.json file defines additional access controls that apply to the Block Storage service.
{
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"admin_api": "is_admin:True",
"volume:create": "",
"volume:delete": "rule:admin_or_owner",
"volume:get": "rule:admin_or_owner",
"volume:get_all": "rule:admin_or_owner",
"volume:get_volume_metadata": "rule:admin_or_owner",
"volume:create_volume_metadata": "rule:admin_or_owner",
"volume:delete_volume_metadata": "rule:admin_or_owner",
"volume:update_volume_metadata": "rule:admin_or_owner",
"volume:get_volume_admin_metadata": "rule:admin_api",
"volume:update_volume_admin_metadata": "rule:admin_api",
"volume:get_snapshot": "rule:admin_or_owner",
"volume:get_all_snapshots": "rule:admin_or_owner",
"volume:create_snapshot": "rule:admin_or_owner",
"volume:delete_snapshot": "rule:admin_or_owner",
"volume:update_snapshot": "rule:admin_or_owner",
"volume:get_snapshot_metadata": "rule:admin_or_owner",
"volume:delete_snapshot_metadata": "rule:admin_or_owner",
"volume:update_snapshot_metadata": "rule:admin_or_owner",
"volume:extend": "rule:admin_or_owner",
"volume:update_readonly_flag": "rule:admin_or_owner",
"volume:retype": "rule:admin_or_owner",
"volume:update": "rule:admin_or_owner",
"volume_extension:types_manage": "rule:admin_api",
"volume_extension:types_extra_specs": "rule:admin_api",
"volume_extension:access_types_qos_specs_id": "rule:admin_api",
"volume_extension:access_types_extra_specs": "rule:admin_api",
"volume_extension:volume_type_access": "rule:admin_or_owner",
"volume_extension:volume_type_access:addProjectAccess": "rule:admin_api",
"volume_extension:volume_type_access:removeProjectAccess": "rule:admin_api",
"volume_extension:volume_type_encryption": "rule:admin_api",
"volume_extension:volume_encryption_metadata": "rule:admin_or_owner",
"volume_extension:extended_snapshot_attributes": "rule:admin_or_owner",
"volume_extension:volume_image_metadata": "rule:admin_or_owner",
"volume_extension:quotas:show": "",
"volume_extension:quotas:update": "rule:admin_api",
"volume_extension:quotas:delete": "rule:admin_api",
"volume_extension:quota_classes": "rule:admin_api",
"volume_extension:quota_classes:validate_setup_for_nested_quota_use": "rule:admin_api",
"volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
"volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
"volume_extension:backup_admin_actions:reset_status": "rule:admin_api",
"volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
"volume_extension:volume_admin_actions:force_detach": "rule:admin_api",
"volume_extension:snapshot_admin_actions:force_delete": "rule:admin_api",
"volume_extension:backup_admin_actions:force_delete": "rule:admin_api",
"volume_extension:volume_admin_actions:migrate_volume": "rule:admin_api",
"volume_extension:volume_admin_actions:migrate_volume_completion": "rule:admin_api",
"volume_extension:volume_actions:upload_public": "rule:admin_api",
"volume_extension:volume_actions:upload_image": "rule:admin_or_owner",
"volume_extension:volume_host_attribute": "rule:admin_api",
"volume_extension:volume_tenant_attribute": "rule:admin_or_owner",
"volume_extension:volume_mig_status_attribute": "rule:admin_api",
"volume_extension:hosts": "rule:admin_api",
"volume_extension:services:index": "rule:admin_api",
"volume_extension:services:update" : "rule:admin_api",
"volume_extension:volume_manage": "rule:admin_api",
"volume_extension:volume_unmanage": "rule:admin_api",
"volume_extension:list_manageable": "rule:admin_api",
"volume_extension:capabilities": "rule:admin_api",
"volume:create_transfer": "rule:admin_or_owner",
"volume:accept_transfer": "",
"volume:delete_transfer": "rule:admin_or_owner",
"volume:get_transfer": "rule:admin_or_owner",
"volume:get_all_transfers": "rule:admin_or_owner",
"volume_extension:replication:promote": "rule:admin_api",
"volume_extension:replication:reenable": "rule:admin_api",
"volume:failover_host": "rule:admin_api",
"volume:freeze_host": "rule:admin_api",
"volume:thaw_host": "rule:admin_api",
"backup:create" : "",
"backup:delete": "rule:admin_or_owner",
"backup:get": "rule:admin_or_owner",
"backup:get_all": "rule:admin_or_owner",
"backup:restore": "rule:admin_or_owner",
"backup:backup-import": "rule:admin_api",
"backup:backup-export": "rule:admin_api",
"backup:update": "rule:admin_or_owner",
"snapshot_extension:snapshot_actions:update_snapshot_status": "",
"snapshot_extension:snapshot_manage": "rule:admin_api",
"snapshot_extension:snapshot_unmanage": "rule:admin_api",
"snapshot_extension:list_manageable": "rule:admin_api",
"consistencygroup:create" : "group:nobody",
"consistencygroup:delete": "group:nobody",
"consistencygroup:update": "group:nobody",
"consistencygroup:get": "group:nobody",
"consistencygroup:get_all": "group:nobody",
"consistencygroup:create_cgsnapshot" : "group:nobody",
"consistencygroup:delete_cgsnapshot": "group:nobody",
"consistencygroup:get_cgsnapshot": "group:nobody",
"consistencygroup:get_all_cgsnapshots": "group:nobody",
"group:group_types_manage": "rule:admin_api",
"group:group_types_specs": "rule:admin_api",
"group:access_group_types_specs": "rule:admin_api",
"group:group_type_access": "rule:admin_or_owner",
"group:create" : "",
"group:delete": "rule:admin_or_owner",
"group:update": "rule:admin_or_owner",
"group:get": "rule:admin_or_owner",
"group:get_all": "rule:admin_or_owner",
"group:create_group_snapshot": "",
"group:delete_group_snapshot": "rule:admin_or_owner",
"group:update_group_snapshot": "rule:admin_or_owner",
"group:get_group_snapshot": "rule:admin_or_owner",
"group:get_all_group_snapshots": "rule:admin_or_owner",
"scheduler_extension:scheduler_stats:get_pools" : "rule:admin_api",
"message:delete": "rule:admin_or_owner",
"message:get": "rule:admin_or_owner",
"message:get_all": "rule:admin_or_owner",
"clusters:get": "rule:admin_api",
"clusters:get_all": "rule:admin_api",
"clusters:update": "rule:admin_api"
}
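Operators can tighten or relax individual rules by editing this file; rules not listed fall back to the defaults compiled into the service. As an illustrative fragment only (not a recommended policy), an operator who wants volume creation limited to project members, rather than left open as the empty string above allows, could override just that rule:

```json
{
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "volume:create": "rule:admin_or_owner"
}
```

An empty string (`""`) means the action is allowed for any authenticated user, so replacing it with a named rule is how access is narrowed.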
The rootwrap.conf file defines configuration values used by the rootwrap script when the Block Storage service must escalate its privileges to those of the root user.
# Configuration for cinder-rootwrap
# This file should be owned by (and only-writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
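Because these files follow the standard INI conventions described earlier (key=value pairs, `#` comment lines, a [DEFAULT] section), they can be read with ordinary INI tooling. A minimal Python sketch using only the standard-library configparser module, applied to a fragment of the rootwrap.conf shown above:

```python
import configparser

# A fragment of the rootwrap.conf sample above. Lines starting with '#'
# are comments; options are plain key=value pairs under [DEFAULT].
SAMPLE = """\
[DEFAULT]
# List of directories to load filter definitions from
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
use_syslog=False
syslog_log_level=ERROR
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

# Comma-separated list options are plain strings to configparser;
# splitting them into a list is up to the consumer.
paths = cfg.get("DEFAULT", "filters_path").split(",")

# Boolean options can be coerced with getboolean().
use_syslog = cfg.getboolean("DEFAULT", "use_syslog")
```

This is a generic parsing sketch, not how the services themselves load options (OpenStack uses oslo.config for that), but it shows the file format is unexceptional INI.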
Option = default value | (Type) Help string |
---|---|
[DEFAULT] additional_retry_list = |
(StrOpt) FSS additional retry list, separated by ; |
[DEFAULT] backup_swift_project = None |
(StrOpt) Swift project/account name. Required when connecting to an auth 3.0 system |
[DEFAULT] backup_swift_project_domain = None |
(StrOpt) Swift project domain name. Required when connecting to an auth 3.0 system |
[DEFAULT] backup_swift_user_domain = None |
(StrOpt) Swift user domain name. Required when connecting to an auth 3.0 system |
[DEFAULT] backup_use_temp_snapshot = False |
(BoolOpt) If this is set to True, the backup_use_temp_snapshot path will be used during the backup. Otherwise, it will use backup_use_temp_volume path. |
[DEFAULT] chap = disabled |
(StrOpt) CHAP authentication mode, effective only for iscsi (disabled|enabled) |
[DEFAULT] clone_volume_timeout = 680 |
(IntOpt) Create clone volume timeout. |
[DEFAULT] cluster = None |
(StrOpt) Name of this cluster. Used to group volume hosts that share the same backend configurations to work in HA Active-Active mode. Active-Active is not yet supported. |
[DEFAULT] connection_type = iscsi |
(StrOpt) Connection type to the IBM Storage Array |
[DEFAULT] coprhd_emulate_snapshot = False |
(BoolOpt) True | False to indicate if the storage array in CoprHD is VMAX or VPLEX |
[DEFAULT] coprhd_hostname = None |
(StrOpt) Hostname for the CoprHD Instance |
[DEFAULT] coprhd_password = None |
(StrOpt) Password for accessing the CoprHD Instance |
[DEFAULT] coprhd_port = 4443 |
(PortOpt) Port for the CoprHD Instance |
[DEFAULT] coprhd_project = None |
(StrOpt) Project to utilize within the CoprHD Instance |
[DEFAULT] coprhd_scaleio_rest_gateway_host = None |
(StrOpt) Rest Gateway IP or FQDN for Scaleio |
[DEFAULT] coprhd_scaleio_rest_gateway_port = 4984 |
(PortOpt) Rest Gateway Port for Scaleio |
[DEFAULT] coprhd_scaleio_rest_server_password = None |
(StrOpt) Rest Gateway Password |
[DEFAULT] coprhd_scaleio_rest_server_username = None |
(StrOpt) Username for Rest Gateway |
[DEFAULT] coprhd_tenant = None |
(StrOpt) Tenant to utilize within the CoprHD Instance |
[DEFAULT] coprhd_username = None |
(StrOpt) Username for accessing the CoprHD Instance |
[DEFAULT] coprhd_varray = None |
(StrOpt) Virtual Array to utilize within the CoprHD Instance |
[DEFAULT] datera_503_interval = 5 |
(IntOpt) Interval between 503 retries |
[DEFAULT] datera_503_timeout = 120 |
(IntOpt) Timeout for HTTP 503 retry messages |
[DEFAULT] datera_acl_allow_all = False |
(BoolOpt) True to set acl ‘allow_all’ on volumes created |
[DEFAULT] datera_debug = False |
(BoolOpt) True to set function arg and return logging |
[DEFAULT] datera_debug_replica_count_override = False |
(BoolOpt) ONLY FOR DEBUG/TESTING PURPOSES. True to set replica_count to 1 |
[DEFAULT] default_group_type = None |
(StrOpt) Default group type to use |
[DEFAULT] dell_server_os = Red Hat Linux 6.x |
(StrOpt) Server OS type to use when creating a new server on the Storage Center. |
[DEFAULT] drbdmanage_disk_options = {"c-min-rate": "4M"} |
(StrOpt) Disk options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. |
[DEFAULT] drbdmanage_net_options = {"connect-int": "4", "allow-two-primaries": "yes", "ko-count": "30", "max-buffers": "20000", "ping-timeout": "100"} |
(StrOpt) Net options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. |
[DEFAULT] drbdmanage_resource_options = {"auto-promote-timeout": "300"} |
(StrOpt) Resource options to set on new resources. See http://www.drbd.org/en/doc/users-guide-90/re-drbdconf for all the details. |
[DEFAULT] dsware_isthin = False |
(BoolOpt) The flag of thin storage allocation. |
[DEFAULT] dsware_manager = |
(StrOpt) Fusionstorage manager ip addr for cinder-volume. |
[DEFAULT] enable_unsupported_driver = False |
(BoolOpt) Set this to True when you want to allow an unsupported driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated, and it may be removed in the next release. |
[DEFAULT] fss_debug = False |
(BoolOpt) Enable HTTP debugging to FSS |
[DEFAULT] fss_pool = |
(IntOpt) FSS pool id in which FalconStor volumes are stored. |
[DEFAULT] fusionstorageagent = |
(StrOpt) Fusionstorage agent ip addr range. |
[DEFAULT] glance_catalog_info = image:glance:publicURL |
(StrOpt) Info to match when looking for glance in the service catalog. Format is: separated values of the form: <service_type>:<service_name>:<endpoint_type> - Only used if glance_api_servers are not provided. |
[DEFAULT] group_api_class = cinder.group.api.API |
(StrOpt) The full class name of the group API class |
[DEFAULT] hnas_chap_enabled = True |
(BoolOpt) Whether the chap authentication is enabled in the iSCSI target or not. |
[DEFAULT] hnas_cluster_admin_ip0 = None |
(StrOpt) The IP of the HNAS cluster admin. Required only for HNAS multi-cluster setups. |
[DEFAULT] hnas_mgmt_ip0 = None |
(IPOpt) Management IP address of HNAS. This can be any IP in the admin address on HNAS or the SMU IP. |
[DEFAULT] hnas_password = None |
(StrOpt) HNAS password. |
[DEFAULT] hnas_ssc_cmd = ssc |
(StrOpt) Command to communicate to HNAS. |
[DEFAULT] hnas_ssh_port = 22 |
(PortOpt) Port to be used for SSH authentication. |
[DEFAULT] hnas_ssh_private_key = None |
(StrOpt) Path to the SSH private key used to authenticate in HNAS SMU. |
[DEFAULT] hnas_svc0_hdp = None |
(StrOpt) Service 0 HDP |
[DEFAULT] hnas_svc0_iscsi_ip = None |
(IPOpt) Service 0 iSCSI IP |
[DEFAULT] hnas_svc0_volume_type = None |
(StrOpt) Service 0 volume type |
[DEFAULT] hnas_svc1_hdp = None |
(StrOpt) Service 1 HDP |
[DEFAULT] hnas_svc1_iscsi_ip = None |
(IPOpt) Service 1 iSCSI IP |
[DEFAULT] hnas_svc1_volume_type = None |
(StrOpt) Service 1 volume type |
[DEFAULT] hnas_svc2_hdp = None |
(StrOpt) Service 2 HDP |
[DEFAULT] hnas_svc2_iscsi_ip = None |
(IPOpt) Service 2 iSCSI IP |
[DEFAULT] hnas_svc2_volume_type = None |
(StrOpt) Service 2 volume type |
[DEFAULT] hnas_svc3_hdp = None |
(StrOpt) Service 3 HDP |
[DEFAULT] hnas_svc3_iscsi_ip = None |
(IPOpt) Service 3 iSCSI IP |
[DEFAULT] hnas_svc3_volume_type = None |
(StrOpt) Service 3 volume type |
[DEFAULT] hnas_username = None |
(StrOpt) HNAS username. |
[DEFAULT] kaminario_nodedup_substring = K2-nodedup |
(StrOpt) If volume-type name contains this substring, a nodedup volume will be created; otherwise a dedup volume will be created. |
[DEFAULT] lvm_suppress_fd_warnings = False |
(BoolOpt) Suppress leaked file descriptor warnings in LVM commands. |
[DEFAULT] message_ttl = 2592000 |
(IntOpt) message minimum life in seconds. |
[DEFAULT] metro_domain_name = None |
(StrOpt) The remote metro device domain name. |
[DEFAULT] metro_san_address = None |
(StrOpt) The remote metro device request url. |
[DEFAULT] metro_san_password = None |
(StrOpt) The remote metro device san password. |
[DEFAULT] metro_san_user = None |
(StrOpt) The remote metro device san user. |
[DEFAULT] metro_storage_pools = None |
(StrOpt) The remote metro device pool names. |
[DEFAULT] nas_host = |
(StrOpt) IP address or Hostname of NAS system. |
[DEFAULT] netapp_replication_aggregate_map = None |
(MultiOpt) Multi opt of dictionaries to represent the aggregate mapping between source and destination back ends when using whole back end replication. For every source aggregate associated with a cinder pool (NetApp FlexVol), you would need to specify the destination aggregate on the replication target device. A replication target device is configured with the configuration option replication_device. Specify this option as many times as you have replication devices. Each entry takes the standard dict config form: netapp_replication_aggregate_map = backend_id:<name_of_replication_device_section>,src_aggr_name1:dest_aggr_name1,src_aggr_name2:dest_aggr_name2,... |
[DEFAULT] netapp_snapmirror_quiesce_timeout = 3600 |
(IntOpt) The maximum time in seconds to wait for existing SnapMirror transfers to complete before aborting during a failover. |
[DEFAULT] nexenta_nbd_symlinks_dir = /dev/disk/by-path |
(StrOpt) NexentaEdge logical path of directory to store symbolic links to NBDs |
[DEFAULT] osapi_volume_use_ssl = False |
(BoolOpt) Wraps the socket in a SSL context if True is set. A certificate file and key file must be specified. |
[DEFAULT] pool_id_filter = |
(ListOpt) Pool id permit to use. |
[DEFAULT] pool_type = default |
(StrOpt) Pool type, like sata-2copy. |
[DEFAULT] proxy = storage.proxy.IBMStorageProxy |
(StrOpt) Proxy driver that connects to the IBM Storage Array |
[DEFAULT] quota_groups = 10 |
(IntOpt) Number of groups allowed per project |
[DEFAULT] scaleio_server_certificate_path = None |
(StrOpt) Server certificate path |
[DEFAULT] scaleio_verify_server_certificate = False |
(BoolOpt) verify server certificate |
[DEFAULT] scheduler_weight_handler = cinder.scheduler.weights.OrderedHostWeightHandler |
(StrOpt) Which handler to use for selecting the host/pool after weighing |
[DEFAULT] secondary_san_ip = |
(StrOpt) IP address of secondary DSM controller |
[DEFAULT] secondary_san_login = Admin |
(StrOpt) Secondary DSM user name |
[DEFAULT] secondary_san_password = |
(StrOpt) Secondary DSM user password |
[DEFAULT] secondary_sc_api_port = 3033 |
(PortOpt) Secondary Dell API port |
[DEFAULT] sio_max_over_subscription_ratio = 10.0 |
(FloatOpt) max_over_subscription_ratio setting for the ScaleIO driver. This replaces the general max_over_subscription_ratio, which has no effect in this driver. Maximum value allowed for ScaleIO is 10.0. |
[DEFAULT] storage_protocol = iscsi |
(StrOpt) Protocol for transferring data between host and storage back-end. |
[DEFAULT] synology_admin_port = 5000 |
(PortOpt) Management port for Synology storage. |
[DEFAULT] synology_device_id = None |
(StrOpt) Device ID used to skip the one-time password check when logging in to Synology storage if OTP is enabled. |
[DEFAULT] synology_one_time_pass = None |
(StrOpt) One-time password of the administrator for logging in to Synology storage if OTP is enabled. |
[DEFAULT] synology_password = |
(StrOpt) Password of administrator for logging in Synology storage. |
[DEFAULT] synology_pool_name = |
(StrOpt) Volume on Synology storage to be used for creating lun. |
[DEFAULT] synology_ssl_verify = True |
(BoolOpt) Do certificate validation or not if $driver_use_ssl is True |
[DEFAULT] synology_username = admin |
(StrOpt) Administrator of Synology storage. |
[DEFAULT] violin_dedup_capable_pools = |
(ListOpt) Storage pools capable of dedup and other luns. (Comma separated list) |
[DEFAULT] violin_dedup_only_pools = |
(ListOpt) Storage pools to be used to setup dedup luns only. (Comma separated list) |
[DEFAULT] violin_iscsi_target_ips = |
(ListOpt) Target iSCSI addresses to use. (Comma separated list) |
[DEFAULT] violin_pool_allocation_method = random |
(StrOpt) Method of choosing a storage pool for a lun. |
[DEFAULT] vzstorage_default_volume_format = raw |
(StrOpt) Default format that will be used when creating volumes if no volume format is specified. |
[DEFAULT] zadara_default_snap_policy = False |
(BoolOpt) VPSA - Attach snapshot policy for volumes |
[DEFAULT] zadara_password = None |
(StrOpt) VPSA - Password |
[DEFAULT] zadara_use_iser = True |
(BoolOpt) VPSA - Use ISER instead of iSCSI |
[DEFAULT] zadara_user = None |
(StrOpt) VPSA - Username |
[DEFAULT] zadara_vol_encrypt = False |
(BoolOpt) VPSA - Default encryption policy for volumes |
[DEFAULT] zadara_vol_name_template = OS_%s |
(StrOpt) VPSA - Default template for VPSA volume names |
[DEFAULT] zadara_vpsa_host = None |
(StrOpt) VPSA - Management Host name or IP address |
[DEFAULT] zadara_vpsa_poolname = None |
(StrOpt) VPSA - Storage Pool assigned for volumes |
[DEFAULT] zadara_vpsa_port = None |
(PortOpt) VPSA - Port number |
[DEFAULT] zadara_vpsa_use_ssl = False |
(BoolOpt) VPSA - Use SSL connection |
[DEFAULT] zteAheadReadSize = 8 |
(IntOpt) Cache readahead size. |
[DEFAULT] zteCachePolicy = 1 |
(IntOpt) Cache policy. 0, Write Back; 1, Write Through. |
[DEFAULT] zteChunkSize = 4 |
(IntOpt) Virtual block size of pool. Unit : KB. Valid value : 4, 8, 16, 32, 64, 128, 256, 512. |
[DEFAULT] zteControllerIP0 = None |
(IPOpt) Main controller IP. |
[DEFAULT] zteControllerIP1 = None |
(IPOpt) Slave controller IP. |
[DEFAULT] zteLocalIP = None |
(IPOpt) Local IP. |
[DEFAULT] ztePoolVoAllocatedPolicy = 0 |
(IntOpt) Pool volume allocated policy. 0, Auto; 1, High Performance Tier First; 2, Performance Tier First; 3, Capacity Tier First. |
[DEFAULT] ztePoolVolAlarmStopAllocatedFlag = 0 |
(IntOpt) Pool volume alarm stop allocated flag. |
[DEFAULT] ztePoolVolAlarmThreshold = 0 |
(IntOpt) Pool volume alarm threshold. [0, 100] |
[DEFAULT] ztePoolVolInitAllocatedCapacity = 0 |
(IntOpt) Pool volume initial allocated capacity. Unit: KB. |
[DEFAULT] ztePoolVolIsThin = False |
(IntOpt) Whether it is a thin volume. |
[DEFAULT] ztePoolVolMovePolicy = 0 |
(IntOpt) Pool volume move policy. 0, Auto; 1, Highest Available; 2, Lowest Available; 3, No Relocation. |
[DEFAULT] zteSSDCacheSwitch = 1 |
(IntOpt) SSD cache switch. 0, OFF; 1, ON. |
[DEFAULT] zteStoragePool = |
(ListOpt) Pool name list. |
[DEFAULT] zteUserName = None |
(StrOpt) User name. |
[DEFAULT] zteUserPassword = None |
(StrOpt) User password. |
[barbican] auth_endpoint = http://localhost:5000/v3 |
(StrOpt) Use this endpoint to connect to Keystone |
[barbican] barbican_api_version = None |
(StrOpt) Version of the Barbican API, for example: “v1” |
[barbican] barbican_endpoint = None |
(StrOpt) Use this endpoint to connect to Barbican, for example: “http://localhost:9311/“ |
[barbican] number_of_retries = 60 |
(IntOpt) Number of times to retry poll for key creation completion |
[barbican] retry_delay = 1 |
(IntOpt) Number of seconds to wait before retrying poll for key creation completion |
[fc-zone-manager] enable_unsupported_driver = False |
(BoolOpt) Set this to True when you want to allow an unsupported zone manager driver to start. Drivers that haven’t maintained a working CI system and testing are marked as unsupported until CI is working again. This also marks a driver as deprecated, and it may be removed in the next release. |
[key_manager] api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager |
(StrOpt) The full class name of the key manager API class |
[key_manager] fixed_key = None |
(StrOpt) Fixed key returned by key manager, specified in hex |
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] backup_service_inithost_offload | False | True |
[DEFAULT] datera_num_replicas | 1 | 3 |
[DEFAULT] default_timeout | 525600 | 31536000 |
[DEFAULT] glance_api_servers | $glance_host:$glance_port | None |
[DEFAULT] io_port_list | * | None |
[DEFAULT] iscsi_initiators | None | |
[DEFAULT] naviseccli_path | None | |
[DEFAULT] nexenta_chunksize | 16384 | 32768 |
[DEFAULT] query_volume_filters | name, status, metadata, availability_zone, bootable | name, status, metadata, availability_zone, bootable, group_id |
[DEFAULT] vmware_task_poll_interval | 0.5 | 2.0 |
Deprecated option | New Option |
---|---|
[DEFAULT] enable_v1_api | None |
[DEFAULT] enable_v2_api | None |
[DEFAULT] eqlx_chap_login | [DEFAULT] chap_username |
[DEFAULT] eqlx_chap_password | [DEFAULT] chap_password |
[DEFAULT] eqlx_use_chap | [DEFAULT] use_chap_auth |
[DEFAULT] host | [DEFAULT] backend_host |
[DEFAULT] nas_ip | [DEFAULT] nas_host |
[DEFAULT] osapi_max_request_body_size | [oslo_middleware] max_request_body_size |
[DEFAULT] use_syslog | None |
[hyperv] force_volumeutils_v1 | None |
Note
The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.
The Block Storage service works with many different storage drivers that you can configure by using these instructions.
The Clustering API can be configured by changing the following options:
Configuration option = Default value | Description |
---|---|
[authentication] | |
auth_url = |
(String) Complete public identity V3 API endpoint. |
service_password = |
(String) Password specified for the Senlin service user. |
service_project_domain = Default |
(String) Name of the domain for the service project. |
service_project_name = service |
(String) Name of the service project. |
service_user_domain = Default |
(String) Name of the domain for the service user. |
service_username = senlin |
(String) Senlin service user name |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[oslo_policy] | |
policy_default_rule = default |
(String) Default rule. Enforced when a requested rule is not found. |
policy_dirs = ['policy.d'] |
(Multi-valued) Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. |
policy_file = policy.json |
(String) The JSON file that defines policies. |
[revision] | |
senlin_api_revision = 1.0 |
(String) Senlin API revision. |
senlin_engine_revision = 1.0 |
(String) Senlin engine revision. |
[senlin_api] | |
api_paste_config = api-paste.ini |
(String) The API paste config file to use. |
backlog = 4096 |
(Integer) Number of backlog requests to configure the socket with. |
bind_host = 0.0.0.0 |
(IP) Address to bind the server. Useful when selecting a particular network interface. |
bind_port = 8778 |
(Port number) The port on which the server will listen. |
cert_file = None |
(String) Location of the SSL certificate file to use for SSL mode. |
client_socket_timeout = 900 |
(Integer) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ indicates waiting forever. |
key_file = None |
(String) Location of the SSL key file to use for enabling SSL mode. |
max_header_line = 16384 |
(Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). |
max_json_body_size = 1048576 |
(Integer) Maximum raw byte size of JSON request body. Should be larger than max_template_size. |
tcp_keepidle = 600 |
(Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. |
workers = 0 |
(Integer) Number of workers for Senlin service. |
wsgi_keep_alive = True |
(Boolean) If false, closes the client socket explicitly. |
These options can also be set in the senlin.conf file.
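For example, a senlin.conf fragment overriding a few of the [senlin_api] defaults listed above might look like the following; the values are illustrative, not recommendations:

```ini
[senlin_api]
# Bind to all interfaces on the default Senlin API port.
bind_host = 0.0.0.0
bind_port = 8778
# Run more than one API worker on a multi-core host.
workers = 4
# Raise the header limit for large Keystone v3 tokens.
max_header_line = 32768
```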
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
batch_interval = 3 |
(Integer) Seconds to pause between scheduling two consecutive batches of node actions. |
cloud_backend = openstack |
(String) Default cloud backend to use. |
default_action_timeout = 3600 |
(Integer) Timeout in seconds for actions. |
default_region_name = None |
(String) Default region name used to get services endpoints. |
engine_life_check_timeout = 2 |
(Integer) RPC timeout for the engine liveness check that is used for cluster locking. |
environment_dir = /etc/senlin/environments |
(String) The directory to search for environment files. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
fatal_deprecations = False |
(Boolean) Enables or disables fatal status of deprecations. |
host = localhost |
(String) Name of the engine node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. |
lock_retry_interval = 10 |
(Integer) Number of seconds between lock retries. |
lock_retry_times = 3 |
(Integer) Number of times trying to grab a lock. |
max_actions_per_batch = 0 |
(Integer) Maximum number of node actions that each engine worker can schedule consecutively per batch. 0 means no limit. |
max_clusters_per_project = 100 |
(Integer) Maximum number of clusters any one project may have active at one time. |
max_nodes_per_cluster = 1000 |
(Integer) Maximum nodes allowed per top-level cluster. |
max_response_size = 524288 |
(Integer) Maximum raw byte size of data from web response. |
name_unique = False |
(Boolean) Flag to indicate whether to enforce unique names for Senlin objects belonging to the same project. |
num_engine_workers = 1 |
(Integer) Number of senlin-engine processes to fork and run. |
periodic_fuzzy_delay = 10 |
(Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) |
periodic_interval = 60 |
(Integer) Seconds between running periodic tasks. |
periodic_interval_max = 120 |
(Integer) Seconds between periodic tasks to be called |
publish_errors = False |
(Boolean) Enables or disables publication of error events. |
use_router_proxy = True |
(Boolean) Use ROUTER remote proxy. |
[health_manager] | |
nova_control_exchange = nova |
(String) Exchange name for nova notifications |
[oslo_versionedobjects] | |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal |
[webhook] | |
host = None |
(String) Address for invoking webhooks. It is useful for cases where proxies are used for triggering webhooks. Defaults to the hostname of the API node. |
port = 8778 |
(Port number) The port on which a webhook will be invoked. Useful when service is running behind a proxy. |
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
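Several of the [matchmaker_redis] options above are marked DEPRECATED and replaced by [DEFAULT]/transport_url, which encodes the messaging driver, credentials, and host in a single URL. A sketch of that replacement setting, with placeholder credentials and host name:

```ini
[DEFAULT]
# Replaces the deprecated per-option messaging settings; the URL scheme
# selects the driver. User, password, and host here are placeholders.
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```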
Configuration option = Default value | Description |
---|---|
[zaqar] | |
auth_section = None |
(Unknown) Config Section from which to load plugin specific options |
auth_type = None |
(Unknown) Authentication type to load |
cafile = None |
(String) PEM encoded Certificate Authority to use when verifying HTTPs connections. |
certfile = None |
(String) PEM encoded client certificate cert file |
insecure = False |
(Boolean) Verify HTTPS connections. |
keyfile = None |
(String) PEM encoded client certificate key file |
timeout = None |
(Integer) Timeout value for http requests |
Option = default value | (Type) Help string |
---|---|
[DEFAULT] batch_interval = 3 |
(IntOpt) Seconds to pause between scheduling two consecutive batches of node actions. |
[DEFAULT] periodic_fuzzy_delay = 10 |
(IntOpt) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) |
[health_manager] nova_control_exchange = nova |
(StrOpt) Exchange name for nova notifications |
[oslo_versionedobjects] fatal_exception_format_errors = False |
(BoolOpt) Make exception message format errors fatal |
[senlin_api] api_paste_config = api-paste.ini |
(StrOpt) The API paste config file to use. |
[senlin_api] client_socket_timeout = 900 |
(IntOpt) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ indicates waiting forever. |
[senlin_api] max_json_body_size = 1048576 |
(IntOpt) Maximum raw byte size of JSON request body. Should be larger than max_template_size. |
[senlin_api] wsgi_keep_alive = True |
(BoolOpt) If false, closes the client socket explicitly. |
[zaqar] auth_section = None |
(Opt) Config Section from which to load plugin specific options |
[zaqar] auth_type = None |
(Opt) Authentication type to load |
[zaqar] cafile = None |
(StrOpt) PEM encoded Certificate Authority to use when verifying HTTPs connections. |
[zaqar] certfile = None |
(StrOpt) PEM encoded client certificate cert file |
[zaqar] insecure = False |
(BoolOpt) Verify HTTPS connections. |
[zaqar] keyfile = None |
(StrOpt) PEM encoded client certificate key file |
[zaqar] timeout = None |
(IntOpt) Timeout value for http requests |
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] max_actions_per_batch | 10 | 0 |
[DEFAULT] periodic_interval_max | 60 | 120 |
[webhook] host | localhost | None |
Deprecated option | New Option |
---|---|
[DEFAULT] use_syslog | None |
The Clustering service implements clustering services and libraries for managing groups of homogeneous objects exposed by other OpenStack services. The configuration file for this service is /etc/senlin/senlin.conf.
Note
The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.
The Compute service is a cloud computing fabric controller, which is the main part of an Infrastructure as a Service (IaaS) system. You can use OpenStack Compute to host and manage cloud computing systems. This section describes the Compute service configuration options.
To configure your Compute installation, you must define configuration options in these files:

* nova.conf contains most of the Compute configuration options and resides in the /etc/nova directory.
* api-paste.ini defines Compute limits and resides in the /etc/nova directory.

For a complete list of all available configuration options for each OpenStack Compute service, run bin/nova-<servicename> --help.
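Like the other configuration files described in this guide, nova.conf uses the INI format. A minimal sketch, with placeholder values drawn from the option tables that follow:

```ini
[DEFAULT]
# Authentication strategy: keystone or noauth2 (string value)
auth_strategy = keystone

# IP address used to connect to the management network
# (placeholder value)
my_ip = 10.0.0.1

# Top-level directory for maintaining Nova's state
# (placeholder path)
state_path = /var/lib/nova
```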
Configuration option = Default value | Description |
---|---|
[api_database] | |
connection = None |
(String) No help text available for this option. |
connection_debug = 0 |
(Integer) No help text available for this option. |
connection_trace = False |
(Boolean) No help text available for this option. |
idle_timeout = 3600 |
(Integer) No help text available for this option. |
max_overflow = None |
(Integer) No help text available for this option. |
max_pool_size = None |
(Integer) No help text available for this option. |
max_retries = 10 |
(Integer) No help text available for this option. |
mysql_sql_mode = TRADITIONAL |
(String) No help text available for this option. |
pool_timeout = None |
(Integer) No help text available for this option. |
retry_interval = 10 |
(Integer) No help text available for this option. |
slave_connection = None |
(String) No help text available for this option. |
sqlite_synchronous = True |
(Boolean) No help text available for this option. |
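The [api_database] options above can be combined in nova.conf as follows; the connection string is a placeholder for your own database credentials:

```ini
[api_database]
# SQLAlchemy connection string; credentials are placeholders
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
# Timeout before idle SQL connections are reaped (seconds)
idle_timeout = 3600
# Maximum number of database connection retries during startup
max_retries = 10
retry_interval = 10
```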
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
auth_strategy = keystone |
(String) This determines the strategy to use for authentication: keystone or noauth2. ‘noauth2’ is designed for testing only, as it does no actual credential checking. ‘noauth2’ provides administrative credentials only if ‘admin’ is specified as the username. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
default_availability_zone = nova |
(String) Default compute node availability_zone. This option determines the availability zone to be used when it is not specified in the VM creation request. If this option is not set, the default availability zone ‘nova’ is used. Possible values:
|
default_schedule_zone = None |
(String) Availability zone to use when user doesn’t specify one. This option is used by the scheduler to determine which availability zone to place a new VM instance into if the user did not specify one at the time of VM boot request. Possible values:
|
internal_service_availability_zone = internal |
(String) This option specifies the name of the availability zone for the internal services. Services like nova-scheduler, nova-network, nova-conductor are internal services. These services will appear in their own internal availability_zone. Possible values:
|
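Taken together, the availability-zone options might be set like this; the zone names shown are illustrative:

```ini
[DEFAULT]
# Zone used when the VM creation request does not specify one
default_availability_zone = nova
# Zone the scheduler uses when the user does not specify one
default_schedule_zone = nova
# Zone in which internal services (nova-scheduler,
# nova-conductor, ...) appear
internal_service_availability_zone = internal
```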
Configuration option = Default value | Description |
---|---|
[barbican] | |
auth_endpoint = http://localhost:5000/v3 |
(String) Use this endpoint to connect to Keystone |
barbican_api_version = None |
(String) Version of the Barbican API, for example: “v1” |
barbican_endpoint = None |
(String) Use this endpoint to connect to Barbican, for example: “http://localhost:9311/“ |
cafile = None |
(String) PEM encoded Certificate Authority to use when verifying HTTPs connections. |
catalog_info = key-manager:barbican:public |
(String) DEPRECATED: Info to match when looking for barbican in the service catalog. Format is: separated values of the form <service_type>:<service_name>:<endpoint_type>. This option has been moved to the Castellan library. |
certfile = None |
(String) PEM encoded client certificate cert file |
endpoint_template = None |
(String) DEPRECATED: Override service catalog lookup with template for barbican endpoint e.g. http://localhost:9311/v1/%(project_id)s. This option has been moved to the Castellan library. |
insecure = False |
(Boolean) Verify HTTPS connections. |
keyfile = None |
(String) PEM encoded client certificate key file |
number_of_retries = 60 |
(Integer) Number of times to retry poll for key creation completion |
os_region_name = None |
(String) DEPRECATED: Region name of this node. This option has been moved to the Castellan library. |
retry_delay = 1 |
(Integer) Number of seconds to wait before retrying poll for key creation completion |
timeout = None |
(Integer) Timeout value for http requests |
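A [barbican] section wiring Compute to a key manager could look like the following; the endpoint and retry values are placeholders, and note that several related options in this table are deprecated in favor of the Castellan library:

```ini
[barbican]
# Keystone endpoint used for authentication (placeholder URL)
auth_endpoint = http://localhost:5000/v3
barbican_api_version = v1
# Retry polling for key creation up to 60 times, 1 second apart
number_of_retries = 60
retry_delay = 1
```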
Configuration option = Default value | Description |
---|---|
[cells] | |
call_timeout = 60 |
(Integer) Call timeout Cell messaging module waits for response(s) to be put into the eventlet queue. This option defines the seconds waited for response from a call to a cell. Possible values:
|
capabilities = hypervisor=xenserver;kvm, os=linux;windows |
(List) Cell capabilities List of arbitrary key=value pairs defining capabilities of the current cell to be sent to the parent cells. These capabilities are intended to be used in cells scheduler filters/weighers. Possible values:
|
cell_type = compute |
(String) Type of cell When cells feature is enabled the hosts in the OpenStack Compute cloud are partitioned into groups. Cells are configured as a tree. The top-level cell’s cell_type must be set to Related options:
|
cells_config = None |
(String) Optional cells configuration Configuration file from which to read cells configuration. If given, overrides reading cells from the database. Cells store all inter-cell communication data, including user names and passwords, in the database. Because the cells data is not updated very frequently, use this option to specify a JSON file to store cells data. With this configuration, the database is no longer consulted when reloading the cells data. The file must have columns present in the Cell model (excluding common database fields and the id column). You must specify the queue connection information through a transport_url field, instead of username, password, and so on. The transport_url has the following form: rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST Possible values: The scheme can be either qpid or rabbit, the following sample shows this optional configuration:
|
db_check_interval = 60 |
(Integer) DB check interval Cell state manager updates cell status for all cells from the DB only after this particular interval time is passed. Otherwise cached status are used. If this value is 0 or negative all cell status are updated from the DB whenever a state is needed. Possible values:
|
driver = nova.cells.rpc_driver.CellsRPCDriver |
(String) DEPRECATED: Cells communication driver Driver for cell<->cell communication via RPC. This is used to set up the RPC consumers as well as to send a message to another cell. ‘nova.cells.rpc_driver.CellsRPCDriver’ starts up 2 separate servers for handling inter-cell communication via RPC. The only available driver is the RPC driver. |
enable = False |
(Boolean) Enable cell functionality When this functionality is enabled, it lets you scale an OpenStack Compute cloud in a more distributed fashion without having to use complicated technologies like database and message queue clustering. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service, but no nova-compute services. Each child cell should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its own database server and message queue broker. Related options:
|
instance_update_num_instances = 1 |
(Integer) Instance update num instances On every run of the periodic task, nova cells manager will attempt to sync instance_updated_at_threshold number of instances. When the manager gets the list of instances, it shuffles them so that multiple nova-cells services do not attempt to sync the same instances in lockstep. Possible values:
Related options:
|
instance_update_sync_database_limit = 100 |
(Integer) Instance update sync database limit Number of instances to pull from the database at one time for a sync. If there are more instances to update the results will be paged through. Possible values:
|
instance_updated_at_threshold = 3600 |
(Integer) Instance updated at threshold Number of seconds after an instance was updated or deleted to continue to update cells. This option lets the cells manager attempt to sync only instances that have been updated recently; for example, a threshold of 3600 means to update only instances that have been modified in the last hour. Possible values:
Related options:
|
max_hop_count = 10 |
(Integer) Maximum hop count When processing a targeted message, if the local cell is not the target, a route is defined between neighbouring cells. And the message is processed across the whole routing path. This option defines the maximum hop counts until reaching the target. Possible values:
|
mute_child_interval = 300 |
(Integer) Mute child interval Number of seconds after which a child cell that has not sent a capability and capacity update is treated as a mute cell. Mute child cells are then weighted so that it is strongly recommended they be skipped. Possible values:
|
mute_weight_multiplier = -10000.0 |
(Floating point) Mute weight multiplier Multiplier used to weigh mute children. Mute children cells are recommended to be skipped so their weight is multiplied by this negative value. Possible values:
|
name = nova |
(String) Name of the current cell This value must be unique for each cell. Name of a cell is used as its id, leaving this option unset or setting the same name for two or more cells may cause unexpected behaviour. Related options:
|
offset_weight_multiplier = 1.0 |
(Floating point) Offset weight multiplier Multiplier used to weigh offset weigher. Cells with higher weight_offsets in the DB will be preferred. The weight_offset is a property of a cell stored in the database. It can be used by a deployer to have scheduling decisions favor or disfavor cells based on the setting. Possible values:
|
reserve_percent = 10.0 |
(Floating point) Reserve percentage Percentage of cell capacity to hold in reserve, so the minimum amount of free resource is considered to be: min_free = total * (reserve_percent / 100.0) This option affects both memory and disk utilization. The primary purpose of this reserve is to ensure some space is available for users who want to resize their instance to be larger. Note that currently, once the capacity expands into this reserve space, this option is ignored. |
rpc_driver_queue_base = cells.intercell |
(String) RPC driver queue base When sending a message to another cell by JSON-ifying the message and making an RPC cast to ‘process_message’, a base queue is used. This option defines the base queue name to be used when communicating between cells. Various topics by message type will be appended to this. Possible values:
|
topic = cells |
(String) Topic This is the message queue topic that cells nodes listen on. It is used when the cells service is started up to configure the queue, and whenever an RPC call to the scheduler is made. Possible values:
|
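A child compute cell, for example, might carry a [cells] section like the following; the cell name and capabilities are illustrative:

```ini
[cells]
enable = True
# Name must be unique per cell; it is used as the cell's id
name = cell1
cell_type = compute
capabilities = hypervisor=kvm, os=linux
# Seconds without capability/capacity updates before a child
# cell is treated as mute
mute_child_interval = 300
```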
Configuration option = Default value | Description |
---|---|
[cloudpipe] | |
boot_script_template = $pybasedir/nova/cloudpipe/bootscript.template |
(String) Template for cloudpipe instance boot script. Possible values:
Related options: The following options are required to configure cloudpipe-managed OpenVPN server.
|
dmz_mask = 255.255.255.0 |
(IP) Netmask to push into OpenVPN config. Possible values:
Related options:
|
dmz_net = 10.0.0.0 |
(IP) Network to push into OpenVPN config. Note: Above mentioned OpenVPN config can be found at /etc/openvpn/server.conf. Possible values:
Related options:
|
vpn_flavor = m1.tiny |
(String) Flavor for VPN instances. Possible values:
|
vpn_image_id = 0 |
(String) Image ID used when starting up a cloudpipe VPN client. An empty instance is created and configured with OpenVPN using boot_script_template. This instance would be snapshotted and stored in glance. ID of the stored image is used in ‘vpn_image_id’ to create cloudpipe VPN client. Possible values:
|
vpn_key_suffix = -vpn |
(String) Suffix to add to project name for VPN key and secgroups Possible values:
|
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
bindir = /usr/local/bin |
(String) The directory where the Nova binaries are installed. This option is only relevant if the networking capabilities from Nova are used (see services below). Nova’s networking capabilities are targeted to be fully replaced by Neutron in the future. It is very unlikely that you need to change this option from its default value. Possible values:
|
compute_topic = compute |
(String) This is the message queue topic that the compute service ‘listens’ on. It is used when the compute service is started up to configure the queue, and whenever an RPC call to the compute service is made.
|
console_topic = console |
(String) Represents the message queue topic name used by nova-console service when communicating via the AMQP server. The Nova API uses a message queue to communicate with nova-console to retrieve a console URL for that host. Possible values
|
consoleauth_topic = consoleauth |
(String) This option allows you to change the message topic used by nova-consoleauth service when communicating via the AMQP server. Nova Console Authentication server authenticates nova consoles. Users can then access their instances through VNC clients. The Nova API service uses a message queue to communicate with nova-consoleauth to get a VNC console. Possible Values:
|
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
fatal_exception_format_errors = False |
(Boolean) DEPRECATED: When set to true, this option enables validation of exception message format. This option is used to detect errors in NovaException class when it formats error messages. If True, raise an exception; if False, use the unformatted message. This is only used for internal testing. |
host = localhost |
(String) Hostname, FQDN or IP address of this host. Must be valid within AMQP key. Possible values:
|
my_ip = 10.0.0.1 |
(String) The IP address which the host is using to connect to the management network. Possible values:
Related options:
|
notify_api_faults = False |
(Boolean) If enabled, send api.fault notifications on caught exceptions in the API service. |
notify_on_state_change = None |
(String) If set, send compute.instance.update notifications on instance state changes. Please refer to https://wiki.openstack.org/wiki/SystemUsageData for additional information on notifications. Possible values:
|
pybasedir = /usr/lib/python/site-packages/nova |
(String) The directory where the Nova python modules are installed. This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value. Possible values:
Related options:
|
report_interval = 10 |
(Integer) Seconds between nodes reporting state to datastore |
rootwrap_config = /etc/nova/rootwrap.conf |
(String) Path to the rootwrap configuration file. Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry. |
service_down_time = 60 |
(Integer) Maximum time since last check-in for up service |
state_path = $pybasedir |
(String) The top-level directory for maintaining Nova’s state. This directory is used to store Nova’s internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option Possible values:
|
tempdir = None |
(String) Explicitly specify the temporary working directory. |
use_rootwrap_daemon = False |
(Boolean) Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes. |
[workarounds] | |
disable_libvirt_livesnapshot = True |
(Boolean) Disable live snapshots when using the libvirt driver. Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem. When using libvirt 1.2.2 live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshot, in favor of cold snapshot, while this is resolved. Cold snapshot causes an instance outage while the guest is going through the snapshotting process. For more information, refer to the bug report: Possible values:
|
disable_rootwrap = False |
(Boolean) Use sudo instead of rootwrap. Allow fallback to sudo for performance reasons. For more information, refer to the bug report: Possible values:
Interdependencies to other options:
|
handle_virt_lifecycle_events = True |
(Boolean) Enable handling of events emitted from compute drivers. Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored. This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shutdown automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature. Care should be taken when this feature is disabled and ‘sync_power_state_interval’ is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. For more information, refer to the bug report: Interdependencies to other options:
|
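The [workarounds] options are typically only toggled while a specific bug is being worked around, for example:

```ini
[workarounds]
# Fall back to cold snapshots while the libvirt live-snapshot
# bug is unresolved
disable_libvirt_livesnapshot = True
# Keep rootwrap enabled; set to True only when diagnosing
# rootwrap performance issues
disable_rootwrap = False
handle_virt_lifecycle_events = True
```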
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
compute_available_monitors = None |
(Multi-valued) DEPRECATED: Monitor classes available to the compute which may be specified more than once. This option is DEPRECATED and no longer used. Use setuptools entry points to list available monitor plugins. stevedore and setuptools entry points now allow a set of plugins to be specified without this config option. |
compute_driver = None |
(String) Defines which driver to use for controlling virtualization. Possible values:
|
compute_manager = nova.compute.manager.ComputeManager |
(String) DEPRECATED: Full class name for the Manager for compute |
compute_monitors = |
(List) A list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the “cpu.” namespace is assumed for backwards-compatibility. Possible values:
|
compute_stats_class = nova.compute.stats.Stats |
(String) DEPRECATED: Abstracts out managing compute host stats to pluggable class. This class manages and updates stats for the local compute host after an instance is changed. These configurable compute stats may be useful for a particular scheduler implementation. Possible values
|
console_host = socket.gethostname() |
(String) Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host. Possible values:
|
console_manager = nova.console.manager.ConsoleProxyManager |
(String) DEPRECATED: Full class name for the Manager for console proxy |
default_flavor = m1.small |
(String) DEPRECATED: Default flavor to use for the EC2 API only. The Nova API does not support a default flavor. The EC2 API is deprecated |
default_notification_level = INFO |
(String) Default notification level for outgoing notifications. |
enable_instance_password = True |
(Boolean) Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, then the password returned will not be correct, so if your hypervisor does not support password injection, set this to False. |
heal_instance_info_cache_interval = 60 |
(Integer) Number of seconds between instance network information cache updates |
image_cache_manager_interval = 2400 |
(Integer) Number of seconds to wait between runs of the image cache manager. Set to -1 to disable. Setting this to 0 will run at the default rate. |
image_cache_subdirectory_name = _base |
(String) Where cached images are stored under $instances_path. This is NOT the full path - just a folder name. For per-compute-host cached images, set to _base_$my_ip |
instance_build_timeout = 0 |
(Integer) Maximum time in seconds that an instance can take to build. If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period. Possible values:
|
instance_delete_interval = 300 |
(Integer) Interval in seconds for retrying failed instance file deletes. Set to -1 to disable. Setting this to 0 will run at the default rate. |
instance_usage_audit = False |
(Boolean) This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service. |
instance_usage_audit_period = month |
(String) Time period to generate instance usages for. It is possible to define optional offset to given period by appending @ character followed by a number defining offset. Possible values: * period, example: |
instances_path = $state_path/instances |
(String) Specifies where instances are stored on the hypervisor’s disk. It can point to locally attached storage or a directory on NFS. Possible values:
|
max_concurrent_builds = 10 |
(Integer) Limits the maximum number of instance builds to run concurrently by nova-compute. Compute service can attempt to build an infinite number of instances, if asked to do so. This limit is enforced to avoid building unlimited instance concurrently on a compute node. This value can be set per compute node. Possible Values:
|
maximum_instance_delete_attempts = 5 |
(Integer) The number of times to attempt to reap an instance’s files. |
reboot_timeout = 0 |
(Integer) Time interval after which an instance is hard rebooted automatically. When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds. Possible values:
|
reclaim_instance_interval = 0 |
(Integer) Interval in seconds for reclaiming deleted instances. It takes effect only when value is greater than 0. |
rescue_timeout = 0 |
(Integer) Interval to wait before un-rescuing an instance stuck in RESCUE. Possible values:
|
resize_confirm_window = 0 |
(Integer) Automatically confirm resizes after N seconds. Resize functionality will save the existing server before resizing. After the resize completes, user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirm resize removes the original server and changes server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server is in resized state longer than that time. Possible values:
|
resume_guests_state_on_host_boot = False |
(Boolean) This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts. |
running_deleted_instance_action = reap |
(String) The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. The above option enables action to be taken when such instances are identified. Possible values:
Related options:
|
running_deleted_instance_poll_interval = 1800 |
(Integer) Time interval in seconds to wait between runs for the clean up action. If set to 0, above check will be disabled. If “running_deleted_instance _action” is set to “log” or “reap”, a value greater than 0 must be set. Possible values:
Related options:
|
running_deleted_instance_timeout = 0 |
(Integer) Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup. Possible values:
Related options:
|
shelved_offload_time = 0 |
(Integer) Time in seconds before a shelved instance is eligible for removal from a host. Set to -1 to never offload; 0 offloads immediately when shelved |
shelved_poll_interval = 3600 |
(Integer) Interval in seconds for polling shelved instances to offload. Set to -1 to disable. Setting this to 0 will run at the default rate. |
shutdown_timeout = 60 |
(Integer) Total time to wait in seconds for an instance to perform a clean shutdown. It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue, shelve, and rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly. Possible values:
|
sync_power_state_interval = 600 |
(Integer) Interval to sync power states between the database and the hypervisor. Set to -1 to disable. Setting this to 0 will run at the default rate. |
sync_power_state_pool_size = 1000 |
(Integer) Number of greenthreads available for use to sync power states. This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic. Possible values:
|
update_resources_interval = 0 |
(Integer) Interval in seconds for updating compute resources. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds. |
vif_plugging_is_fatal = True |
(Boolean) Determine if instance should boot or fail on VIF plugging timeout. Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval. This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready. Possible values:
|
vif_plugging_timeout = 300 |
(Integer) Timeout for Neutron VIF plugging event message arrival. Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see ‘vif_plugging_is_fatal’). Interdependencies to other options:
|
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
migrate_max_retries = -1 |
(Integer) Number of times to retry live-migration before failing. Possible values:
|
[conductor] | |
manager = nova.conductor.manager.ConductorManager |
(String) DEPRECATED: Full class name for the Manager for conductor. Removal in 14.0 |
topic = conductor |
(String) Topic exchange name on which conductor nodes listen. |
use_local = False |
(Boolean) DEPRECATED: Perform nova-conductor operations locally. This legacy mode was introduced to bridge a gap during the transition to the conductor service. It no longer represents a reasonable alternative for deployers. Removal may be as early as 14.0. |
workers = None |
(Integer) Number of workers for OpenStack Conductor service. The default will be the number of CPUs available. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
config_drive_format = iso9660 |
(String) Configuration drive format Configuration drive format that will contain metadata attached to the instance when it boots. Possible values:
Related options:
|
config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 |
(String) When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don’t appear in this option. As of the Liberty release, the available versions are:
The option is in the format of a single string, with each version separated by a space. Possible values:
|
force_config_drive = False |
(Boolean) Force injection to take place on a config drive When this option is set to true configuration drive functionality will be forced enabled by default, otherwise user can still enable configuration drives via the REST API or image metadata properties. Possible values:
Related options:
|
mkisofs_cmd = genisoimage |
(String) Name or path of the tool used for ISO image creation Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value. To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path of an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path of a qemu-img command installation. Possible values:
Related options:
|
[hyperv] | |
config_drive_cdrom = False |
(Boolean) Configuration drive cdrom OpenStack can be configured to write instance metadata to a configuration drive, which is then attached to the instance before it boots. The configuration drive can be attached as a disk drive (default) or as a CD drive. Possible values:
Related options:
|
config_drive_inject_password = False |
(Boolean) Configuration drive inject password Enables setting the admin password in the configuration drive image. Related options:
|
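Combining the options above, a Hyper-V deployment that forces config drives and attaches them as CD drives might use a configuration like this; the mkisofs.exe path is a placeholder:

```ini
[DEFAULT]
# Always attach a config drive, regardless of API/image settings
force_config_drive = True
config_drive_format = iso9660
# Full path to mkisofs.exe is required on Hyper-V
# (placeholder path)
mkisofs_cmd = C:\Tools\mkisofs.exe

[hyperv]
# Attach the config drive as a CD drive instead of a disk drive
config_drive_cdrom = True
config_drive_inject_password = False
```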
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
console_allowed_origins = |
(List) Adds list of allowed origins to the console websocket proxy to allow connections from other origin hostnames. Websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies any values, other than the host, that are allowed in the origin header. Possible values
|
console_public_hostname = localhost |
(String) Publicly visible name for this console host. Possible values
|
console_token_ttl = 600 |
(Integer) This option indicates the lifetime of a console auth token. A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted. |
consoleauth_manager = nova.consoleauth.manager.ConsoleAuthManager |
(String) DEPRECATED: Manager for console auth |
[mks] | |
enabled = False |
(Boolean) Enables graphical console access for virtual machines. |
mksproxy_base_url = http://127.0.0.1:6090/ |
(String) Location of MKS web console proxy The URL in the response points to a WebMKS proxy which starts proxying between client and corresponding vCenter server where instance runs. In order to use the web based console access, WebMKS proxy should be installed and configured. Possible values:
|
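To enable WebMKS console access, for example (the proxy URL is a placeholder that must point to your installed WebMKS proxy):

```ini
[mks]
enabled = True
mksproxy_base_url = http://127.0.0.1:6090/
```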
Configuration option = Default value | Description |
---|---|
[crypto] | |
ca_file = cacert.pem |
(String) Filename of root CA (Certificate Authority). This is a container format and includes root certificates. Possible values:
Related options:
|
ca_path = $state_path/CA |
(String) Directory path where root CA is located. Related options:
|
crl_file = crl.pem |
(String) Filename of root Certificate Revocation List (CRL). This is a list of certificates that have been revoked, and therefore, entities presenting those (revoked) certificates should no longer be trusted. Related options:
|
key_file = private/cakey.pem |
(String) Filename of a private key. Related options:
|
keys_path = $state_path/keys |
(String) Directory path where keys are located. Related options:
|
project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s |
(String) Subject for certificate for projects, %s for project, timestamp |
use_project_ca = False |
(Boolean) Option to enable/disable use of CA for each project. |
user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s |
(String) Subject for certificate for users, %s for project, user, timestamp |
[ssl] | |
ca_file = None |
(String) CA certificate file to use to verify connecting clients. |
cert_file = None |
(String) Certificate file to use when starting the server securely. |
ciphers = None |
(String) Sets the list of available ciphers. The value should be a string in the OpenSSL cipher list format. |
key_file = None |
(String) Private key file to use when starting the server securely. |
version = None |
(String) SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. |
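A minimal sketch of the `[ssl]` section for serving the API securely (the file paths are placeholders):

```ini
[ssl]
# Server certificate and private key for secure startup
cert_file = /etc/nova/ssl/server.crt
key_file = /etc/nova/ssl/server.key
# CA certificate used to verify connecting clients
ca_file = /etc/nova/ssl/ca.crt
# Restrict to TLSv1 (valid only if SSL is enabled)
version = TLSv1
```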
Configuration option = Default value | Description |
---|---|
[guestfs] | |
debug = False |
(Boolean) Enables or disables guestfs logging. This configures guestfs to debug messages and pushes them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. To use this feature, the “libguestfs” package must be installed. Related options: Since libguestfs accesses and modifies VMs managed by libvirt, the following options should be set to grant access to those VMs: * libvirt.inject_key * libvirt.inject_partition * libvirt.inject_password |
[remote_debug] | |
host = None |
(String) Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host. Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values:
|
port = None |
(Port number) Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on different host. Note that using the remote debug option changes how Nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values:
|
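For example, to attach a nova service to a debugger running on a development workstation, the `[remote_debug]` section might look like this (address and port are placeholders):

```ini
[remote_debug]
# Host where the debugger is listening
host = 192.0.2.10
# Port the debugger is listening on
port = 5678
```

Note the caveat above: remote debugging changes how Nova uses eventlet and is for development use only.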
Configuration option = Default value | Description |
---|---|
[ephemeral_storage_encryption] | |
cipher = aes-xts-plain64 |
(String) Cipher-mode string to be used The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. Possible values:
|
enabled = False |
(Boolean) Enables/disables LVM ephemeral storage encryption. |
key_size = 512 |
(Integer) Encryption key length in bits The bit length of the encryption key to be used to encrypt ephemeral storage (in XTS mode only half of the bits are used for encryption key). |
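Putting the three options together, an illustrative fragment enabling LVM ephemeral storage encryption (values mirror the defaults shown above):

```ini
[ephemeral_storage_encryption]
# Turn on LVM ephemeral storage encryption
enabled = True
# Cipher-mode combination; availability depends on kernel support
cipher = aes-xts-plain64
# In XTS mode only half of the bits are used for the encryption key
key_size = 512
```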
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
fping_path = /usr/sbin/fping |
(String) The full path to the fping binary. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
osapi_glance_link_prefix = None |
(String) This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged. Possible values:
|
[glance] | |
allowed_direct_url_schemes = |
(List) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file]. |
api_insecure = False |
(Boolean) Allow performing insecure SSL (https) requests to glance |
api_servers = None |
(List) A list of the glance api servers endpoints available to nova. These should be fully qualified urls of the form “scheme://hostname:port[/path]” (e.g. “http://10.0.1.0:9292” or “https://my.glance.server/image”) |
debug = False |
(Boolean) Enable or disable debug logging with glanceclient. |
num_retries = 0 |
(Integer) Number of retries when uploading / downloading an image to / from glance. |
use_glance_v1 = False |
(Boolean) DEPRECATED: This flag allows reverting to glance v1 if for some reason glance v2 doesn’t work in your environment. This will only exist in Newton, and a fully working Glance v2 will be a hard requirement in Ocata.
|
verify_glance_signatures = False |
(Boolean) Require Nova to perform signature verification on each image downloaded from Glance. |
[image_file_url] | |
filesystems = |
(List) DEPRECATED: List of file systems that are configured in this file in the image_file_url:<list entry name> sections The feature to download images from glance via filesystem is not used and will be removed in the future. |
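A minimal illustrative `[glance]` fragment based on the options above (endpoints are placeholders):

```ini
[glance]
# Fully qualified URLs of the glance-api endpoints available to nova
api_servers = http://glance1.example.org:9292,https://glance2.example.org:9292
# Reject untrusted SSL certificates
api_insecure = False
# Retry image uploads/downloads a few times before failing
num_retries = 3
```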
Configuration option = Default value | Description |
---|---|
[hyperv] | |
dynamic_memory_ratio = 1.0 |
(Floating point) Dynamic memory ratio Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup. Possible values:
|
enable_instance_metrics_collection = False |
(Boolean) Enable instance metrics collection Enables metrics collection for an instance by using Hyper-V’s metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer. |
enable_remotefx = False |
(Boolean) Enable RemoteFX feature This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer, and the RDS-Virtualization feature has to be enabled. Instances with RemoteFX can be requested with the following flavor extra specs: os:resolution. Guest VM screen resolution size. Acceptable values: 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160
os:monitors. Guest VM number of monitors. Acceptable values: [1, 4] - Windows / Hyper-V Server 2012 R2 [1, 8] - Windows / Hyper-V Server 2016
os:vram. Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values: 64, 128, 256, 512, 1024
|
instances_path_share = |
(String) Instances path share The name of a Windows share mapped to the “instances_path” dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same “instances_path” used locally. Possible values:
Related options:
|
limit_cpu_features = False |
(Boolean) Limit CPU features This flag is needed to support live migration to hosts with different CPU features and checked during instance creation in order to limit the CPU features used by the instance. |
mounted_disk_query_retry_count = 10 |
(Integer) Mounted disk query retry count The number of times to retry checking for a disk mounted via iSCSI. During long stress runs the WMI query that is looking for the iSCSI device number can incorrectly return no data. If the query is retried the appropriate data can then be obtained. The query runs until the device can be found or the retry count is reached. Possible values:
Related options:
|
mounted_disk_query_retry_interval = 5 |
(Integer) Mounted disk query retry interval Interval between checks for a mounted iSCSI disk, in seconds. Possible values:
Related options:
|
power_state_check_timeframe = 60 |
(Integer) Power state check timeframe The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe. Possible values:
|
power_state_event_polling_interval = 2 |
(Integer) Power state event polling interval Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value. Possible values:
|
qemu_img_cmd = qemu-img.exe |
(String) qemu-img command qemu-img is required for some of the image related operations like converting between different image types. You can get it from here: (http://qemu.weilnetz.de/) or you can install the Cloudbase OpenStack Hyper-V Compute Driver (https://cloudbase.it/openstack-hyperv-driver/) which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option to the default value. Possible values:
Related options:
|
vswitch_name = None |
(String) External virtual switch name The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private). Possible values:
|
wait_soft_reboot_seconds = 60 |
(Integer) Wait soft reboot seconds Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window. Possible values:
|
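An illustrative `[hyperv]` fragment combining several of the options above (the switch name and qemu-img path are placeholders for values from your environment):

```ini
[hyperv]
# Allocate half of an instance's RAM at startup (dynamic memory / ballooning)
dynamic_memory_ratio = 2.0
# Attach the configuration drive as a CD drive instead of a disk drive
config_drive_cdrom = True
# Name of an existing *external* Hyper-V virtual switch
vswitch_name = external-vswitch
# Full path to qemu-img.exe, if it is not on PATH
qemu_img_cmd = C:\Program Files\qemu\qemu-img.exe
```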
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
default_ephemeral_format = None |
(String) The default format an ephemeral_volume will be formatted with on creation. Possible values:
|
force_raw_images = True |
(Boolean) Force conversion of backing images to raw format. Possible values:
Interdependencies to other options:
|
pointer_model = usbtablet |
(String) Generic property to specify the pointer type. Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement. If set, the ‘hw_pointer_model’ image property takes precedence over this configuration option. Possible values:
Interdependencies to other options:
|
preallocate_images = none |
(String) The image preallocation mode to use. Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn’t available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation. Possible values:
|
timeout_nbd = 10 |
(Integer) Amount of time, in seconds, to wait for NBD device start up. |
use_cow_images = True |
(Boolean) Enable use of copy-on-write (cow) images. QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used. |
vcpu_pin_set = None |
(String) Defines which physical CPUs (pCPUs) can be used by instance virtual CPUs (vCPUs). Possible values:
|
virt_mkfs = [] |
(Multi-valued) Name of the mkfs commands for ephemeral device. The format is <os_type>=<mkfs command> |
Configuration option = Default value | Description |
---|---|
[ironic] | |
admin_password = None |
(String) DEPRECATED: Ironic keystone admin password. Use password instead. |
admin_tenant_name = None |
(String) DEPRECATED: Ironic keystone tenant name. Use project_name instead. |
admin_url = None |
(String) DEPRECATED: Keystone public API endpoint. Use auth_url instead. |
admin_username = None |
(String) DEPRECATED: Ironic keystone admin name. Use username instead. |
api_endpoint = http://ironic.example.org:6385/ |
(String) URL override for the Ironic API endpoint. |
api_max_retries = 60 |
(Integer) The number of times to retry when a request conflicts. If set to 0, only try once, no retries. Related options:
|
api_retry_interval = 2 |
(Integer) The number of seconds to wait before retrying the request. Related options:
|
auth_section = None |
(Unknown) Config Section from which to load plugin specific options |
auth_type = None |
(Unknown) Authentication type to load |
cafile = None |
(String) PEM encoded Certificate Authority to use when verifying HTTPs connections. |
certfile = None |
(String) PEM encoded client certificate cert file |
insecure = False |
(Boolean) Verify HTTPS connections. |
keyfile = None |
(String) PEM encoded client certificate key file |
timeout = None |
(Integer) Timeout value for http requests |
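A sketch of the non-deprecated `[ironic]` options, using the keystoneauth-style `auth_section`/`auth_type` settings (the endpoint and section name are placeholders):

```ini
[ironic]
# URL override for the Ironic API endpoint
api_endpoint = http://ironic.example.org:6385/
# Load authentication plugin options from a named config section
auth_type = password
auth_section = ironic_auth
# Retry conflicting requests up to 60 times, 2 seconds apart
api_max_retries = 60
api_retry_interval = 2
```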
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
fixed_range_v6 = fd00::/48 |
(String) This option determines the fixed IPv6 address block when creating a network. Please note that this option is only used when using nova-network instead of Neutron in your deployment. Possible values:
Related options:
|
gateway_v6 = None |
(String) This is the default IPv6 gateway. It is used only in the testing suite. Please note that this option is only used when using nova-network instead of Neutron in your deployment. Possible values:
Related options:
|
ipv6_backend = rfc2462 |
(String) Abstracts out IPv6 address generation to pluggable backends. nova-network can be put into dual-stack mode, so that it uses both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances acquire IPv6 global unicast addresses with the help of stateless address auto-configuration mechanism. Related options:
|
use_ipv6 = False |
(Boolean) Assign IPv6 and IPv4 addresses when creating instances. Related options:
|
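For a nova-network deployment in dual-stack mode, the options above might be combined as follows (illustrative only; these options are unused with Neutron):

```ini
[DEFAULT]
# Assign both IPv4 and IPv6 addresses to new instances
use_ipv6 = True
# Fixed IPv6 address block used when creating a network
fixed_range_v6 = fd00::/48
# Stateless address auto-configuration backend
ipv6_backend = rfc2462
```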
Configuration option = Default value | Description |
---|---|
[key_manager] | |
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager |
(String) The full class name of the key manager API class |
fixed_key = None |
(String) Fixed key returned by key manager, specified in hex. Possible values:
|
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
ldap_dns_base_dn = ou=hosts,dc=example,dc=org |
(String) Base DN for DNS entries in LDAP |
ldap_dns_password = password |
(String) Password for LDAP DNS |
ldap_dns_servers = ['dns.example.org'] |
(Multi-valued) DNS Servers for LDAP DNS driver |
ldap_dns_soa_expiry = 86400 |
(String) Expiry interval (in seconds) for LDAP DNS driver Start of Authority |
ldap_dns_soa_hostmaster = hostmaster@example.org |
(String) Hostmaster for LDAP DNS driver Start of Authority |
ldap_dns_soa_minimum = 7200 |
(String) Minimum interval (in seconds) for LDAP DNS driver Start of Authority |
ldap_dns_soa_refresh = 1800 |
(String) Refresh interval (in seconds) for LDAP DNS driver Start of Authority |
ldap_dns_soa_retry = 3600 |
(String) Retry interval (in seconds) for LDAP DNS driver Start of Authority |
ldap_dns_url = ldap://ldap.example.com:389 |
(String) URL for LDAP server which will store DNS entries |
ldap_dns_user = uid=admin,ou=people,dc=example,dc=org |
(String) User for LDAP DNS |
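An illustrative LDAP DNS driver fragment built from the defaults above (server, bind DN, and password are placeholders):

```ini
[DEFAULT]
# LDAP server that stores the DNS entries
ldap_dns_url = ldap://ldap.example.com:389
ldap_dns_user = uid=admin,ou=people,dc=example,dc=org
ldap_dns_password = LDAP_DNS_PASS
# Base DN under which host entries are created
ldap_dns_base_dn = ou=hosts,dc=example,dc=org
# SOA record contact for the zone
ldap_dns_soa_hostmaster = hostmaster@example.org
```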
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
remove_unused_base_images = True |
(Boolean) Should unused base images be removed? |
remove_unused_original_minimum_age_seconds = 86400 |
(Integer) Unused unresized base images younger than this will not be removed |
[libvirt] | |
checksum_base_images = False |
(Boolean) DEPRECATED: Write a checksum for files in _base to disk The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level. |
checksum_interval_seconds = 3600 |
(Integer) DEPRECATED: How frequently to checksum base images The image cache no longer periodically calculates checksums of stored images. Data integrity can be checked at the block or filesystem level. |
connection_uri = |
(String) Overrides the default libvirt URI of the chosen virtualization type. If set, Nova will use this URI to connect to libvirt. Possible values:
Related options:
|
cpu_mode = None |
(String) Is used to set the CPU mode an instance should have. If virt_type=”kvm|qemu”, it will default to “host-model”, otherwise it will default to “none”. Possible values:
Related options:
|
cpu_model = None |
(String) Set the name of the libvirt CPU model the instance should use. Possible values:
Related options:
|
disk_cachemodes = |
(List) Specific cachemodes to use for different disk types e.g: file=directsync,block=none |
disk_prefix = None |
(String) Override the default disk prefix for the devices attached to an instance. If set, this is used to identify a free disk device name for a bus. Possible values:
Related options:
|
enabled_perf_events = |
(List) This is a performance event list which could be used for monitoring. These events will be passed to the libvirt domain XML when creating new instances. Event statistics data can then be collected from libvirt. The minimum libvirt version is 2.0.0. For more information about performance monitoring events, refer to https://libvirt.org/formatdomain.html#elementsPerf.
|
gid_maps = |
(List) List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed. |
hw_disk_discard = None |
(String) Discard option for nova managed disks. Requires libvirt >= 1.0.6, QEMU >= 1.5 (raw format), and QEMU >= 1.6 (qcow2 format). |
hw_machine_type = None |
(List) For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the “virsh capabilities” command. The format of the value for this config option is host-arch=machine-type. For example: x86_64=machinetype1,armv7l=machinetype2 |
image_info_filename_pattern = $instances_path/$image_cache_subdirectory_name/%(image)s.info |
(String) DEPRECATED: Allows image information files to be stored in non-standard locations Image info files are no longer used by the image cache |
images_rbd_ceph_conf = |
(String) Path to the ceph configuration file to use |
images_rbd_pool = rbd |
(String) The RADOS pool in which rbd volumes are stored |
images_type = default |
(String) VM Images format. If default is specified, then use_cow_images flag is used instead of this one. |
images_volume_group = None |
(String) LVM Volume Group that is used for VM images, when you specify images_type=lvm. |
inject_key = False |
(Boolean) Allow the injection of an SSH key at boot time. There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call, will be injected as an SSH key for the root user. This config option enables directly modifying the instance disk and does not affect what cloud-init may do using data from the config_drive option or the metadata service. Related options:
|
inject_partition = -2 |
(Integer) Determines how the file system is chosen to inject data into. libguestfs will be used as a first solution to inject data. If that’s not available on the host, the image will be locally mounted on the host as a fallback solution. If libguestfs is not able to determine the root partition (because there is more or less than one root partition) or cannot mount the file system, it will result in an error and the instance won’t boot. Possible values:
Related options:
|
inject_password = False |
(Boolean) Allow the injection of an admin password for an instance at creation and rebuild time. There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call, will be injected as the password for the root user. If no root user is available, the instance won’t be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume. Possible values:
Related options:
|
iscsi_iface = None |
(String) The iSCSI transport iface to use to connect to target in case offload support is desired. Default format is of the form <transport_name>.<hwaddress> where <transport_name> is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and <hwaddress> is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name. |
iser_use_multipath = False |
(Boolean) Use multipath connection of the iSER volume |
mem_stats_period_seconds = 10 |
(Integer) The period, in seconds, for collecting memory usage statistics. A zero or negative value disables memory usage statistics. |
realtime_scheduler_priority = 1 |
(Integer) In a realtime host context, guest vCPUs will run at this scheduling priority. Priority depends on the host kernel (usually 1-99) |
remove_unused_resized_minimum_age_seconds = 3600 |
(Integer) Unused resized base images younger than this will not be removed |
rescue_image_id = None |
(String) The ID of the image to boot from to rescue data from a corrupted instance. If the rescue REST API operation doesn’t provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used. Possible values:
Related options:
|
rescue_kernel_id = None |
(String) The ID of the kernel (AKI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image. Possible values:
Related options:
|
rescue_ramdisk_id = None |
(String) The ID of the RAM disk (ARI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon’s AMI/AKI/ARI image format is used for the rescue image. Possible values:
Related options:
|
rng_dev_path = None |
(String) A path to a device that will be used as source of entropy on the host. Permitted options are: /dev/random or /dev/hwrng |
snapshot_compression = False |
(Boolean) Compress snapshot images when possible. This currently applies exclusively to qcow2 images |
snapshot_image_format = None |
(String) Snapshot image format. Defaults to same as source image |
snapshots_directory = $instances_path/snapshots |
(String) Location where libvirt driver will store snapshots before uploading them to image service |
sparse_logical_volumes = False |
(Boolean) Create sparse logical volumes (with virtualsize) if this flag is set to True. |
sysinfo_serial = auto |
(String) The data source used to populate the host “serial” UUID exposed to guest in the virtual BIOS. |
uid_maps = |
(List) List of uid targets and ranges. Syntax is guest-uid:host-uid:count. Maximum of 5 allowed. |
use_usb_tablet = True |
(Boolean) DEPRECATED: Enable a mouse cursor within graphical VNC or SPICE sessions. This will only be taken into account if the VM is fully virtualized and VNC and/or SPICE is enabled. If the node doesn’t support a graphical framebuffer, then it is valid to set this to False. Related options:
|
use_virtio_for_bridges = True |
(Boolean) Use virtio for bridge interfaces with KVM/QEMU |
virt_type = kvm |
(String) Describes the virtualization type (or so called domain type) libvirt should use. The choice of this type must match the underlying virtualization strategy you have chosen for this host. Possible values:
Related options:
|
volume_clear = zero |
(String) Method used to wipe old volumes. |
volume_clear_size = 0 |
(Integer) Size in MiB to wipe at start of old volumes. 0 => all |
volume_use_multipath = False |
(Boolean) Use multipath connection of the iSCSI or FC volume |
vzstorage_cache_path = None |
(String) Path to the SSD cache file. You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client’s SSD drive, you can increase the overall cluster performance by up to 10 or more times. WARNING! Many SSD models are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used with Vstorage, as they may lead to data corruption and inconsistencies. Please consult the manual for SSD models that are known to be safe, or verify using the vstorage-hwflush-check(1) utility. This option defines the path, which should include the “%(cluster_name)s” template to separate caches from multiple shares.
|
vzstorage_log_path = /var/log/pstorage/%(cluster_name)s/nova.log.gz |
(String) Path to vzstorage client log. This option defines the log of cluster operations, it should include “%(cluster_name)s” template to separate logs from multiple shares.
|
vzstorage_mount_group = qemu |
(String) Mount owner group name. This option defines the owner group of Vzstorage cluster mountpoint.
|
vzstorage_mount_opts = |
(List) Extra mount options for pstorage-mount For full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html Format is a python string representation of arguments list, like: “[‘-v’, ‘-R’, ‘500’]” Shouldn’t include -c, -l, -C, -u, -g and -m as those have explicit vzstorage_* options.
|
vzstorage_mount_perms = 0770 |
(String) Mount access mode. This option defines the access bits of Vzstorage cluster mountpoint, in the format similar to one of chmod(1) utility, like this: 0770. It consists of one to four digits ranging from 0 to 7, with missing lead digits assumed to be 0’s.
|
vzstorage_mount_point_base = $state_path/mnt |
(String) Directory where the Virtuozzo Storage clusters are mounted on the compute node. This option defines non-standard mountpoint for Vzstorage cluster.
|
vzstorage_mount_user = stack |
(String) Mount owner user name. This option defines the owner user of Vzstorage cluster mountpoint.
|
wait_soft_reboot_seconds = 120 |
(Integer) Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window. |
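An illustrative `[libvirt]` fragment drawing on several of the options in this table (values are examples, not recommendations, and assume KVM with backing storage that supports discard):

```ini
[libvirt]
# Virtualization (domain) type; must match the host's hypervisor
virt_type = kvm
# Expose the host CPU model to guests
cpu_mode = host-model
# Store VM images as qcow2 (overrides the use_cow_images flag)
images_type = qcow2
# Pass discard/TRIM through to backing storage
hw_disk_discard = unmap
# Per-disk-type cache modes
disk_cachemodes = file=directsync,block=none
```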
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
live_migration_retry_count = 30 |
(Integer) Maximum number of 1 second retries in live_migration. It specifies the number of retries against iptables when it complains. This happens when a user continuously sends live-migration requests to the same host, leading to concurrent requests to iptables. Possible values:
|
max_concurrent_live_migrations = 1 |
(Integer) Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment. Possible values:
|
[libvirt] | |
live_migration_bandwidth = 0 |
(Integer) Maximum bandwidth(in MiB/s) to be used during migration. If set to 0, will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details |
live_migration_completion_timeout = 800 |
(Integer) Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts. Mutable: this option can be changed without restarting the service. |
live_migration_downtime = 500 |
(Integer) Maximum permitted downtime, in milliseconds, for live migration switchover. Will be rounded up to a minimum of 100ms. Use a large value if guest liveness is unimportant. |
live_migration_downtime_delay = 75 |
(Integer) Time to wait, in seconds, between each step increase of the migration downtime. Minimum delay is 10 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device |
live_migration_downtime_steps = 10 |
(Integer) Number of incremental steps to reach max downtime value. Will be rounded up to a minimum of 3 steps |
live_migration_inbound_addr = None |
(String) Live migration target ip or hostname (if this option is set to None, which is the default, the hostname of the migration target compute node will be used) |
live_migration_permit_auto_converge = False |
(Boolean) This option allows nova to start live migration with auto converge on. Auto converge throttles down CPU if a progress of on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use. Auto converge requires libvirt>=1.2.3 and QEMU>=1.6.0. Related options:
|
live_migration_permit_post_copy = False |
(Boolean) This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0. When permitted, post-copy mode will be automatically activated if a live-migration memory copy iteration does not make percentage increase of at least 10% over the last iteration. The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete. When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide. Related options:
|
live_migration_progress_timeout = 150 |
(Integer) Time to wait, in seconds, for migration to make forward progress in transferring data before aborting the operation. Set to 0 to disable timeouts. Mutable: this option can be changed without restarting the service. |
live_migration_tunnelled = False |
(Boolean) Whether to use tunnelled migration, where migration data is transported over the libvirtd connection. If True, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example the availability of native encryption support in the hypervisor. |
live_migration_uri = None |
(String) Override the default libvirt live migration target URI (which is dependent on virt_type) (any included “%s” is replaced with the migration target hostname) |
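A sketch of live-migration tuning combining the options above (values are illustrative; check your libvirt/QEMU versions support auto-converge and post-copy before enabling them):

```ini
[DEFAULT]
# Allow two outbound live migrations at once
max_concurrent_live_migrations = 2

[libvirt]
# Throttle guest CPU if migration progress is slow (libvirt>=1.2.3, QEMU>=1.6.0)
live_migration_permit_auto_converge = True
# Permit switching to post-copy mode (libvirt>=1.3.3, QEMU>=2.5.0)
live_migration_permit_post_copy = True
# Maximum switchover downtime in milliseconds
live_migration_downtime = 500
# Per-GiB completion timeout in seconds; 0 disables the timeout
live_migration_completion_timeout = 800
```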
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
metadata_cache_expiration = 15 |
(Integer) This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect. |
metadata_host = $my_ip |
(String) This option determines the IP address for the network metadata API server. Possible values:
Related options:
|
metadata_listen = 0.0.0.0 |
(String) The IP address on which the metadata API will listen. |
metadata_listen_port = 8775 |
(Port number) The port on which the metadata API will listen. |
metadata_manager = nova.api.manager.MetadataManager |
(String) DEPRECATED: OpenStack metadata service manager |
metadata_port = 8775 |
(Port number) This option determines the port used for the metadata API server. Related options:
|
metadata_workers = None |
(Integer) Number of workers for metadata service. The default will be the number of CPUs available. |
vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData |
(String) DEPRECATED: When returning instance metadata, this is the class that is used for getting vendor metadata when that class isn’t specified in the individual request. The value should be the full dot-separated path to the class to use. Possible values:
|
vendordata_dynamic_connect_timeout = 5 |
(Integer) Maximum wait time for an external REST service to connect. Possible values:
Related options:
|
vendordata_dynamic_read_timeout = 5 |
(Integer) Maximum wait time for an external REST service to return data once connected. Possible values:
Related options:
|
vendordata_dynamic_ssl_certfile = |
(String) Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services SSL certificates against. Possible values:
Related options:
|
vendordata_dynamic_targets = |
(List) A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>. The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference. |
vendordata_jsonfile_path = None |
(String) Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary. Possible values:
|
vendordata_providers = |
(List) A list of vendordata providers. vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment. There are currently two supported providers: StaticJSON and DynamicJSON. StaticJSON reads a JSON file configured by the flag vendordata_jsonfile_path and places the JSON from that file into vendor_data.json and vendor_data2.json. DynamicJSON is configured via the vendordata_dynamic_targets flag, which is documented separately. For each of the endpoints specified in that flag, a section is added to the vendor_data2.json. For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference. Possible values:
Related options:
|
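A minimal sketch of the metadata and vendordata options above in the `[DEFAULT]` section; the file path and the dynamic target URL are illustrative placeholders:

```ini
[DEFAULT]
# Cache metadata responses for 15 seconds; 0 disables caching
metadata_cache_expiration = 15
metadata_listen = 0.0.0.0
metadata_listen_port = 8775
# Serve static vendor data from a JSON file plus a dynamic REST endpoint
vendordata_providers = StaticJSON,DynamicJSON
vendordata_jsonfile_path = /etc/nova/vendor_data.json
vendordata_dynamic_targets = testing@http://vendordata.example.com/
```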
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allow_same_net_traffic = True |
(Boolean) Determine whether to allow network traffic from the same network. When set to true, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows all instances from all projects unfiltered communication. With VLAN networking, this allows access between instances within the same project. This option only applies when using the nova-network service. Possible values:
Interdependencies to other options:
|
auto_assign_floating_ip = False |
(Boolean) Auto-assign a floating IP to the VM. When set to True, a floating IP is automatically allocated and associated with the VM upon creation. |
cnt_vpn_clients = 0 |
(Integer) This option represents the number of IP addresses to reserve at the top of the address range for VPN clients. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’. Possible values:
Related options:
|
create_unique_mac_address_attempts = 5 |
(Integer) This option determines how many times nova-network will attempt to create a unique MAC address before giving up and raising a VirtualInterfaceMacAddressException error. Possible values:
Related options:
|
default_access_ip_network_name = None |
(String) Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen. Possible values:
|
default_floating_pool = nova |
(String) Default pool for floating IPs. This option specifies the default floating IP pool for allocating floating IPs. While allocating a floating IP, users can optionally pass in the name of the pool they want to allocate from; otherwise it will be pulled from the default pool. If this option is not set, then ‘nova’ is used as the default floating pool. Possible values:
|
defer_iptables_apply = False |
(Boolean) Whether to batch up the application of IPTables rules during a host restart and apply all at the end of the init phase. |
dhcp_domain = novalocal |
(String) This option allows you to specify the domain for the DHCP server. Possible values:
Related options:
|
dhcp_lease_time = 86400 |
(Integer) The lifetime of a DHCP lease, in seconds. The default is 86400 (one day). Possible values:
|
dhcpbridge = $bindir/nova-dhcpbridge |
(String) The location of the binary nova-dhcpbridge. By default it is the binary named ‘nova-dhcpbridge’ that is installed with all the other nova binaries. Possible values:
|
dhcpbridge_flagfile = ['/etc/nova/nova-dhcpbridge.conf'] |
(Multi-valued) This option is a list of full paths to one or more configuration files for dhcpbridge. In most cases the default path of ‘/etc/nova/nova-dhcpbridge.conf’ should be sufficient, but if you have special needs for configuring dhcpbridge, you can change or add to this list. Possible values:
|
dns_server = [] |
(Multi-valued) Despite the singular form of the name of this option, it is actually a list of zero or more server addresses that dnsmasq will use for DNS nameservers. If this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use the servers specified in this option. If the option use_network_dns_servers is True, the dns1 and dns2 servers from the network will be appended to this list, and will be used as DNS servers, too. Possible values:
Related options:
|
dns_update_periodic_interval = -1 |
(Integer) This option determines the time, in seconds, to wait between refreshing DNS entries for the network. Possible values:
Related options:
|
dnsmasq_config_file = |
(String) The path to the custom dnsmasq configuration file, if any. Possible values:
|
ebtables_exec_attempts = 3 |
(Integer) This option determines the number of times to retry ebtables commands before giving up. The minimum number of retries is 1. Possible values:
Related options:
|
ebtables_retry_interval = 1.0 |
(Floating point) This option determines the time, in seconds, that the system will sleep in between ebtables retries. Note that each successive retry waits a multiple of this value, so for example, if this is set to the default of 1.0 seconds, and ebtables_exec_attempts is 4, after the first failure, the system will sleep for 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0 seconds, and after the third failure it will sleep 3 * 1.0 seconds. Possible values:
Related options:
|
firewall_driver = None |
(String) Firewall driver to use with the nova-network service. This option only applies when using the nova-network service. If unset (the default), this will default to the hypervisor-specified default driver. Possible values:
Interdependencies to other options:
|
fixed_ip_disassociate_timeout = 600 |
(Integer) This is the number of seconds to wait before disassociating a deallocated fixed IP address. This is only used with the nova-network service, and has no effect when using neutron for networking. Possible values:
Related options:
|
flat_injected = False |
(Boolean) This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware and xenapi virt drivers to control whether network information is injected into a VM. |
flat_interface = None |
(String) This option is the name of the virtual interface of the VM on which the bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt for the bridge interface name. Possible values:
|
flat_network_bridge = None |
(String) This option determines the bridge used for simple network interfaces when no bridge is specified in the VM creation request. Please note that this option is only used when using nova-network instead of Neutron in your deployment. Possible values:
Related options:
|
flat_network_dns = 8.8.4.4 |
(String) This is the address of the DNS server for a simple network. If this option is not specified, the default of ‘8.8.4.4’ is used. Please note that this option is only used when using nova-network instead of Neutron in your deployment. Possible values:
Related options:
|
floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver |
(String) Full class name for the DNS Manager for floating IPs. This option specifies the class of the driver that provides functionality to manage DNS entries associated with floating IPs. When a user adds a DNS entry for a specified domain to a floating IP, nova will add a DNS entry using the specified floating DNS driver. When a floating IP is deallocated, its DNS entry will automatically be deleted. Possible values:
|
force_dhcp_release = True |
(Boolean) When this option is True, a call is made to release the DHCP for the instance when that instance is terminated. Related options:
|
force_snat_range = [] |
(Multi-valued) This is a list of zero or more IP ranges that traffic from the routing_source_ip will be SNATted to. If the list is empty, then no SNAT rules are created. Possible values:
Related options:
|
forward_bridge_interface = ['all'] |
(Multi-valued) One or more interfaces that bridges can forward traffic to. If any of the items in this list is the special keyword ‘all’, then all traffic will be forwarded. Possible values:
|
gateway = None |
(String) This is the default IPv4 gateway. It is used only in the testing suite. Please note that this option is only used when using nova-network instead of Neutron in your deployment. Possible values:
Related options:
|
injected_network_template = $pybasedir/nova/virt/interfaces.template |
(String) Template file for injected network |
instance_dns_domain = |
(String) If specified, Nova checks if the availability_zone of every instance matches what the database says the availability_zone should be for the specified dns_domain. |
instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver |
(String) Full class name for the DNS Manager for instance IPs. This option specifies the class of the driver that provides functionality to manage DNS entries for instances. On instance creation, nova will add DNS entries for the instance name and id, using the specified instance DNS driver and domain. On instance deletion, nova will remove the DNS entries. Possible values:
|
iptables_bottom_regex = |
(String) This expression, if defined, will select any matching iptables rules and place them at the bottom when applying metadata changes to the rules. Possible values:
Related options:
|
iptables_drop_action = DROP |
(String) By default, packets that do not pass the firewall are DROPped. In many cases, though, an operator may find it more useful to change this from DROP to REJECT, so that the user issuing those packets may have a better idea as to what’s going on, or LOGDROP in order to record the blocked traffic before DROPping. Possible values:
|
iptables_top_regex = |
(String) This expression, if defined, will select any matching iptables rules and place them at the top when applying metadata changes to the rules. Possible values:
Related options:
|
l3_lib = nova.network.l3.LinuxNetL3 |
(String) This option allows you to specify the L3 management library to be used. Possible values:
Related options:
|
linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver |
(String) This is the class used as the ethernet device driver for linuxnet bridge operations. The default value should be all you need for most cases, but if you wish to use a customized class, set this option to the full dot-separated import path for that class. Possible values:
|
linuxnet_ovs_integration_bridge = br-int |
(String) The name of the Open vSwitch bridge that is used with linuxnet when connecting with Open vSwitch. Possible values:
|
multi_host = False |
(Boolean) Default value for multi_host in networks. Also, if set, some rpc network calls will be sent directly to host. |
network_allocate_retries = 0 |
(Integer) Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails. Possible values:
|
network_driver = nova.network.linux_net |
(String) Driver to use for network creation |
network_manager = nova.network.manager.VlanManager |
(String) Full class name for the Manager for network |
network_size = 256 |
(Integer) This option determines the number of addresses in each private subnet. Please note that this option is only used when using nova-network instead of Neutron in your deployment. Possible values:
Related options:
|
network_topic = network |
(String) The topic network nodes listen on |
networks_path = $state_path/networks |
(String) The location where the network configuration files will be kept. The default is the ‘networks’ directory off of the location where nova’s Python module is installed. Possible values:
|
num_networks = 1 |
(Integer) This option represents the number of networks to create if not explicitly specified when the network is created. The only time this is used is if a CIDR is specified, but an explicit network_size is not. In that case, the subnets are created by dividing the IP address space of the CIDR by num_networks. The resulting subnet sizes cannot be larger than the configuration option network_size; in that event, they are reduced to network_size, and a warning is logged. Please note that this option is only used when using nova-network instead of Neutron in your deployment. Possible values:
Related options:
|
ovs_vsctl_timeout = 120 |
(Integer) This option represents the period of time, in seconds, that the ovs_vsctl calls will wait for a response from the database before timing out. A setting of 0 means that the utility should wait forever for a response. Possible values:
|
public_interface = eth0 |
(String) This is the name of the network interface for public IP addresses. The default is ‘eth0’. Possible values:
|
routing_source_ip = $my_ip |
(String) This is the public IP address of the network host. It is used when creating a SNAT rule. Possible values:
Related options:
|
send_arp_for_ha = False |
(Boolean) When True, when a device starts up, and upon binding floating IP addresses, arp messages will be sent to ensure that the arp caches on the compute hosts are up-to-date. Related options:
|
send_arp_for_ha_count = 3 |
(Integer) When arp messages are configured to be sent, they will be sent with the count set to the value of this option. Of course, if this is set to zero, no arp messages will be sent. Possible values:
Related options:
|
share_dhcp_address = False |
(Boolean) DEPRECATED: THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK. If True in multi_host mode, all compute hosts share the same dhcp address. The same IP address used for DHCP will be added on each nova-network node which is only visible to the VMs on the same host. The use of this configuration has been deprecated and may be removed in any release after Mitaka. It is recommended that instead of relying on this option, an explicit value should be passed to ‘create_networks()’ as a keyword argument with the name ‘share_address’. |
teardown_unused_network_gateway = False |
(Boolean) Determines whether unused gateway devices, both VLAN and bridge, are deleted if the network is in nova-network VLAN mode and is multi-hosted. Related options:
|
update_dns_entries = False |
(Boolean) When this option is True, whenever a DNS entry must be updated, a fanout cast message is sent to all network hosts to update their DNS entries in multi-host mode. Related options:
|
use_network_dns_servers = False |
(Boolean) When this option is set to True, the dns1 and dns2 servers for the network specified by the user on boot will be used for DNS, as well as any specified in the dns_server option. Related options:
|
use_neutron = False |
(Boolean) Whether to use Neutron or Nova Network as the back end for networking. Defaults to False (indicating Nova Network). Set to True to use Neutron. |
use_neutron_default_nets = False |
(Boolean) When True, the TenantNetworkController will query the Neutron API to get the default networks to use. Related options:
|
use_single_default_gateway = False |
(Boolean) When set to True, only the first NIC of a VM will get its default gateway from the DHCP server. |
vlan_interface = None |
(String) This option is the name of the virtual interface of the VM on which the VLAN bridge will be built. While it was originally designed to be used only by nova-network, it is also used by libvirt and xenapi for the bridge interface name. Please note that this setting will be ignored in nova-network if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’. Possible values:
|
vlan_start = 100 |
(Integer) This is the VLAN number used for private networks. Note that when creating the networks, if the specified number has already been assigned, nova-network will increment this number until it finds an available VLAN. Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’. Possible values:
Related options:
|
[libvirt] | |
remote_filesystem_transport = ssh |
(String) Use ssh or rsync transport for creating, copying, removing files on the remote host. |
[os_vif_linux_bridge] | |
flat_interface = None |
(String) FlatDhcp will bridge into this interface if set |
forward_bridge_interface = ['all'] |
(Multi-valued) An interface that bridges can forward to. If this is set to all then all traffic will be forwarded. Can be specified multiple times. |
iptables_bottom_regex = |
(String) Regular expression to match the iptables rule that should always be on the bottom. |
iptables_drop_action = DROP |
(String) The iptables target to jump to when a packet is to be dropped. |
iptables_top_regex = |
(String) Regular expression to match the iptables rule that should always be on the top. |
network_device_mtu = 1500 |
(Integer) MTU setting for network interface. |
use_ipv6 = False |
(Boolean) Use IPv6 |
vlan_interface = None |
(String) VLANs will bridge into this interface if set |
[os_vif_ovs] | |
network_device_mtu = 1500 |
(Integer) MTU setting for network interface. |
ovs_vsctl_timeout = 120 |
(Integer) Amount of time, in seconds, that ovs_vsctl should wait for a response from the database. 0 is to wait forever. |
[vif_plug_linux_bridge_privileged] | |
capabilities = [] |
(Unknown) List of Linux capabilities retained by the privsep daemon. |
group = None |
(String) Group that the privsep daemon should run as. |
helper_command = None |
(String) Command to invoke to start the privsep daemon if not using the “fork” method. If not specified, a default is generated using “sudo privsep-helper” and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. |
user = None |
(String) User that the privsep daemon should run as. |
[vif_plug_ovs_privileged] | |
capabilities = [] |
(Unknown) List of Linux capabilities retained by the privsep daemon. |
group = None |
(String) Group that the privsep daemon should run as. |
helper_command = None |
(String) Command to invoke to start the privsep daemon if not using the “fork” method. If not specified, a default is generated using “sudo privsep-helper” and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. |
user = None |
(String) User that the privsep daemon should run as. |
[vmware] | |
vlan_interface = vmnic0 |
(String) This option specifies the physical ethernet adapter name for VLAN networking. Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic. Possible values:
|
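As a hedged sketch, a minimal flat DHCP setup with the legacy nova-network service might combine the options above as follows; the interface and bridge names are illustrative, and FlatDHCPManager is shown instead of the VlanManager default:

```ini
[DEFAULT]
use_neutron = False
network_manager = nova.network.manager.FlatDHCPManager
# Bridge the VMs' virtual interfaces onto eth1 via br100
flat_interface = eth1
flat_network_bridge = br100
public_interface = eth0
# 256 addresses per private subnet, one-day DHCP leases
network_size = 256
dhcp_lease_time = 86400
force_dhcp_release = True
```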
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
neutron_default_tenant_id = default |
(String) Tenant ID for getting the default network from Neutron API (also referred in some places as the ‘project ID’) to use. Related options:
|
[neutron] | |
auth_section = None |
(Unknown) Config Section from which to load plugin specific options |
auth_type = None |
(Unknown) Authentication type to load |
cafile = None |
(String) PEM encoded Certificate Authority to use when verifying HTTPS connections. |
certfile = None |
(String) PEM encoded client certificate cert file |
extension_sync_interval = 600 |
(Integer) Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds the next time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting value to 0 will refresh the extensions with no wait. |
insecure = False |
(Boolean) Verify HTTPS connections. |
keyfile = None |
(String) PEM encoded client certificate key file |
metadata_proxy_shared_secret = |
(String) This option holds the shared secret string used to validate proxy requests to Neutron metadata requests. In order to be used, the ‘X-Metadata-Provider-Signature’ header must be supplied in the request. Related options:
|
ovs_bridge = br-int |
(String) Specifies the name of an integration bridge interface used by OpenvSwitch. This option is used only if Neutron does not specify the OVS bridge name. Possible values:
|
region_name = RegionOne |
(String) Region name for connecting to Neutron in admin context. This option is used in multi-region setups. If there are two Neutron servers running in two regions in two different machines, then two services need to be created in Keystone with two different regions and associate corresponding endpoints to those services. When requests are made to Keystone, the Keystone service uses the region_name to determine the region the request is coming from. |
service_metadata_proxy = False |
(Boolean) When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the ‘X-Instance-ID’ header. Related options:
|
timeout = None |
(Integer) Timeout value for http requests |
url = http://127.0.0.1:9696 |
(URI) This option specifies the URL for connecting to Neutron. Possible values:
|
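The `[neutron]` options above combine as in this sketch; the `controller` hostname and the METADATA_SECRET placeholder are illustrative, and the secret must match the value configured for the Neutron metadata agent:

```ini
[neutron]
url = http://controller:9696
region_name = RegionOne
timeout = 30
# Let Neutron proxy metadata requests and resolve instance IDs
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
```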
Configuration option = Default value | Description |
---|---|
[privsep_osbrick] | |
capabilities = [] |
(Unknown) List of Linux capabilities retained by the privsep daemon. |
group = None |
(String) Group that the privsep daemon should run as. |
helper_command = None |
(String) Command to invoke to start the privsep daemon if not using the “fork” method. If not specified, a default is generated using “sudo privsep-helper” and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. |
user = None |
(String) User that the privsep daemon should run as. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
pci_alias = [] |
(Multi-valued) An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra_spec for a flavor, without needing to repeat all the PCI property requirements. Possible values:
|
pci_passthrough_whitelist = [] |
(Multi-valued) White list of PCI devices available to VMs. Possible values:
|
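Both PCI options take JSON-formatted values. A hedged sketch, with illustrative vendor and product IDs:

```ini
[DEFAULT]
# Make devices matching this vendor/product ID available for passthrough
pci_passthrough_whitelist = { "vendor_id": "8086", "product_id": "1520" }
# Expose the same device class to flavor extra_specs under the alias "a1"
pci_alias = { "vendor_id": "8086", "product_id": "1520", "name": "a1" }
```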
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
periodic_enable = True |
(Boolean) Enable periodic tasks |
periodic_fuzzy_delay = 60 |
(Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allow_instance_snapshots = True |
(Boolean) Operators can turn off the ability for a user to take snapshots of their instances by setting this option to False. When disabled, any attempt to take a snapshot will result in an HTTP 400 response (“Bad Request”). |
allow_resize_to_same_host = False |
(Boolean) Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. |
max_age = 0 |
(Integer) The number of seconds between subsequent usage refreshes. This defaults to 0 (off) to avoid additional load but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out of sync usage issues. Note that quotas are not updated on a periodic task, they will update on a new reservation if max_age has passed since the last reservation. Possible values:
|
max_local_block_devices = 3 |
(Integer) Maximum number of devices that will result in a local image being created on the hypervisor node. A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local disks (so the root local disk that is the result of --image being used, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and boot instances from volumes - it just means that all requests that attempt to create a local disk will fail. Possible values:
|
osapi_compute_unique_server_name_scope = |
(String) Sets the scope of the check for unique instance names. The default doesn’t check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an ‘InstanceExists’ error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don’t have to distinguish among instances with the same name by their IDs. Possible values:
|
osapi_max_limit = 1000 |
(Integer) As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option. |
password_length = 12 |
(Integer) Length of generated instance admin passwords. |
reservation_expire = 86400 |
(Integer) The number of seconds until a reservation expires. It represents the time period for invalidating quota reservations. Possible values:
|
resize_fs_using_block_device = False |
(Boolean) If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw). |
until_refresh = 0 |
(Integer) The count of reservations until usage is refreshed. This defaults to 0 (off) to avoid additional load but it is useful to turn on to help keep quota usage up-to-date and reduce the impact of out of sync usage issues. Possible values:
|
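A hedged example of the API behavior options above in `[DEFAULT]`; the values are illustrative, not recommendations:

```ini
[DEFAULT]
# Reject snapshot requests with HTTP 400
allow_instance_snapshots = False
# Permit resize to the source host (useful on single-host test setups)
allow_resize_to_same_host = True
# Cap list responses at 500 items
osapi_max_limit = 500
# Enforce unique instance names within each project
osapi_compute_unique_server_name_scope = project
```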
Configuration option = Default value | Description |
---|---|
[libvirt] | |
quobyte_client_cfg = None |
(String) Path to a Quobyte Client configuration file. |
quobyte_mount_point_base = $state_path/mnt |
(String) Directory where the Quobyte volume is mounted on the compute node |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
bandwidth_poll_interval = 600 |
(Integer) Interval to pull network bandwidth usage info. Not supported on all hypervisors. Set to -1 to disable. Setting this to 0 will run at the default rate. |
enable_network_quota = False |
(Boolean) DEPRECATED: This option is used to enable or disable quota checking for tenant networks. Related options:
|
quota_cores = 20 |
(Integer) The number of instance cores or VCPUs allowed per project. Possible values:
|
quota_driver = nova.quota.DbQuotaDriver |
(String) DEPRECATED: Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks. Possible values:
|
quota_fixed_ips = -1 |
(Integer) The number of fixed IPs allowed per project (this should be at least the number of instances allowed). Unlike floating IPs, fixed IPs are allocated dynamically by the network component when instances boot up. Possible values:
|
quota_floating_ips = 10 |
(Integer) The number of floating IPs allowed per project. Floating IPs are not allocated to instances by default. Users need to select them from the pool configured by the OpenStack administrator to attach to their instances. Possible values:
|
quota_injected_file_content_bytes = 10240 |
(Integer) The number of bytes allowed per injected file. Possible values:
|
quota_injected_file_path_length = 255 |
(Integer) The maximum allowed injected file path length. Possible values:
|
quota_injected_files = 5 |
(Integer) The number of injected files allowed. It allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted; binary or ZIP files won’t work. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp. Possible values:
|
quota_instances = 10 |
(Integer) The number of instances allowed per project. Possible values:
|
quota_key_pairs = 100 |
(Integer) The maximum number of key pairs allowed per user. Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project. Possible values:
|
quota_metadata_items = 128 |
(Integer) The number of metadata items allowed per instance. User can associate metadata while instance creation in the form of key-value pairs. Possible values:
|
quota_networks = 3 |
(Integer) DEPRECATED: This option controls the number of private networks that can be created per project (or per tenant). Related options:
|
quota_ram = 51200 |
(Integer) The number of megabytes of instance RAM allowed per project. Possible values:
|
quota_security_group_rules = 20 |
(Integer) The number of security rules per security group. The associated rules in each security group control the traffic to instances in the group. Possible values:
|
quota_security_groups = 10 |
(Integer) The number of security groups per project. Possible values:
|
quota_server_group_members = 10 |
(Integer) Add quota values to constrain the number of servers per server group. Possible values:
|
quota_server_groups = 10 |
(Integer) Add quota values to constrain the number of server groups per project. Server group used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota. Possible values:
|
[cells] | |
bandwidth_update_interval = 600 |
(Integer) Bandwidth update interval: seconds between bandwidth usage cache updates for cells. Possible values:
|
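The quota options above live in `[DEFAULT]`; a hedged sketch with illustrative limits (note the -1 convention for unlimited):

```ini
[DEFAULT]
quota_instances = 20
quota_cores = 40
# RAM quota is expressed in megabytes
quota_ram = 102400
quota_floating_ips = 10
# -1 means unlimited fixed IPs
quota_fixed_ips = -1
```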
Configuration option = Default value | Description |
---|---|
[rdp] | |
enabled = False |
(Boolean) Enable Remote Desktop Protocol (RDP) related features. Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V. Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform. Related options:
|
html5_proxy_base_url = http://127.0.0.1:6083/ |
(String) The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance. An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single-node environment, i.e. devstack. An RDP HTML5 proxy allows a user to access, via the web, the text or graphical console of any Windows server or workstation using RDP. RDP HTML5 console proxy services include FreeRDP, wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect Possible values:
Related options:
|
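On a Hyper-V compute node, the two RDP options above combine as in the sketch below. CONTROLLER_IP is a placeholder for the address of the node running the HTML5 proxy service; substitute your own.

```ini
[rdp]
# Enable RDP console access for Hyper-V instances (boolean value)
enabled = True
# URL end users use to reach the RDP HTML5 console proxy;
# CONTROLLER_IP is a placeholder for your controller node
html5_proxy_base_url = http://CONTROLLER_IP:6083/
```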
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
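A sketch of the non-deprecated [matchmaker_redis] options, using the defaults from the table; the deprecated host, port, password, and sentinel_hosts options have been replaced by [DEFAULT]/transport_url and are omitted here.

```ini
[matchmaker_redis]
# Redis replica set name (string value)
sentinel_group_name = oslo-messaging-zeromq
# Time in ms to wait before the transaction is killed
check_timeout = 20000
# Timeout in ms on blocking socket operations
socket_timeout = 10000
# Time in ms to wait between connection attempts
wait_timeout = 2000
```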
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
image_decryption_dir = /tmp |
(String) DEPRECATED: Parent directory for tempdir used for image decryption. EC2 API related options are not supported. |
s3_access_key = notchecked |
(String) DEPRECATED: Access key to use for the S3 server for images. EC2 API related options are not supported. |
s3_affix_tenant = False |
(Boolean) DEPRECATED: Whether to affix the tenant id to the access key when downloading from S3. EC2 API related options are not supported. |
s3_host = $my_ip |
(String) DEPRECATED: Hostname or IP for OpenStack to use when accessing the S3 API. EC2 API related options are not supported. |
s3_port = 3333 |
(Port number) DEPRECATED: Port used when accessing the S3 API. It should be in the range of 1 - 65535. EC2 API related options are not supported. |
s3_secret_key = notchecked |
(String) DEPRECATED: Secret key to use for the S3 server for images. EC2 API related options are not supported. |
s3_use_ssl = False |
(Boolean) DEPRECATED: Whether to use SSL when talking to S3. EC2 API related options are not supported. |
Configuration option = Default value | Description |
---|---|
[serial_console] | |
base_url = ws://127.0.0.1:6083/ |
(String) The URL an end user would use to connect to the nova-serialproxy service. The nova-serialproxy service is called with this URL and establishes the connection to the proper instance. Possible values:
Services which consume this:
Interdependencies to other options:
|
enabled = False |
(Boolean) Enable the serial console feature. In order to use this feature, the service nova-serialproxy needs to run. Possible values:
Services which consume this:
Interdependencies to other options:
|
port_range = 10000:20000 |
(String) A range of TCP ports a guest can use for its backend. Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, this instance won’t get launched. Possible values: A string of the form ‘start:stop’, for example the default 10000:20000. Services which consume this:
Interdependencies to other options:
|
proxyclient_address = 127.0.0.1 |
(String) The IP address to which proxy clients (like nova-serialproxy) should connect. This is typically the IP address of the host of a nova-compute service. Possible values:
Services which consume this:
Interdependencies to other options:
|
serialproxy_host = 0.0.0.0 |
(String) The IP address which is used by the nova-serialproxy service to listen for incoming requests. The nova-serialproxy service listens on this IP address for incoming connections. Possible values:
Services which consume this:
Interdependencies to other options:
|
serialproxy_port = 6083 |
(Port number) The port number which is used by the nova-serialproxy service to listen for incoming requests. The nova-serialproxy service listens on this port number for incoming connections. Possible values:
Services which consume this:
Interdependencies to other options:
|
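Taken together, the [serial_console] options above might look like this in nova.conf. CONTROLLER_IP and COMPUTE_IP are placeholders for your own addresses; the snippet is a sketch of the option layout, not a verified deployment.

```ini
[serial_console]
# Enable the serial console feature (boolean value)
enabled = True
# Public URL end users use to reach the serial console proxy;
# CONTROLLER_IP is a placeholder
base_url = ws://CONTROLLER_IP:6083/
# TCP port range guests may use for their serial console backends
port_range = 10000:20000
# Address proxy clients connect to (typically the compute host IP)
proxyclient_address = COMPUTE_IP
```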
Configuration option = Default value | Description |
---|---|
[spice] | |
agent_enabled = True |
(Boolean) Enable the spice guest agent support. |
enabled = False |
(Boolean) Enable spice related features. |
html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html |
(String) Location of spice HTML5 console proxy, in the form “http://127.0.0.1:6082/spice_auto.html“ |
html5proxy_host = 0.0.0.0 |
(String) Host on which to listen for incoming requests |
html5proxy_port = 6082 |
(Port number) Port on which to listen for incoming requests |
keymap = en-us |
(String) Keymap for spice |
server_listen = 127.0.0.1 |
(String) IP address on which instance spice server should listen |
server_proxyclient_address = 127.0.0.1 |
(String) The address to which proxy clients (like nova-spicehtml5proxy) should connect |
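A minimal [spice] configuration assembled from the options above. CONTROLLER_IP and COMPUTE_IP are placeholders; listening on 0.0.0.0 for server_listen is an illustrative choice, not a stated default.

```ini
[spice]
# Enable SPICE features and the guest agent (boolean values)
enabled = True
agent_enabled = True
# Public location of the SPICE HTML5 console proxy;
# CONTROLLER_IP is a placeholder
html5proxy_base_url = http://CONTROLLER_IP:6082/spice_auto.html
# Address the instance SPICE server should listen on (compute node)
server_listen = 0.0.0.0
# Address proxy clients (like nova-spicehtml5proxy) should connect to
server_proxyclient_address = COMPUTE_IP
```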
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
fake_network = False |
(Boolean) This option is used mainly in testing to avoid calls to the underlying network utilities. |
monkey_patch = False |
(Boolean) Determine if monkey patching should be applied. Related options:
|
monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator |
(List) List of modules/decorators to monkey patch. This option allows you to patch a decorator for all functions in specified modules. Possible values:
Related options:
|
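Enabling monkey patching pairs the two options above, for example to attach the notification decorator to the compute API module. The values shown are the documented defaults; this is a sketch only.

```ini
[DEFAULT]
# Apply monkey patching at service start (boolean value)
monkey_patch = True
# module:decorator pairs to patch (list value)
monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator
```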
Configuration option = Default value | Description |
---|---|
[trusted_computing] | |
attestation_api_url = /OpenAttestationWebServices/V1.0 |
(String) The URL on the attestation server to use. See the attestation_server help text for more information about host verification. This value must be just that path portion of the full URL, as it will be joined to the host specified in the attestation_server option. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.
|
attestation_auth_blob = None |
(String) Attestation servers require a specific blob that is used to authenticate. The content and format of the blob are determined by the particular attestation server being used. There is no default value; you must supply the value as specified by your attestation service. See the attestation_server help text for more information about host verification. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.
|
attestation_auth_timeout = 60 |
(Integer) This value controls how long a successful attestation is cached. Once this period has elapsed, a new attestation request will be made. See the attestation_server help text for more information about host verification. The value is in seconds. Valid values must be positive integers for any caching; setting this to zero or a negative value will result in calls to the attestation_server for every request, which may impact performance. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.
|
attestation_insecure_ssl = False |
(Boolean) When set to True, the SSL certificate verification is skipped for the attestation service. See the attestation_server help text for more information about host verification. Valid values are True or False. The default is False. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.
|
attestation_port = 8443 |
(String) The port to use when connecting to the attestation server. See the attestation_server help text for more information about host verification. Valid values are strings, not integers, but must be digits only. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.
|
attestation_server = None |
(String) The host to use as the attestation server. Cloud computing pools can involve thousands of compute nodes located at different geographical locations, making it difficult for cloud providers to identify a node’s trustworthiness. When using the Trusted filter, users can request that their VMs only be placed on nodes that have been verified by the attestation server specified in this option. The value is a string, and can be either an IP address or FQDN. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.
|
attestation_server_ca_file = None |
(String) The absolute path to the certificate to use for authentication when connecting to the attestation server. See the attestation_server help text for more information about host verification. The value is a string, and must point to a file that is readable by the scheduler. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘TrustedFilter’ filter is enabled.
|
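A sketch of a [trusted_computing] section using the options above. ATTESTATION_HOST and the CA file path are placeholders; as noted in every description, these options only affect scheduling when the FilterScheduler is used with the ‘TrustedFilter’ filter enabled.

```ini
[trusted_computing]
# Host of the attestation server; ATTESTATION_HOST is a placeholder
attestation_server = ATTESTATION_HOST
# Port is a digits-only string, not an integer
attestation_port = 8443
attestation_api_url = /OpenAttestationWebServices/V1.0
# CA certificate used to verify the attestation server (placeholder path)
attestation_server_ca_file = /etc/nova/attestation-ca.pem
# Seconds a successful attestation is cached
attestation_auth_timeout = 60
```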
Configuration option = Default value | Description |
---|---|
[cells] | |
scheduler = nova.cells.scheduler.CellsScheduler |
(String) Cells scheduler. The class of the driver used by the cells scheduler. This should be the full Python path to the class to be used. If nothing is specified in this option, the CellsScheduler is used. |
[upgrade_levels] | |
baseapi = None |
(String) Set a version cap for messages sent to the base api in any service |
cells = None |
(String) Cells version. Cells client-side RPC API version. Use this option to set a version cap for messages sent to local cells services. Possible values:
Services which consume this:
Related options:
|
cert = None |
(String) Specifies the maximum version for messages sent from cert services. This should be the minimum value that is supported by all of the deployed cert services. Possible values: Any valid OpenStack release name, in lower case, such as ‘mitaka’ or ‘liberty’. Alternatively, it can be any string representing a version number in the format ‘N.N’; for example, possible values might be ‘1.12’ or ‘2.0’. Services which consume this:
Related options:
|
compute = None |
(String) Set a version cap for messages sent to compute services. Set this option to “auto” if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment. Otherwise, you can set this to a specific version to pin this service to messages at a particular level. All services of a single type (i.e. compute) should be configured to use the same version, and it should be set to the minimum commonly-supported version of all those services in the deployment. |
conductor = None |
(String) Set a version cap for messages sent to conductor services |
console = None |
(String) Set a version cap for messages sent to console services |
consoleauth = None |
(String) Set a version cap for messages sent to consoleauth services |
intercell = None |
(String) Intercell version. The intercell RPC API is the client side of the Cell<->Cell RPC API. Use this option to set a version cap for messages sent between cells services. Possible values:
Services which consume this:
Related options:
|
network = None |
(String) Set a version cap for messages sent to network services |
scheduler = None |
(String) Sets a version cap (limit) for messages sent to scheduler services. In the situation where there were multiple scheduler services running, and they were not being upgraded together, you would set this to the lowest deployed version to guarantee that other services never send messages that any of your running schedulers cannot understand. This is rarely needed in practice as most deployments run a single scheduler. It exists mainly for design compatibility with the other services, such as compute, which are routinely upgraded in a rolling fashion. Services that use this:
Related options:
|
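During a rolling upgrade, the [upgrade_levels] caps above are typically pinned as in the sketch below. The release name ‘mitaka’ and the version number are illustrative examples of the two accepted formats, not recommendations for any specific deployment.

```ini
[upgrade_levels]
# Let the compute RPC module pick the version from deployed services
compute = auto
# Pin other services to a release name, in lower case...
conductor = mitaka
# ...or to an explicit 'N.N' RPC version number
scheduler = 4.11
```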
Configuration option = Default value | Description |
---|---|
[vmware] | |
api_retry_count = 10 |
(Integer) Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc. |
ca_file = None |
(String) Specifies the CA bundle file to be used in verifying the vCenter server certificate. |
cache_prefix = None |
(String) This option adds a prefix to the folder where cached images are stored. This is not the full path, just a folder prefix. This should only be used when a datastore cache is shared between compute nodes. Note: This should only be used when the compute nodes are running on the same host or they have a shared file system. Possible values:
|
cluster_name = None |
(String) Name of a VMware Cluster ComputeResource. |
console_delay_seconds = None |
(Integer) Set this value if affected by an increased network latency causing repeated characters when typing in a remote console. |
datastore_regex = None |
(String) Regular expression pattern to match the name of datastore. The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex=”nas.*” selects all the data stores that have a name starting with “nas”. NOTE: If no regex is given, it just picks the datastore with the most free space. Possible values:
|
host_ip = None |
(String) Hostname or IP address for connection to VMware vCenter host. |
host_password = None |
(String) Password for connection to VMware vCenter host. |
host_port = 443 |
(Port number) Port for connection to VMware vCenter host. |
host_username = None |
(String) Username for connection to VMware vCenter host. |
insecure = False |
(Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. Related options:
|
integration_bridge = None |
(String) This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set. Possible values:
|
maximum_objects = 100 |
(Integer) This option specifies the limit on the maximum number of objects to return in a single result. A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests. |
pbm_default_policy = None |
(String) This option specifies the default policy to be used. If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used. Possible values:
Related options:
|
pbm_enabled = False |
(Boolean) This option enables or disables storage policy based placement of instances. Related options:
|
pbm_wsdl_location = None |
(String) This option specifies the PBM service WSDL file location URL. Setting this will disable storage policy based placement of instances. Possible values:
|
serial_port_proxy_uri = None |
(String) Identifies a proxy service that provides network access to the serial_port_service_uri. Possible values:
Related options: This option is ignored if serial_port_service_uri is not specified.
|
serial_port_service_uri = None |
(String) Identifies the remote system where the serial port traffic will be sent. This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs. Possible values:
|
task_poll_interval = 0.5 |
(Floating point) Time interval in seconds to poll remote tasks invoked on VMware VC server. |
use_linked_clone = True |
(Boolean) This option enables/disables the use of linked clone. The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the OpenStack Image service. If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates a copy of the virtual machine that shares virtual disks with its parent VM. |
wsdl_location = None |
(String) This option specifies the VIM Service WSDL location. If vSphere API version 5.1 or later is being used, this option can be ignored. If the version is less than 5.1, WSDL files must be hosted locally and their location must be specified in this option. Optional override to the default location for bug workarounds. Possible values:
|
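A minimal [vmware] connection block drawn from the options above. All values are placeholders for your own vCenter details; verifying the server certificate via ca_file (with insecure left at False) is the related-options pairing the table describes.

```ini
[vmware]
# vCenter connection details; all values are placeholders
host_ip = VCENTER_IP
host_username = VCENTER_USER
host_password = VCENTER_PASSWORD
cluster_name = CLUSTER_1
# Only use datastores whose names start with "nas"
datastore_regex = nas.*
# Verify the vCenter server certificate against this CA bundle
ca_file = /etc/nova/vcenter-ca.pem
insecure = False
```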
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
daemon = False |
(Boolean) Run as a background process. |
key = None |
(String) SSL key file (if separate from cert). |
record = None |
(String) Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done. |
source_is_ipv6 = False |
(Boolean) Set to True if source host is addressed with IPv6. |
ssl_only = False |
(Boolean) Disallow non-encrypted connections. |
web = /usr/share/spice-html5 |
(String) Path to directory with content which will be served by a web server. |
[vmware] | |
vnc_port = 5900 |
(Port number) This option specifies the VNC starting port. Every VM created by an ESX host has the option of enabling a VNC client for remote connection. This option sets the default starting port for the VNC client. Possible values:
Related options: The below options should be set to enable the VNC client.
|
vnc_port_total = 10000 |
(Integer) Total number of VNC ports. |
[vnc] | |
enabled = True |
(Boolean) Enable VNC related features. Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest. |
keymap = en-us |
(String) Keymap for VNC. The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default. Possible values:
|
novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html |
(URI) Public address of noVNC VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions. Related options:
|
novncproxy_host = 0.0.0.0 |
(String) IP address that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private address to which the noVNC console proxy service should bind to. Related options:
|
novncproxy_port = 6080 |
(Port number) Port that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private port to which the noVNC console proxy service should bind to. Related options:
|
vncserver_listen = 127.0.0.1 |
(String) The IP address or hostname on which an instance should listen to for incoming VNC connection requests on this node. |
vncserver_proxyclient_address = 127.0.0.1 |
(String) Private, internal IP address or hostname of VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. This option sets the private address to which proxy clients, such as nova-xvpvncproxy, should connect. |
xvpvncproxy_base_url = http://127.0.0.1:6081/console |
(URI) Public URL address of XVP VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based. This option sets the public base URL to which client systems will connect. XVP clients can use this address to connect to the XVP instance and, by extension, the VNC sessions. Related options:
|
xvpvncproxy_host = 0.0.0.0 |
(String) IP address or hostname that the XVP VNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based. This option sets the private address to which the XVP VNC console proxy service should bind to. Related options:
|
xvpvncproxy_port = 6081 |
(Port number) Port that the XVP VNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. Xen provides the Xenserver VNC Proxy, or XVP, as an alternative to the websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, XVP clients are Java-based. This option sets the private port to which the XVP VNC console proxy service should bind to. Related options:
|
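A typical [vnc] section built from the options above. CONTROLLER_IP and COMPUTE_IP are placeholders; listening on 0.0.0.0 for vncserver_listen is an illustrative choice for multi-node setups, not the stated default.

```ini
[vnc]
# Enable VNC console features (boolean value)
enabled = True
# Public URL clients use for the noVNC proxy; CONTROLLER_IP is a placeholder
novncproxy_base_url = http://CONTROLLER_IP:6080/vnc_auto.html
# Compute node: the address instances listen on, and the address
# the console proxy connects back to
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = COMPUTE_IP
```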
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
block_device_allocate_retries = 60 |
(Integer) Number of times to retry block device allocation on failures. Starting with Liberty, Cinder can use image volume cache. This may help with block device allocation performance. Look at the cinder image_volume_cache_enabled configuration option. Possible values:
|
block_device_allocate_retries_interval = 3 |
(Integer) Waiting time interval (seconds) between block device allocation retries on failures |
my_block_storage_ip = $my_ip |
(String) The IP address which is used to connect to the block storage network. Possible values:
Related options:
|
volume_usage_poll_interval = 0 |
(Integer) Interval in seconds for gathering volume usages |
[cinder] | |
cafile = None |
(String) PEM encoded Certificate Authority to use when verifying HTTPs connections. |
catalog_info = volumev2:cinderv2:publicURL |
(String) Info to match when looking for cinder in the service catalog. Possible values:
Related options:
|
certfile = None |
(String) PEM encoded client certificate cert file |
cross_az_attach = True |
(Boolean) Allow attach between instance and volume in different availability zones. If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not “volume” because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance. If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach. |
endpoint_template = None |
(String) If this option is set then it will override service catalog lookup with this template for cinder endpoint Possible values:
Related options:
|
http_retries = 3 |
(Integer) Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4. Possible values:
|
insecure = False |
(Boolean) Verify HTTPS connections. |
keyfile = None |
(String) PEM encoded client certificate key file |
os_region_name = None |
(String) Region name of this node. This is used when picking the URL in the service catalog. Possible values:
|
timeout = None |
(Integer) Timeout value for http requests |
[hyperv] | |
force_volumeutils_v1 = False |
(Boolean) DEPRECATED: Force V1 volume utility class |
volume_attach_retry_count = 10 |
(Integer) Volume attach retry count. The number of times to retry attaching a volume. This option is used to avoid an incorrect ‘no data’ response when the system is under load. Volume attachment is retried until it succeeds or the given retry count is reached. To prepare the Hyper-V node to be able to attach to volumes provided by cinder, you must first make sure the Windows iSCSI initiator service is running and started automatically. Possible values:
Related options:
|
volume_attach_retry_interval = 5 |
(Integer) Volume attach retry interval. Interval between volume attachment attempts, in seconds. Possible values:
Related options:
|
[libvirt] | |
glusterfs_mount_point_base = $state_path/mnt |
(String) Directory where the glusterfs volume is mounted on the compute node |
nfs_mount_options = None |
(String) Mount options passed to the NFS client. See the nfs man page for details |
nfs_mount_point_base = $state_path/mnt |
(String) Directory where the NFS volume is mounted on the compute node |
num_aoe_discover_tries = 3 |
(Integer) Number of times to rediscover AoE target to find volume |
num_iscsi_scan_tries = 5 |
(Integer) Number of times to rescan iSCSI target to find volume |
num_iser_scan_tries = 5 |
(Integer) Number of times to rescan iSER target to find volume |
qemu_allowed_storage_drivers = |
(List) Protocols listed here will be accessed directly from QEMU. Currently supported protocols: [gluster] |
rbd_secret_uuid = None |
(String) The libvirt UUID of the secret for the rbd_user volumes |
rbd_user = None |
(String) The RADOS client name for accessing rbd volumes |
scality_sofs_config = None |
(String) Path or URL to Scality SOFS configuration file |
scality_sofs_mount_point = $state_path/scality |
(String) Base dir where Scality SOFS shall be mounted |
smbfs_mount_options = |
(String) Mount options passed to the SMBFS client. See mount.cifs man page for details. Note that the libvirt-qemu uid and gid must be specified. |
smbfs_mount_point_base = $state_path/mnt |
(String) Directory where the SMBFS shares are mounted on the compute node |
[xenserver] | |
block_device_creation_timeout = 10 |
(Integer) Time in secs to wait for a block device to be created |
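Drawing on the [cinder] and [libvirt] volume options above, a block-storage configuration might be sketched as follows. RegionOne and the NFS mount option vers=3 are illustrative values, not documented defaults.

```ini
[cinder]
# How cinder is located in the service catalog (default shown)
catalog_info = volumev2:cinderv2:publicURL
# Region used when picking the endpoint URL; RegionOne is illustrative
os_region_name = RegionOne
# Retry failed cinderclient HTTP calls this many times
http_retries = 3
# Allow cross-availability-zone volume attach (default)
cross_az_attach = True

[libvirt]
# NFS-backed volumes: mount point base and client options
nfs_mount_point_base = $state_path/mnt
# vers=3 is an example; see the nfs man page for valid options
nfs_mount_options = vers=3
```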
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dmz_cidr = |
(List) This option is a list of zero or more IP address ranges in your network’s DMZ that should be accepted. Possible values:
|
vpn_ip = $my_ip |
(String) This is the public IP address for the cloudpipe VPN servers. It defaults to the IP address of the host. Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’. Possible values:
Related options:
|
vpn_start = 1000 |
(Port number) This is the port number to use as the first VPN port for private networks. Please note that this option is only used when using nova-network instead of Neutron in your deployment. It also will be ignored if the configuration option for network_manager is not set to the default of ‘nova.network.manager.VlanManager’, or if you specify a value for the ‘vpn_start’ parameter when creating a network. Possible values:
Related options:
|
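For nova-network deployments using VlanManager, the cloudpipe VPN options above sit in [DEFAULT] as shown. The values are the documented defaults; the snippet is a sketch only.

```ini
[DEFAULT]
# Public IP of the cloudpipe VPN servers; defaults to the host IP
vpn_ip = $my_ip
# First VPN port for private networks (nova-network VlanManager only)
vpn_start = 1000
```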
Configuration option = Default value | Description |
---|---|
[wsgi] | |
api_paste_config = api-paste.ini |
(String) This option represents a file name for the paste.deploy config for nova-api. Possible values: * A string representing file name for the paste.deploy config. |
client_socket_timeout = 900 |
(Integer) This option specifies the timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates the timeout on individual reads/writes on the socket connection. To wait forever, set to 0. |
default_pool_size = 1000 |
(Integer) This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option. |
keep_alive = True |
(Boolean) This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse. Possible values:
Related options:
|
max_header_line = 16384 |
(Integer) This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). Since TCP is a stream-based protocol, in order to reuse a connection, HTTP has to have a way to indicate the end of the previous response and the beginning of the next. Hence, in a keep_alive case, all messages must have a self-defined message length. |
secure_proxy_ssl_header = None |
(String) This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy. Possible values:
|
ssl_ca_file = None |
(String) This option allows setting path to the CA certificate file that should be used to verify connecting clients. Possible values:
Related options:
|
ssl_cert_file = None |
(String) This option allows setting path to the SSL certificate of API server. Possible values:
Related options:
|
ssl_key_file = None |
(String) This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect. Possible values:
Related options:
|
tcp_keepidle = 600 |
(Integer) This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X. Related options:
|
wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f |
(String) It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. This option is used for building custom request loglines. Possible values:
|
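An illustrative [wsgi] section combining the options above. The TLS file paths are placeholders; enabling ssl_cert_file and ssl_key_file together is the related-options pairing the table implies, and the remaining values are the documented defaults.

```ini
[wsgi]
api_paste_config = api-paste.ini
# Close idle client connections after this many seconds (0 = wait forever)
client_socket_timeout = 900
# Reuse TCP connections for multiple HTTP request/response pairs
keep_alive = True
# Raise when large Keystone v3 tokens overflow the header line limit
max_header_line = 16384
# TLS material for the API server (paths are placeholders)
ssl_cert_file = /etc/nova/ssl/nova-api.crt
ssl_key_file = /etc/nova/ssl/nova-api.key
```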
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
console_driver = nova.console.xvp.XVPConsoleProxy |
(String) The nova-console proxy is used to set up multi-tenant VM console access. This option allows a pluggable driver program for the console session and specifies the driver to use for the console proxy. Possible values:
|
[libvirt] | |
xen_hvmloader_path = /usr/lib/xen/boot/hvmloader |
(String) Location where the Xen hvmloader is kept |
[xenserver] | |
agent_path = usr/sbin/xe-update-networking |
(String) Path to locate guest agent on the server. Specifies the path in which the XenAPI guest agent should be located. If the agent is present, network configuration is not injected into the image. Related options: For this option to have an effect: * |
agent_resetnetwork_timeout = 60 |
(Integer) Number of seconds to wait for agent’s reply to resetnetwork request. This indicates the amount of time xapi ‘agent’ plugin waits for the agent to respond to the ‘resetnetwork’ request specifically. The generic timeout for agent communication is controlled by the agent_timeout option. |
agent_timeout = 30 |
(Integer) Number of seconds to wait for agent’s reply to a request. Nova configures/performs certain administrative actions on a server with the help of an agent that’s installed on the server. The communication between Nova and the agent is achieved via sharing messages, called records, over xenstore, a shared storage across all the domains on a XenServer host. Operations performed by the agent on behalf of nova are: ‘version’, ‘key_init’, ‘password’, ‘resetnetwork’, ‘inject_file’, and ‘agentupdate’. To perform one of the above operations, the xapi ‘agent’ plugin writes the command and its associated parameters to a certain location known to the domain and awaits a response. On being notified of the message, the agent performs appropriate actions on the server and writes the result back to xenstore. This result is then read by the xapi ‘agent’ plugin to determine the success/failure of the operation. This config option determines how long the xapi ‘agent’ plugin shall wait to read the response off of xenstore for a given request/command. If the agent on the instance fails to write the result in this time period, the operation is considered to have timed out. Related options: * |
agent_version_timeout = 300 |
(Integer) Number of seconds to wait for agent’s reply to version request. This indicates the amount of time xapi ‘agent’ plugin waits for the agent to respond to the ‘version’ request specifically. The generic timeout for agent communication is controlled by the agent_timeout option. During the build process the ‘version’ request is used to determine if the agent is available/operational to perform other requests such as ‘resetnetwork’, ‘password’, ‘key_init’ and ‘inject_file’. If the ‘version’ call fails, the other configuration is skipped. So, this configuration option can also be interpreted as the time in which the agent is expected to be fully operational. |
cache_images = all |
(String) Cache glance images locally. The value for this option must be chosen from the choices listed here. Configuring a value other than these will default to ‘all’. Note: There is nothing that deletes these images. Possible values:
|
check_host = True |
(Boolean) Ensure compute service is running on host XenAPI connects to. This option must be set to false if the ‘independent_compute’ option is set to true. Possible values:
Related options:
|
connection_concurrent = 5 |
(Integer) Maximum number of concurrent XenAPI connections. Used only if compute_driver=xenapi.XenAPIDriver |
connection_password = None |
(String) Password for connection to XenServer/Xen Cloud Platform |
connection_url = None |
(String) URL for connection to XenServer/Xen Cloud Platform. A special value of unix://local can be used to connect to the local unix socket. Possible values:
|
connection_username = root |
(String) Username for connection to XenServer/Xen Cloud Platform |
default_os_type = linux |
(String) Default OS type used when uploading an image to glance |
disable_agent = False |
(Boolean) Disables the use of the XenAPI agent. This configuration option determines whether the agent should be used or not, regardless of what image properties are present. Image properties have an effect only when this is set to Related options: * |
image_compression_level = None |
(Integer) Compression level for images. By setting this option we can configure the gzip compression level. This option sets GZIP environment variable before spawning tar -cz to force the compression level. It defaults to none, which means the GZIP environment variable is not set and the default (usually -6) is used. Possible values:
|
image_upload_handler = nova.virt.xenapi.image.glance.GlanceStore |
(String) Dom0 plugin driver used to handle image uploads. |
independent_compute = False |
(Boolean) Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host. Related options:
|
introduce_vdi_retry_wait = 20 |
(Integer) Number of seconds to wait for the SR to settle if the VDI does not exist when first introduced. Some SRs, particularly iSCSI connections, are slow to see the VDIs right after they are introduced. Setting this option to a time interval makes the SR wait for that period before raising a VDI-not-found exception.
ipxe_boot_menu_url = None |
(String) URL to the iPXE boot menu. An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image. By default this option is not set. Enable this option to boot an iPXE ISO. Related Options:
|
ipxe_mkisofs_cmd = mkisofs |
(String) Name and optionally path of the tool used for ISO image creation. An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image. Note: By default mkisofs is not present in the Dom0, so the package can either be manually added to Dom0 or include the mkisofs binary in the image itself. Related Options:
|
ipxe_network_name = None |
(String) Name of network to use for booting iPXE ISOs. An iPXE ISO is a specially crafted ISO which supports iPXE booting. This feature gives a means to roll your own image. By default this option is not set. Enable this option to boot an iPXE ISO. Related Options:
|
login_timeout = 10 |
(Integer) Timeout in seconds for XenAPI login. |
max_kernel_ramdisk_size = 16777216 |
(Integer) Maximum size in bytes of kernel or ramdisk images. Specifying the maximum size of kernel or ramdisk avoids copying large files to dom0 and filling up /boot/guest. |
num_vbd_unplug_retries = 10 |
(Integer) Maximum number of retries to unplug VBD. If set to 0, should try once, no retries. |
ovs_integration_bridge = xapi1 |
(String) The name of the integration bridge that is used with xenapi when connecting with Open vSwitch. Note: The value of this config option is dependent on the environment, therefore this configuration value must be set accordingly if you are using XenAPI. Possible values:
|
remap_vbd_dev = False |
(Boolean) Used to enable the remapping of VBD dev. (Works around an issue in Ubuntu Maverick) |
remap_vbd_dev_prefix = sd |
(String) Specify prefix to remap VBD dev to (ex. /dev/xvdb -> /dev/sdb). Related options:
|
running_timeout = 60 |
(Integer) Number of seconds to wait for instance to go to running state |
sparse_copy = True |
(Boolean) Whether to use sparse_copy for copying data on a resize down. (False will use standard dd). This speeds up resizes down considerably since large runs of zeros won’t have to be rsynced. |
sr_base_path = /var/run/sr-mount |
(String) Base path to the storage repository on the XenServer host. |
sr_matching_filter = default-sr:true |
(String) Filter for finding the SR to be used to install guest instances on. Possible values:
|
target_host = None |
(String) The iSCSI Target Host. This option represents the hostname or ip of the iSCSI Target. If the target host is not present in the connection information from the volume provider then the value from this option is taken. Possible values:
|
target_port = 3260 |
(String) The iSCSI Target Port. This option represents the port of the iSCSI Target. If the target port is not present in the connection information from the volume provider then the value from this option is taken. |
torrent_base_url = None |
(String) Base URL for torrent files; must contain a slash character (see RFC 1808, step 6) |
torrent_download_stall_cutoff = 600 |
(Integer) Number of seconds a download can remain at the same progress percentage without being considered a stall |
torrent_images = none |
(String) Whether or not to download images via Bit Torrent. The value for this option must be chosen from the choices listed here. Configuring a value other than these will default to ‘none’. Possible values:
|
torrent_listen_port_end = 6891 |
(Port number) End of port range to listen on |
torrent_listen_port_start = 6881 |
(Port number) Beginning of port range to listen on |
torrent_max_last_accessed = 86400 |
(Integer) Cached torrent files not accessed within this number of seconds can be reaped |
torrent_max_seeder_processes_per_host = 1 |
(Integer) Maximum number of seeder processes to run concurrently within a given dom0. (-1 = no limit) |
torrent_seed_chance = 1.0 |
(Floating point) Probability that peer will become a seeder. (1.0 = 100%) |
torrent_seed_duration = 3600 |
(Integer) Number of seconds after downloading an image via BitTorrent that it should be seeded for other peers. |
use_agent_default = False |
(Boolean) Whether or not to use the agent by default when its usage is enabled but not indicated by the image. The use of XenAPI agent can be disabled altogether using the configuration option Note that if this configuration is set to Related options: * |
use_join_force = True |
(Boolean) When adding a new host to a pool, this will append a --force flag to the command, forcing hosts to join a pool even if they have different CPUs. Since XenServer version 5.6 it is possible to create a pool of hosts that have different CPU capabilities. To accommodate CPU differences, XenServer limited the features it uses to determine CPU compatibility to only the ones that are exposed by the CPU, and added support for CPU masking. Despite this effort to level differences between CPUs, it is still possible that adding a new host will fail, hence the option to force the join. |
vhd_coalesce_max_attempts = 20 |
(Integer) Max number of times to poll for VHD to coalesce. This option determines the maximum number of attempts that can be made for coalescing the VHD before giving up. Related options:
|
vhd_coalesce_poll_interval = 5.0 |
(Floating point) The interval used for polling of coalescing vhds. This is the interval after which the task of coalesce VHD is performed, until it reaches the max attempts that is set by vhd_coalesce_max_attempts. Related options:
|
vif_driver = nova.virt.xenapi.vif.XenAPIBridgeDriver |
(String) The XenAPI VIF driver using XenServer Network APIs. |
[xvp] | |
console_xvp_conf = /etc/xvp.conf |
(String) Generated XVP conf file |
console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template |
(String) XVP conf template |
console_xvp_log = /var/log/xvp.log |
(String) XVP log file |
console_xvp_multiplex_port = 5900 |
(Port number) Port for XVP to multiplex VNC connections on |
console_xvp_pid = /var/run/xvp.pid |
(String) XVP master process pid file |
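As a sketch, a minimal [xenserver] block wiring together the connection and agent options described above might look like the following; the host name and password are placeholders for your environment:

```ini
[xenserver]
# Placeholder URL and credentials for the XenServer host.
connection_url = https://xenserver1.example.com
connection_username = root
connection_password = XENSERVER_PASSWORD
# Give the in-guest agent a little longer than the 30-second default.
agent_timeout = 60
# Cache glance images on the host to speed up repeated boots.
cache_images = all
```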
Option = default value | (Type) Help string |
---|---|
[DEFAULT] pointer_model = usbtablet |
(String) Generic property to specify the pointer type. |
[DEFAULT] sync_power_state_pool_size = 1000 |
(Integer) Number of greenthreads available for use to sync power states. |
[DEFAULT] vendordata_dynamic_connect_timeout = 5 |
(Integer) Maximum wait time for an external REST service to connect. |
[DEFAULT] vendordata_dynamic_read_timeout = 5 |
(Integer) Maximum wait time for an external REST service to return data once connected. |
[DEFAULT] vendordata_dynamic_ssl_certfile = |
(String) Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against. |
[DEFAULT] vendordata_dynamic_targets = |
(List) A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url>. |
[DEFAULT] vendordata_providers = |
(List) A list of vendordata providers. |
[barbican] auth_endpoint = http://localhost:5000/v3 |
(String) Use this endpoint to connect to Keystone |
[barbican] barbican_api_version = None |
(String) Version of the Barbican API, for example: “v1” |
[barbican] barbican_endpoint = None |
(String) Use this endpoint to connect to Barbican, for example: “http://localhost:9311/“ |
[barbican] number_of_retries = 60 |
(Integer) Number of times to retry poll for key creation completion |
[barbican] retry_delay = 1 |
(Integer) Number of seconds to wait before retrying poll for key creation completion |
[cloudpipe] boot_script_template = $pybasedir/nova/cloudpipe/bootscript.template |
(String) Template for cloudpipe instance boot script. |
[cloudpipe] dmz_mask = 255.255.255.0 |
(Unknown) Netmask to push into OpenVPN config. |
[cloudpipe] dmz_net = 10.0.0.0 |
(Unknown) Network to push into OpenVPN config. |
[cloudpipe] vpn_flavor = m1.tiny |
(String) Flavor for VPN instances. |
[cloudpipe] vpn_image_id = 0 |
(String) Image ID used when starting up a cloudpipe VPN client. |
[cloudpipe] vpn_key_suffix = -vpn |
(String) Suffix to add to project name for VPN key and secgroups |
[crypto] ca_file = cacert.pem |
(String) Filename of root CA (Certificate Authority). This is a container format and includes root certificates. |
[crypto] ca_path = $state_path/CA |
(String) Directory path where root CA is located. |
[crypto] crl_file = crl.pem |
(String) Filename of root Certificate Revocation List (CRL). This is a list of certificates that have been revoked, and therefore, entities presenting those (revoked) certificates should no longer be trusted. |
[crypto] key_file = private/cakey.pem |
(String) Filename of a private key. |
[crypto] keys_path = $state_path/keys |
(String) Directory path where keys are located. |
[crypto] project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s |
(String) Subject for certificate for projects, %s for project, timestamp |
[crypto] use_project_ca = False |
(Boolean) Option to enable/disable use of CA for each project. |
[crypto] user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s |
(String) Subject for certificate for users, %s for project, user, timestamp |
[glance] debug = False |
(Boolean) Enable or disable debug logging with glanceclient. |
[glance] use_glance_v1 = False |
(Boolean) DEPRECATED: This flag allows reverting to glance v1 if for some reason glance v2 doesn’t work in your environment. This will only exist in Newton, and a fully working Glance v2 will be a hard requirement in Ocata. |
[hyperv] enable_remotefx = False |
(Boolean) Enable RemoteFX feature |
[ironic] auth_section = None |
(Unknown) Config Section from which to load plugin specific options |
[ironic] auth_type = None |
(Unknown) Authentication type to load |
[ironic] certfile = None |
(String) PEM encoded client certificate cert file |
[ironic] insecure = False |
(Boolean) Verify HTTPS connections. |
[ironic] keyfile = None |
(String) PEM encoded client certificate key file |
[ironic] timeout = None |
(Integer) Timeout value for http requests |
[key_manager] api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager |
(String) The full class name of the key manager API class |
[key_manager] fixed_key = None |
(String) Fixed key returned by key manager, specified in hex. |
[libvirt] enabled_perf_events = |
(List) This is a performance event list which could be used as monitor. These events will be passed to the libvirt domain XML while creating new instances. |
[libvirt] vzstorage_cache_path = None |
(String) Path to the SSD cache file. |
[libvirt] vzstorage_log_path = /var/log/pstorage/%(cluster_name)s/nova.log.gz |
(String) Path to vzstorage client log. |
[libvirt] vzstorage_mount_group = qemu |
(String) Mount owner group name. |
[libvirt] vzstorage_mount_opts = |
(List) Extra mount options for pstorage-mount |
[libvirt] vzstorage_mount_perms = 0770 |
(String) Mount access mode. |
[libvirt] vzstorage_mount_point_base = $state_path/mnt |
(String) Directory where the Virtuozzo Storage clusters are mounted on the compute node. |
[libvirt] vzstorage_mount_user = stack |
(String) Mount owner user name. |
[os_vif_linux_bridge] flat_interface = None |
(String) FlatDhcp will bridge into this interface if set |
[os_vif_linux_bridge] forward_bridge_interface = ['all'] |
(Multi-valued) An interface that bridges can forward to. If this is set to all then all traffic will be forwarded. Can be specified multiple times. |
[os_vif_linux_bridge] iptables_bottom_regex = |
(String) Regular expression to match the iptables rule that should always be on the bottom. |
[os_vif_linux_bridge] iptables_drop_action = DROP |
(String) The table that iptables jumps to when a packet is to be dropped. |
[os_vif_linux_bridge] iptables_top_regex = |
(String) Regular expression to match the iptables rule that should always be on the top. |
[os_vif_linux_bridge] network_device_mtu = 1500 |
(Integer) MTU setting for network interface. |
[os_vif_linux_bridge] use_ipv6 = False |
(Boolean) Use IPv6 |
[os_vif_linux_bridge] vlan_interface = None |
(String) VLANs will bridge into this interface if set |
[os_vif_ovs] network_device_mtu = 1500 |
(Integer) MTU setting for network interface. |
[os_vif_ovs] ovs_vsctl_timeout = 120 |
(Integer) Amount of time, in seconds, that ovs_vsctl should wait for a response from the database. 0 is to wait forever. |
[remote_debug] host = None |
(String) Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host. |
[remote_debug] port = None |
(Port number) Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on different host. |
[vif_plug_linux_bridge_privileged] capabilities = [] |
(Unknown) List of Linux capabilities retained by the privsep daemon. |
[vif_plug_linux_bridge_privileged] group = None |
(String) Group that the privsep daemon should run as. |
[vif_plug_linux_bridge_privileged] helper_command = None |
(String) Command to invoke to start the privsep daemon if not using the “fork” method. |
[vif_plug_linux_bridge_privileged] user = None |
(String) User that the privsep daemon should run as. |
[vif_plug_ovs_privileged] capabilities = [] |
(Unknown) List of Linux capabilities retained by the privsep daemon. |
[vif_plug_ovs_privileged] group = None |
(String) Group that the privsep daemon should run as. |
[vif_plug_ovs_privileged] helper_command = None |
(String) Command to invoke to start the privsep daemon if not using the “fork” method. |
[vif_plug_ovs_privileged] user = None |
(String) User that the privsep daemon should run as. |
[wsgi] api_paste_config = api-paste.ini |
(String) This option represents a file name for the paste.deploy config for nova-api. |
[wsgi] client_socket_timeout = 900 |
(Integer) This option specifies the timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0. |
[wsgi] default_pool_size = 1000 |
(Integer) This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option. |
[wsgi] keep_alive = True |
(Boolean) This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse. |
[wsgi] max_header_line = 16384 |
(Integer) This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). |
[wsgi] secure_proxy_ssl_header = None |
(String) This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by an SSL terminating proxy. |
[wsgi] ssl_ca_file = None |
(String) This option allows setting path to the CA certificate file that should be used to verify connecting clients. |
[wsgi] ssl_cert_file = None |
(String) This option allows setting path to the SSL certificate of API server. |
[wsgi] ssl_key_file = None |
(String) This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect. |
[wsgi] tcp_keepidle = 600 |
(Integer) This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X. |
[wsgi] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f |
(String) It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. |
[xenserver] independent_compute = False |
(Boolean) Used to prevent attempts to attach VBDs locally, so Nova can be run in a VM on a different host. |
[xvp] console_xvp_conf = /etc/xvp.conf |
(String) Generated XVP conf file |
[xvp] console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template |
(String) XVP conf template |
[xvp] console_xvp_log = /var/log/xvp.log |
(String) XVP log file |
[xvp] console_xvp_multiplex_port = 5900 |
(Port number) Port for XVP to multiplex VNC connections on |
[xvp] console_xvp_pid = /var/run/xvp.pid |
(String) XVP master process pid file |
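For instance, pointing the key manager at a Barbican endpoint and enabling remote debugging of a service could be sketched as follows; the endpoint URLs and the debugger host and port are placeholders:

```ini
[key_manager]
api_class = castellan.key_manager.barbican_key_manager.BarbicanKeyManager

[barbican]
# Placeholder endpoints; substitute your Keystone and Barbican URLs.
auth_endpoint = http://controller:5000/v3
barbican_endpoint = http://controller:9311/
barbican_api_version = v1

[remote_debug]
# Host and port where your debugger is listening (placeholders).
host = 192.0.2.10
port = 5678
```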
Option | Previous default value | New default value |
---|---|---|
[ironic] api_endpoint |
None |
http://ironic.example.org:6385/ |
[neutron] region_name |
None |
RegionOne |
Deprecated option | New Option |
---|---|
[DEFAULT] cert_manager |
None |
[DEFAULT] cert_topic |
None |
[DEFAULT] compute_available_monitors |
None |
[DEFAULT] compute_manager |
None |
[DEFAULT] compute_stats_class |
None |
[DEFAULT] console_manager |
None |
[DEFAULT] consoleauth_manager |
None |
[DEFAULT] default_flavor |
None |
[DEFAULT] driver |
None |
[DEFAULT] enable_network_quota |
None |
[DEFAULT] fatal_exception_format_errors |
None |
[DEFAULT] image_decryption_dir |
None |
[DEFAULT] manager |
None |
[DEFAULT] metadata_manager |
None |
[DEFAULT] quota_driver |
None |
[DEFAULT] quota_networks |
None |
[DEFAULT] s3_access_key |
None |
[DEFAULT] s3_affix_tenant |
None |
[DEFAULT] s3_host |
None |
[DEFAULT] s3_port |
None |
[DEFAULT] s3_secret_key |
None |
[DEFAULT] s3_use_ssl |
None |
[DEFAULT] scheduler_manager |
None |
[DEFAULT] secure_proxy_ssl_header |
None |
[DEFAULT] share_dhcp_address |
None |
[DEFAULT] snapshot_name_template |
None |
[DEFAULT] use_local |
None |
[DEFAULT] vendordata_driver |
None |
[barbican] catalog_info |
None |
[barbican] endpoint_template |
None |
[barbican] os_region_name |
None |
[glance] admin_password |
None |
[glance] filesystems |
None |
[glance] use_glance_v1 |
None |
[hyperv] force_volumeutils_v1 |
None |
[ironic] admin_tenant_name |
None |
[ironic] admin_url |
None |
[ironic] admin_username |
None |
[libvirt] checksum_base_images |
None |
[libvirt] checksum_interval_seconds |
None |
[libvirt] image_info_filename_pattern |
None |
[libvirt] use_usb_tablet |
None |
[matchmaker_redis] host |
None |
[matchmaker_redis] password |
None |
[matchmaker_redis] port |
None |
[matchmaker_redis] sentinel_hosts |
None |
[osapi_v21] extensions_blacklist |
None |
[osapi_v21] extensions_whitelist |
None |
[osapi_v21] project_id_regex |
None |
A list of config options based on different topics can be found below:
The nova.conf
configuration file is an
INI file format
as explained in Configuration file format.
You can use a particular configuration file by using the --config-file
(nova.conf
) parameter when you run one of the nova-*
services.
This parameter inserts configuration option definitions from the
specified configuration file name, which might be useful for debugging
or performance tuning.
For a list of configuration options, see the tables in this guide.
To learn more about the nova.conf
configuration file,
review the general purpose configuration options documented in
the table Description of common configuration options.
Important
Do not specify quotes around nova options.
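For example (the option shown is illustrative):

```ini
[DEFAULT]
# Correct: the value is used as-is.
instances_path = /var/lib/nova/instances
# Incorrect: the quotes become part of the value.
# instances_path = "/var/lib/nova/instances"
```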
Configuration options are grouped by section; for example, the [conductor] section configures the
nova-conductor
service.
The Compute API, run by the nova-api
daemon, is the component of
OpenStack Compute that receives and responds to user requests,
whether they be direct API calls, or via the CLI tools or dashboard.
The OpenStack Compute API enables users to specify an administrative password when they create or rebuild a server instance. If the user does not specify a password, a random password is generated and returned in the API response.
In practice, how the admin password is handled depends on the hypervisor in use and might require additional configuration of the instance. For example, you might have to install an agent to handle the password setting. If the hypervisor and instance configuration do not support setting a password at server create time, the password that is returned by the create API call is misleading because it was ignored.
To prevent this confusion, use the enable_instance_password
configuration option to disable the return of the admin password
for installations that do not support setting instance passwords.
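For such installations, the option can be disabled in nova.conf, for example:

```ini
[DEFAULT]
# Do not return a generated admin password from the create/rebuild API
# on hypervisors that cannot actually set it.
enable_instance_password = false
```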
The Compute API configuration options are documented in the tables below.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
enable_new_services = True |
(Boolean) Enable new services on this host automatically. When a new service (for example “nova-compute”) starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new services in a disabled state and then enable them at a later point in time. This option sets this behavior for all services on a host. Possible values:
|
enabled_apis = osapi_compute, metadata |
(List) A list of APIs to enable by default |
enabled_ssl_apis = |
(List) A list of APIs with enabled SSL |
instance_name_template = instance-%08x |
(String) Template string to be used to generate instance names. This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like Possible values:
Related options:
|
multi_instance_display_name_template = %(name)s-%(count)d |
(String) When creating multiple instances with a single request using the os-multiple-create API extension, this template will be used to build the display name for each instance. The benefit is that the instances end up with different hostnames. Example display names when creating two VMs: name-1, name-2. Possible values:
|
non_inheritable_image_properties = cache_in_nova, bittorrent |
(List) Image properties that should not be inherited from the instance when taking a snapshot. This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots. Possible values:
|
null_kernel = nokernel |
(String) This option is used to decide when an image should have no external ramdisk or kernel. By default this is set to ‘nokernel’, so when an image is booted with the property ‘kernel_id’ with the value ‘nokernel’, Nova assumes the image doesn’t require an external kernel and ramdisk. |
osapi_compute_link_prefix = None |
(String) This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged. Possible values:
|
osapi_compute_listen = 0.0.0.0 |
(String) The IP address on which the OpenStack API will listen. |
osapi_compute_listen_port = 8774 |
(Port number) The port on which the OpenStack API will listen. |
osapi_compute_workers = None |
(Integer) Number of workers for OpenStack API service. The default will be the number of CPUs available. |
osapi_hide_server_address_states = building |
(List) This option is a list of all instance states for which network address information should not be returned from the API. Possible values:
|
servicegroup_driver = db |
(String) This option specifies the driver to be used for the servicegroup service. ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver. Possible Values:
Related Options:
|
snapshot_name_template = snapshot-%s |
(String) DEPRECATED: Template string to be used to generate snapshot names. This is not used anymore and will be removed in the O release. |
use_forwarded_for = False |
(Boolean) When True, the ‘X-Forwarded-For’ header is treated as the canonical remote address. When False (the default), the ‘remote_address’ header is used. You should only enable this if you have an HTML sanitizing proxy. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[oslo_versionedobjects] | |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal |
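As a sketch, the naming-template options in the [DEFAULT] table above could be set like this; the exact templates are a site choice:

```ini
[DEFAULT]
# Database instance names: instance-00000001, instance-00000002, ...
instance_name_template = instance-%08x
# Display names for a multi-create request: web-1, web-2, ...
multi_instance_display_name_template = %(name)s-%(count)d
```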
Configuration option = Default value | Description |
---|---|
[osapi_v21] | |
extensions_blacklist = |
(List) DEPRECATED: This option is a list of all of the v2.1 API extensions to never load. However, it will be removed in the near future, after which all the functionality that was previously in extensions will be part of the standard API, and thus always accessible. Possible values:
Related options:
|
extensions_whitelist = |
(List) DEPRECATED: This is a list of extensions. If it is empty, then all extensions except those specified in the extensions_blacklist option will be loaded. If it is not empty, then only those extensions in this list will be loaded, provided that they are also not in the extensions_blacklist option. This deprecated option will be removed in the near future, after which all the functionality that was previously in extensions will be part of the standard API, and thus always accessible. Possible values:
Related options:
|
project_id_regex = None |
(String) DEPRECATED: This option is a string representing a regular expression (regex) that matches the project_id as contained in URLs. If not set, it will match normal UUIDs created by keystone. Possible values:
|
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cert = self.pem |
(String) Path to SSL certificate file. |
cert_manager = nova.cert.manager.CertManager |
(String) DEPRECATED: Full class name for the Manager for cert |
cert_topic = cert |
(String) DEPRECATED: Determines the RPC topic that the cert nodes listen on. For most deployments there is no need to ever change it. Since the nova-cert service is marked for deprecation, the feature to change the RPC topic that cert nodes listen on may be removed as early as the 15.0.0 release. |
Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale according to user needs. For this feature to work properly, you might need to configure some underlying virt layers.
Resize on KVM is currently implemented by transferring the images between compute nodes over SSH. For KVM, you need hostnames to resolve properly and passwordless SSH access between your compute hosts. Direct access from one compute host to another is needed to copy the VM files across.
Cloud end users can find out how to resize a server by reading the OpenStack End User Guide.
To get resize to work with XenServer (and XCP), you need to establish a root trust between all hypervisor nodes and provide an /images mount point to your hypervisor's dom0.
You can configure OpenStack Compute to use any SQLAlchemy-compatible database. The database name is nova. The nova-conductor service is the only service that writes to the database. The other Compute services access the database through the nova-conductor service.
To ensure that the database schema is current, run the following command:
# nova-manage db sync
If nova-conductor is not used, entries to the database are mostly written by the nova-scheduler service, although all services must be able to update entries in the database.
In either case, use the configuration option settings documented in Database configurations to configure the connection string for the nova database.
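As an illustration, the connection string is set in the [database] section of nova.conf, following the same pattern shown for keystone earlier in this guide; the host name and password below are placeholder values, not defaults:

```ini
[database]
# SQLAlchemy connection string for the nova database.
# NOVA_DBPASS and "controller" are example values for your deployment.
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```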
Fibre Channel support in OpenStack Compute means remote block storage attached to compute nodes for VMs.
Fibre Channel is supported only by the KVM hypervisor.
Compute and Block Storage support Fibre Channel automatic zoning on Brocade and Cisco switches. On other hardware Fibre Channel arrays must be pre-zoned or directly attached to the KVM hosts.
You must install these packages on the KVM host:
- sysfsutils - Nova uses the systool application in this package.
- sg3-utils or sg3_utils - Nova uses the sg_scan and sginfo applications.
Installing the multipath-tools or device-mapper-multipath package is optional.
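On Ubuntu, for example, the packages listed above can be installed as follows (package names vary by distribution; multipath-tools is optional):

```
# apt-get install sysfsutils sg3-utils
# apt-get install multipath-tools
```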
Note
iSCSI interface and offload support have been available only since the Kilo release.
Compute supports open-iscsi iSCSI interfaces for offload cards. Offload hardware must be present and configured on every compute node where offload is desired. Once an open-iscsi interface is configured, the iface name (iface.iscsi_ifacename) should be passed to libvirt via the iscsi_iface parameter for use. All iSCSI sessions will be bound to this iSCSI interface.
Currently supported transports (iface.transport_name) are be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs.
Configuration changes are required on the compute node only.
iSER is supported using the separate iSER LibvirtISERVolumeDriver and will be rejected if used via the iscsi_iface parameter.
Note the distinction between the transport name (iface.transport_name) and iface name (iface.iscsi_ifacename). The actual iface name must be specified via the iscsi_iface parameter to libvirt for offload to work.
The default name for an iSCSI iface (open-iscsi parameter iface.iscsi_ifacename) is in the format transport_name.hwaddress when generated by iscsiadm.
iscsiadm can be used to view and generate current iface configuration.
Every network interface that supports an open-iscsi transport can have one
or more iscsi ifaces associated with it. If no ifaces have been configured
for a network interface supported by an open-iscsi transport,
this command will create a default iface configuration for that
network interface. For example :
# iscsiadm -m iface
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,<empty>,<empty>
The output is in the format: iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname.
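As a quick sketch of how these comma-separated fields line up, the record line can be split with plain shell. The line below is a sample record copied from the example output above, not output from a live system:

```shell
# Sample record line from `iscsiadm -m iface` (static sample, not live output).
line='bnx2i.00:05:b5:d2:a0:c2 bnx2i,00:05:b5:d2:a0:c2,5.10.10.20,<empty>,<empty>'

# The iface name is the text before the first space; the remainder is the
# comma-separated field list in the documented order.
iface_name=${line%% *}
rest=${line#* }
IFS=, read -r transport_name hwaddress ipaddress net_ifacename initiatorname <<EOF
$rest
EOF

echo "iface_name=$iface_name"
echo "transport_name=$transport_name hwaddress=$hwaddress ipaddress=$ipaddress"
```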
Individual iface configuration can be viewed via
# iscsiadm -m iface -I IFACE_NAME
# BEGIN RECORD 2.0-873
iface.iscsi_ifacename = cxgb4i.00:07:43:28:b2:58
iface.net_ifacename = <empty>
iface.ipaddress = 102.50.50.80
iface.hwaddress = 00:07:43:28:b2:58
iface.transport_name = cxgb4i
iface.initiatorname = <empty>
# END RECORD
Configuration can be updated as desired via
# iscsiadm -m iface -I IFACE_NAME --op=update -n iface.SETTING -v VALUE
All iface configurations need a minimum of iface.iscsi_ifacename, iface.transport_name and iface.hwaddress to be correctly configured to work. Some transports may require iface.ipaddress and iface.net_ifacename as well to bind correctly.
Detailed configuration instructions can be found at http://www.open-iscsi.org/docs/README.
The nova-compute service is installed and runs on the same node that runs all of the virtual machines. This node is referred to as the compute node in this guide.
By default, the selected hypervisor is KVM. To change to another hypervisor, change the virt_type option in the [libvirt] section of nova.conf and restart the nova-compute service.
Here are the general nova.conf options that are used to configure the compute node's hypervisor: Description of hypervisor configuration options.
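For example, switching the node from the default KVM hypervisor to QEMU is a one-option change (a sketch; the QEMU value shown here is one of the accepted virt_type values):

```ini
[libvirt]
# Hypervisor to use; kvm is the default.
virt_type = qemu
```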
Specific options for particular hypervisors can be found in the following sections.
KVM is configured as the default hypervisor for Compute.
Note
This document contains several sections about hypervisor selection. If you are reading this document linearly, you do not want to load the KVM module before you install nova-compute.
The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules, which sets the correct permissions on the /dev/kvm device node.
To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = kvm
The KVM hypervisor supports the following virtual machine image formats:
This section describes how to enable KVM on your system. For more information, see the following distribution-specific documentation:
- Red Hat Enterprise Linux: Virtualization Host Configuration and Guest Installation Guide.
- Virtualization Guide.
The following sections outline how to enable KVM based hardware virtualization on different architectures and platforms.
To perform these steps, you must be logged in as the root user.
To determine whether the svm or vmx CPU extensions are present, run this command:
# grep -E 'svm|vmx' /proc/cpuinfo
This command generates output if the CPU is capable of hardware virtualization. Even if output is shown, you might still need to enable virtualization in the system BIOS for full support.
If no output appears, consult your system documentation to ensure that your CPU and motherboard support hardware virtualization. Verify that any relevant hardware virtualization options are enabled in the system BIOS.
The BIOS for each manufacturer is different. If you must enable virtualization in the BIOS, look for an option containing the words virtualization, VT, VMX, or SVM.
To list the loaded kernel modules and verify that the kvm modules are loaded, run this command:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
If the output does not show that the kvm module is loaded, run this command to load it:
# modprobe -a kvm
Run the command for your CPU. For Intel, run this command:
# modprobe -a kvm-intel
For AMD, run this command:
# modprobe -a kvm-amd
Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.
If the kernel modules do not load automatically, use the procedures listed in these subsections.
If the checks indicate that required hardware virtualization support or kernel modules are disabled or unavailable, you must either enable this support on the system or find a system with this support.
Note
Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the previous command did not produce output, reboot your machine, enter the system BIOS, and enable the VT option.
If KVM acceleration is not supported, configure Compute to use a different hypervisor, such as QEMU or Xen. See QEMU or XenServer (and other XAPI based Xen variants) for details.
These procedures help you load the kernel modules for Intel-based and AMD-based processors if they do not load automatically during KVM installation.
Intel-based processors
If your compute host is Intel-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-intel
Add these lines to the /etc/modules file so that these modules load on reboot:
kvm
kvm-intel
AMD-based processors
If your compute host is AMD-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-amd
Add these lines to the /etc/modules file so that these modules load on reboot:
kvm
kvm-amd
KVM as a hypervisor is supported on POWER system’s PowerNV platform.
To determine if your POWER platform supports KVM based virtualization, run the following command:
# cat /proc/cpuinfo | grep PowerNV
If the previous command generates the following output, then your CPU supports KVM based virtualization.
platform: PowerNV
If no output is displayed, then your POWER platform does not support KVM based hardware virtualization.
To list the loaded kernel modules and verify that the kvm modules are loaded, run the following command:
# lsmod | grep kvm
If the output includes kvm_hv, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
If the output does not show that the kvm module is loaded, run the following command to load it:
# modprobe -a kvm
For the PowerNV platform, run the following command:
# modprobe -a kvm-hv
Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.
Backing Storage is the storage used to provide the expanded operating system image, and any ephemeral storage. Inside the virtual machine, this is normally presented as two virtual hard disks (for example, /dev/vda and /dev/vdb respectively). However, inside OpenStack, this can be derived from one of three methods: lvm, qcow or raw, chosen using the images_type option in nova.conf on the compute node.
QCOW is the default backing store. It uses a copy-on-write philosophy to delay allocation of storage until it is actually needed. This means that the space required for the backing of an image can be significantly less on the real disk than what seems available in the virtual machine operating system.
RAW creates files without any sort of file formatting, effectively creating files with the plain binary one would normally see on a real disk. This can increase performance, but means that the entire size of the virtual disk is reserved on the physical disk.
Local LVM volumes can also be used. Set images_volume_group = nova_local where nova_local is the name of the LVM group you have created.
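A minimal nova.conf fragment for the LVM backing store might look like this; nova_local is the example volume group name from the text, and the [libvirt] section placement is an assumption based on the other libvirt options in this chapter:

```ini
[libvirt]
# Use LVM logical volumes as the backing store for instance disks.
images_type = lvm
# Name of the pre-created LVM volume group (example name).
images_volume_group = nova_local
```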
The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. Use cases include:
In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads).
The libvirt KVM driver provides a number of standard CPU model names. These models are defined in the /usr/share/libvirt/cpu_map.xml file. Check this file to determine which models are supported by your local installation.
Two Compute configuration options in the [libvirt] group of nova.conf define which type of CPU model is exposed to the hypervisor when using KVM: cpu_mode and cpu_model.
The cpu_mode option can take one of the following values: none, host-passthrough, host-model, and custom.
If your nova.conf file contains cpu_mode=host-model, libvirt identifies the CPU model in the /usr/share/libvirt/cpu_map.xml file that most closely matches the host, and requests additional CPU flags to complete the match. This configuration provides the maximum functionality and performance and maintains good reliability and compatibility if the guest is migrated to another host with slightly different host CPUs.
If your nova.conf file contains cpu_mode=host-passthrough, libvirt tells KVM to pass through the host CPU with no modifications. The difference from host-model is that instead of just matching feature flags, every last detail of the host CPU is matched. This gives the best performance, and can be important to some applications which check low-level CPU details, but it comes at a cost with respect to migration: the guest can only be migrated to a matching host CPU.
If your nova.conf file contains cpu_mode=custom, you can explicitly specify one of the supported named models using the cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:
[libvirt]
cpu_mode = custom
cpu_model = Nehalem
If your nova.conf file contains cpu_mode=none, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model.
Use guest agents to enable optional access between compute nodes and guests through a socket, using the QMP protocol.
To enable this feature, you must set hw_qemu_guest_agent=yes as a metadata parameter on the image you wish to use to create the guest-agent-capable instances from. You can explicitly disable the feature by setting hw_qemu_guest_agent=no in the image metadata.
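For example, following the same glance command pattern used elsewhere in this guide (img-uuid stands for your image's ID):

```
$ glance image-update img-uuid --property hw_qemu_guest_agent=yes
```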
The VHostNet kernel module improves network performance. To load the kernel module, run the following command as root:
# modprobe vhost_net
Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:
libvirtError: internal error no supported architecture for os type 'hvm'
This message indicates that the KVM kernel modules were not loaded.
If you cannot start VMs after installation without rebooting, the permissions might not be set correctly. This can happen if you load the KVM module before you install nova-compute.
To check whether the group is set to kvm, run:
# ls -l /dev/kvm
If it is not set to kvm, run:
# udevadm trigger
From the perspective of the Compute service, the QEMU hypervisor is very similar to the KVM hypervisor. Both are controlled through libvirt, both support the same feature set, and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment.
The typical use cases for QEMU are:
To enable QEMU, add these settings to nova.conf:
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = qemu
For some operations you may also have to install the guestmount utility:
On Ubuntu:
# apt-get install guestmount
On Red Hat Enterprise Linux, Fedora, or CentOS:
# yum install libguestfs-tools
On openSUSE:
# zypper install guestfs-tools
The QEMU hypervisor supports the following virtual machine image formats:
This section describes XAPI managed hypervisors, and how to use them with OpenStack.
A hypervisor that provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by XenProject.org, a cross-industry organization and a Linux Foundation Collaborative project.
Xen is a component of many different products and projects. The hypervisor itself is very similar across all these projects, but the way that it is managed can be different, which can cause confusion if you’re not clear which toolstack you are using. Make sure you know what toolstack you want before you get started. If you want to use Xen with libvirt in OpenStack Compute refer to Xen via libvirt.
XAPI is one of the toolstacks that could control a Xen based hypervisor. XAPI’s role is similar to libvirt’s in the KVM world. The API provided by XAPI is called XenAPI. To learn more about the provided interface, look at XenAPI Object Model Overview for definitions of XAPI specific terms such as SR, VDI, VIF and PIF.
OpenStack has a compute driver which talks to XAPI; therefore, all XAPI managed servers can be used with OpenStack.
XenAPI is the API provided by XAPI. This name is also used by the python library that is a client for XAPI. A set of packages to use XenAPI on existing distributions can be built using the xenserver/buildroot project.
An Open Source virtualization platform that delivers all features needed for any server and datacenter implementation including the Xen hypervisor and XAPI for the management. For more information and product downloads, visit xenserver.org.
XCP is no longer supported. The XCP project recommends that all XCP users upgrade to the latest version of XenServer by visiting xenserver.org.
A Xen host runs a number of virtual machines, VMs, or domains (the terms are synonymous on Xen). One of these is in charge of running the rest of the system, and is known as domain 0, or dom0. It is the first domain to boot after Xen, and owns the storage and networking hardware, the device drivers, and the primary control software. Any other VM is unprivileged, and is known as a domU or guest. All customer VMs are unprivileged, but you should note that on XenServer (and other XenAPI using hypervisors), the OpenStack Compute service (nova-compute) also runs in a domU. This gives a level of security isolation between the privileged system software and the OpenStack software (much of which is customer-facing).
This architecture is described in more detail later.
A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). This refers to the interaction between Xen, domain 0, and the guest VM’s kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and domain 0; this gives them better performance characteristics. HVM guests are not aware of their environment, and the hardware has to pretend that they are running on an unvirtualized machine. HVM guests do not need to modify the guest operating system, which is essential when running Windows.
In OpenStack, customer VMs may run in either PV or HVM mode. However, the OpenStack domU (that's the one running nova-compute) must be running in PV mode.
A basic OpenStack deployment on a XAPI-managed server, assuming that the network provider is nova-network, looks like this:
Key things to note:
- The Compute service runs in a paravirtualized virtual machine, on the host under management.
- Each host runs a local instance of the Compute service. It is also running an instance of nova-network.
Some notes on the networking:
- In Dom0, which is used for the Compute service, nova-compute will create Linux bridges for security groups, and neutron-openvswitch-agent on the Compute node will apply security group rules on these Linux bridges. To implement this, you need to remove /etc/modprobe.d/blacklist-bridge* in Dom0.
Before you can run OpenStack with XenServer, you must install the hypervisor on an appropriate server.
Note
Xen is a type 1 hypervisor: when your server starts, Xen is the first software that runs. Consequently, you must install XenServer before you install the operating system where you want to run OpenStack code. You then install nova-compute into a dedicated virtual machine on the host.
Use the following link to download XenServer’s installation media:
When you install many servers, you might find it easier to perform PXE boot installations. You can also package any post-installation changes that you want to make to your XenServer by following the instructions for creating your own XenServer supplemental pack.
Important
Make sure you use the EXT type of storage repository (SR). Features that require access to VHD files (such as copy on write, snapshot and migration) do not work when you use the LVM SR. Storage repository (SR) is a XAPI-specific term relating to the physical storage where virtual disks are stored.
On the XenServer installation screen, choose the XenDesktop Optimized option. If you use an answer file, make sure you use srtype="ext" in the installation tag of the answer file.
The following steps need to be completed after the hypervisor's installation:
- Set up an /images directory on dom0.
- Set up the /boot/guest symlink/directory in dom0.
- Create a virtual machine that can run nova-compute.
- Install and configure nova-compute in the above virtual machine.
in the above virtual machine.When you use a XAPI managed hypervisor, you can install a Python script (or any executable) on the host side, and execute that through XenAPI. These scripts are called plug-ins. The OpenStack related XAPI plug-ins live in OpenStack Compute’s code repository. These plug-ins have to be copied to dom0’s filesystem, to the appropriate directory, where XAPI can find them. It is important to ensure that the version of the plug-ins are in line with the OpenStack Compute installation you are using.
The plugins should typically be copied from the Nova installation running in the Compute’s DomU, but if you want to download the latest version the following procedure can be used.
Manually installing the plug-ins
Create temporary files/directories:
$ NOVA_TARBALL=$(mktemp)
$ NOVA_SOURCES=$(mktemp -d)
Get the source from the openstack.org archives. The example assumes the master branch is used, and the XenServer host is accessible as xenserver. Match those parameters to your setup.
$ NOVA_URL=https://tarballs.openstack.org/nova/nova-master.tar.gz
$ wget -qO "$NOVA_TARBALL" "$NOVA_URL"
$ tar xvf "$NOVA_TARBALL" -C "$NOVA_SOURCES"
Copy the plug-ins to the hypervisor:
$ PLUGINPATH=$(find $NOVA_SOURCES -path '*/xapi.d/plugins' -type d -print)
$ tar -czf - -C "$PLUGINPATH" ./ |
> ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins
Remove temporary files/directories:
$ rm "$NOVA_TARBALL"
$ rm -rf "$NOVA_SOURCES"
To support AMI type images in your OpenStack installation, you must create the /boot/guest directory on dom0. One of the OpenStack XAPI plug-ins will extract the kernel and ramdisk from AKI and ARI images and put them into that directory.
OpenStack maintains the contents of this directory and its size should not increase during normal operation. However, in case of power failures or accidental shutdowns, some files might be left over. To prevent these files from filling up dom0’s filesystem, set up this directory as a symlink that points to a subdirectory of the local SR.
Run these commands in dom0 to achieve this setup:
# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# LOCALPATH="/var/run/sr-mount/$LOCAL_SR/os-guest-kernels"
# mkdir -p "$LOCALPATH"
# ln -s "$LOCALPATH" /boot/guest
To resize servers with XenServer you must:
Establish a root trust between all hypervisor nodes of your deployment:
To do so, generate an ssh key pair with the ssh-keygen command. Ensure that each of your dom0's authorized_keys file (located in /root/.ssh/authorized_keys) contains the public key (located in /root/.ssh/id_rsa.pub) of every other dom0 in the deployment.
Provide a /images mount point to the dom0 for your hypervisor:
dom0 space is at a premium, so creating a directory in dom0 is potentially dangerous and likely to fail, especially when you resize large servers. The least you can do is to symlink /images to your local storage SR. The following instructions work for an English-based installation of XenServer and in the case of an ext3-based SR (with which the resize functionality is known to work correctly).
# LOCAL_SR=$(xe sr-list name-label="Local storage" --minimal)
# IMG_DIR="/var/run/sr-mount/$LOCAL_SR/images"
# mkdir -p "$IMG_DIR"
# ln -s "$IMG_DIR" /images
The following section discusses some commonly changed options when using the XenAPI driver. The table below provides a complete reference of all configuration options available for configuring XAPI with OpenStack.
The recommended way to use XAPI with OpenStack is through the XenAPI driver.
To enable the XenAPI driver, add the following configuration options to /etc/nova/nova.conf and restart OpenStack Compute:
compute_driver = xenapi.XenAPIDriver
[xenserver]
connection_url = http://your_xenapi_management_ip_address
connection_username = root
connection_password = your_password
These connection details are used by OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer management console, to your XenServer node.
Note
The connection_url is generally the management network IP address of the XenServer.
The agent is a piece of software that runs on the instances, and communicates with OpenStack. In case of the XenAPI driver, the agent communicates with OpenStack through XenStore (see the Xen Project Wiki for more information on XenStore).
If you don't have the guest agent on your VMs, it takes a long time for OpenStack Compute to detect that the VM has successfully started. Generally, a large timeout is required for Windows instances, but you may want to adjust agent_version_timeout within the [xenserver] section.
Assuming you are talking to XAPI through a management network, and XenServer is on the address 10.10.1.34, specify the same address for the VNC proxy address: vncserver_proxyclient_address=10.10.1.34
You can specify which Storage Repository to use with nova by editing the following flag. To use the local storage set up by the default installer:
sr_matching_filter = "other-config:i18n-key=local-storage"
Another alternative is to use the “default” storage (for example if you have attached NFS or any other shared storage):
sr_matching_filter = "default-sr:true"
tgz compressed format
To start uploading tgz compressed raw disk images to the Image service, configure xenapi_image_upload_handler by replacing GlanceStore with VdiThroughDevStore.
xenapi_image_upload_handler=nova.virt.xenapi.image.vdi_through_dev.VdiThroughDevStore
As opposed to:
xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
To customize the XenAPI driver, use the configuration option settings documented in Description of Xen configuration options.
OpenStack Compute supports the Xen Project Hypervisor (or Xen). Xen can be integrated with OpenStack Compute via the libvirt toolstack or via the XAPI toolstack. This section describes how to set up OpenStack Compute with Xen and libvirt. For information on how to set up Xen with XAPI refer to XenServer (and other XAPI based Xen variants).
At this stage we recommend using the baseline that we use for the Xen Project OpenStack CI Loop, which contains the most recent stability fixes to both Xen and libvirt.
Xen 4.5.1 (or newer) and libvirt 1.2.15 (or newer) contain the minimum required OpenStack improvements for Xen. Although libvirt 1.2.15 works with Xen, libvirt 1.3.2 or newer is recommended. The necessary Xen changes have also been backported to the Xen 4.4.3 stable branch. Please check with the Linux and FreeBSD distros you are intending to use as Dom 0, whether the relevant version of Xen and libvirt are available as installable packages.
The latest releases of Xen and libvirt packages that fulfil the above minimum requirements for the various openSUSE distributions can always be found and installed from the Open Build Service Virtualization project. To install these latest packages, add the Virtualization repository to your software management stack and get the newest packages from there. More information about the latest Xen and libvirt packages are available here and here.
Alternatively, it is possible to use the Ubuntu LTS 14.04 Xen Package 4.4.1-0ubuntu0.14.04.4 (Xen 4.4.1) and apply the patches outlined here. You can also use the Ubuntu LTS 14.04 libvirt package 1.2.2 libvirt_1.2.2-0ubuntu13.1.7 as baseline and update it to libvirt version 1.2.15, or 1.2.14 with the patches outlined here applied. Note that this will require rebuilding these packages partly from source.
For further information and latest developments, you may want to consult the Xen Project’s mailing lists for OpenStack related issues and questions.
To enable Xen via libvirt, ensure the following options are set in /etc/nova/nova.conf on all hosts running the nova-compute service.
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = xen
Use the following as a guideline for configuring Xen for use in OpenStack:
Dom0 memory: Set it between 1GB and 4GB by adding the following parameter to the Xen Boot Options in the grub.conf file.
dom0_mem=1024M
Note
The above memory limits are suggestions and should be based on the available compute host resources. For large hosts that will run many hundreds of instances, the suggested values may need to be higher.
Note
The location of the grub.conf file depends on the host Linux distribution that you are using. Please refer to the distro documentation for more details (see Dom 0 for more resources).
Dom0 vcpus: Set the virtual CPUs to 4 and employ CPU pinning by adding the following parameters to the Xen Boot Options in the grub.conf file.
dom0_max_vcpus=4 dom0_vcpus_pin
Note
Note that the above virtual CPU limits are suggestions and should be based on the available compute host resources. For large hosts that will run many hundreds of instances, the suggested values may need to be higher.
PV vs HVM guests: A Xen virtual machine can be paravirtualized (PV) or hardware virtualized (HVM). The virtualization mode determines the interaction between Xen, Dom 0, and the guest VM’s kernel. PV guests are aware of the fact that they are virtualized and will co-operate with Xen and Dom 0. The choice of virtualization mode determines performance characteristics. For an overview of Xen virtualization modes, see Xen Guest Types.
In OpenStack, customer VMs may run in either PV or HVM mode. The mode is a property of the operating system image used by the VM, and is changed by adjusting the image metadata stored in the Image service. The image metadata can be changed using the nova or glance commands.
To choose one of the HVM modes (HVM, HVM with PV Drivers or PVHVM), use nova or glance to set the vm_mode property to hvm, using one of the following two commands:
$ nova image-meta img-uuid set vm_mode=hvm
$ glance image-update img-uuid --property vm_mode=hvm
To choose PV mode, which is supported by NetBSD, FreeBSD and Linux, use one of the following two commands:
$ nova image-meta img-uuid set vm_mode=xen
$ glance image-update img-uuid --property vm_mode=xen
Note
The default for virtualization mode in nova is PV mode.
Image formats: Xen supports raw, qcow2 and vhd image formats. For more information on image formats, refer to the OpenStack Virtual Image Guide and the Storage Options Guide on the Xen Project Wiki.
Image metadata: In addition to the vm_mode property discussed above, the hypervisor_type property is another important component of the image metadata, especially if your cloud contains mixed hypervisor compute nodes. Setting the hypervisor_type property allows the nova scheduler to select a compute node running the specified hypervisor when launching instances of the image. Image metadata such as vm_mode, hypervisor_type, architecture, and others can be set when importing the image to the Image service. The metadata can also be changed using the nova or glance commands:
$ nova image-meta img-uuid set hypervisor_type=xen vm_mode=hvm
$ glance image-update img-uuid --property hypervisor_type=xen --property vm_mode=hvm
For more information on image metadata, refer to the OpenStack Virtual Image Guide.
Libguestfs file injection: OpenStack compute nodes can use libguestfs to inject files into an instance's image prior to launching the instance. libguestfs uses libvirt's QEMU driver to start a qemu process, which is then used to inject files into the image. When using libguestfs for file injection, the compute node must have the libvirt qemu driver installed, in addition to the Xen driver. In RPM based distributions, the qemu driver is provided by the libvirt-daemon-qemu package. In Debian and Ubuntu, the qemu driver is provided by the libvirt-bin package.
To customize the libvirt driver, use the configuration option settings documented in Description of Xen configuration options.
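As an illustration, enabling the libvirt driver with Xen typically involves settings along these lines in nova.conf (a minimal sketch; consult the configuration option tables for the authoritative names and defaults):

```ini
[DEFAULT]
# Use the libvirt compute driver.
compute_driver = libvirt.LibvirtDriver

[libvirt]
# Tell libvirt to manage Xen domains rather than the default (kvm).
virt_type = xen
```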
Important log files: When an instance fails to start, or when you come across other issues, you should first consult the following log files:
- /var/log/nova/compute.log (for more information refer to Compute log files)
- /var/log/libvirt/libxl/libxl-driver.log
- /var/log/xen/qemu-dm-${instancename}.log
- /var/log/xen/xen-hotplug.log
- /var/log/xen/console/guest-${instancename} (to enable, see Enabling Guest Console Logs)
If you need further help, you can ask questions on the xen-users@ or wg-openstack@ mailing lists, or raise a bug against Xen.
The following section contains links to other useful resources.
LXC (also known as Linux containers) is a virtualization technology that works at the operating system level. This is different from hardware virtualization, the approach used by other hypervisors such as KVM, Xen, and VMware. LXC (as currently implemented using libvirt in the Compute service) is not a secure virtualization technology for multi-tenant environments (specifically, containers may affect resource quotas for other containers hosted on the same machine). Additional containment technologies, such as AppArmor, may be used to provide better isolation between containers, although this is not the case by default. For all these reasons, the choice of this virtualization technology is not recommended in production.
If your compute hosts do not have hardware support for virtualization, LXC will likely provide better performance than QEMU. In addition, if your guests must access specialized hardware, such as GPUs, this might be easier to achieve with LXC than other hypervisors.
Note
Some OpenStack Compute features might be missing when running with LXC as the hypervisor. See the hypervisor support matrix for details.
To enable LXC, ensure the following options are set in /etc/nova/nova.conf
on all hosts running the nova-compute
service.
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = lxc
On Ubuntu, enable LXC support in OpenStack by installing the
nova-compute-lxc
package.
OpenStack Compute supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS).
This section describes how to configure VMware-based virtual machine images for launch. The VMware driver supports vCenter version 5.5.0 and later.
The VMware vCenter driver enables the nova-compute
service to communicate
with a VMware vCenter server that manages one or more ESX host clusters.
The driver aggregates the ESX hosts in each cluster to present one
large hypervisor entity for each cluster to the Compute scheduler.
Because individual ESX hosts are not exposed to the scheduler, Compute
schedules to the granularity of clusters and vCenter uses DRS to select
the actual ESX host within the cluster. When a virtual machine makes
its way into a vCenter cluster, it can use all vSphere features.
The following sections describe how to configure the VMware vCenter driver.
The following diagram shows a high-level view of the VMware driver architecture:
VMware driver architecture
As the figure shows, the OpenStack Compute Scheduler sees
three hypervisors that each correspond to a cluster in vCenter.
nova-compute
contains the VMware driver. You can run with multiple
nova-compute
services. It is recommended to run one nova-compute
service per ESX cluster. Because Compute schedules at the
granularity of the nova-compute
service, this in effect allows it to
schedule at the cluster level. In turn, the VMware driver inside
nova-compute
interacts with the vCenter APIs to select an appropriate ESX
host within the cluster. Internally, vCenter uses DRS for placement.
The VMware vCenter driver also interacts with the Image service to copy VMDK images from the Image service back-end store. The dotted line in the figure represents VMDK images being copied from the OpenStack Image service to the vSphere data store. VMDK images are cached in the data store so the copy operation is only required the first time that the VMDK image is used.
After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in vCenter and can access vSphere advanced features. At the same time, the VM is visible in the OpenStack dashboard and you can manage it as you would any other OpenStack VM. You can perform advanced vSphere operations in vCenter while you configure OpenStack resources such as VMs through the OpenStack dashboard.
The figure does not show how networking fits into the architecture.
Both nova-network
and the OpenStack Networking Service are supported.
For details, see Networking with VMware vSphere.
To get started with the VMware vCenter driver, complete the following high-level steps:
- Configure the VMware vCenter driver in the nova.conf file. See VMware vCenter driver.
- Configure networking with either nova-network or the Networking service. See Networking with VMware vSphere.
Use the following list to prepare a vSphere environment that runs with the VMware vCenter driver:
If you use the VMware driver with OpenStack Networking and the NSX
plug-in, security groups are supported. If you use nova-network
,
security groups are not supported.
Note
The NSX plug-in is the only plug-in that is validated for vSphere.
The port range 5900 - 6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control.
Note
In addition to the default VNC port numbers (5900 to 6000) specified in the above document, the following ports are also used: 6101, 6102, and 6105.
You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB) which is then installed onto the running ESXi host or added to a custom image profile used to install ESXi hosts. For details about how to create a VIB for persisting the firewall configuration modifications, see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007381.
Note
The VIB can be downloaded from https://github.com/openstack-vmwareapi-team/Tools.
To use multiple vCenter installations with OpenStack, each vCenter must be assigned to a separate availability zone. This is required as the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations.
OpenStack integration requires a vCenter service account with the
following minimum permissions. Apply the permissions to the Datacenter
root object, and select the Propagate to Child Objects option.
All Privileges
- Datastore: Allocate space, Browse datastore, Low level file operation, Remove file
- Extension: Register extension
- Folder: Create folder
- Host > Configuration: Maintenance, Network configuration, Storage partition configuration
- Network: Assign network
- Resource: Assign virtual machine to resource pool, Migrate powered off virtual machine, Migrate powered on virtual machine
- Virtual Machine > Configuration: Add existing disk, Add new disk, Add or remove device, Advanced, CPU count, Change resource, Disk change tracking, Host USB device, Memory, Modify device settings, Raw device, Remove disk, Rename, Swapfile placement
- Virtual Machine > Interaction: Configure CD media, Power Off, Power On, Reset, Suspend
- Virtual Machine > Inventory: Create from existing, Create new, Move, Remove, Unregister
- Virtual Machine > Provisioning: Clone virtual machine, Customize, Create template from virtual machine
- Virtual Machine > Snapshot management: Create snapshot, Remove snapshot
- Sessions: Validate session, View and stop sessions
- vApp: Export, Import
Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute with vCenter. This recommended configuration enables access through vCenter to advanced vSphere features like vMotion, High Availability, and Dynamic Resource Scheduling (DRS).
Add the following VMware-specific configuration options to the nova.conf
file:
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver
[vmware]
host_ip = <vCenter hostname or IP address>
host_username = <vCenter username>
host_password = <vCenter password>
cluster_name = <vCenter cluster name>
datastore_regex = <optional datastore regex>
Note
- The datastore_regex setting specifies the data stores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". If this line is omitted, Compute uses the first data store returned by the vSphere API. It is recommended not to use this field and instead remove data stores that are not intended for OpenStack.
- The reserved_host_memory_mb option value is 512 MB by default. However, VMware recommends that you set this option to 0 MB because the vCenter driver reports the effective memory available to the virtual machines.
A nova-compute service can control one or more clusters containing multiple ESXi hosts, making nova-compute a critical service from a high availability perspective. Because the host that runs nova-compute can fail while the vCenter and ESX still run, you must protect the nova-compute service against host failures.
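The datastore_regex filtering is a standard regular-expression match against data store names. A small illustration (the data store names below are hypothetical):

```python
import re

# Hypothetical data store names as returned by the vSphere API.
datastores = ["nas-1", "nas-2", "local-ssd", "vsanDatastore"]

# datastore_regex = "nas.*" keeps only data stores whose names start with "nas".
pattern = re.compile("nas.*")
selected = [name for name in datastores if pattern.match(name)]
print(selected)  # → ['nas-1', 'nas-2']
```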
Note
Many nova.conf
options are relevant to libvirt but do not apply
to this driver.
The vCenter driver supports images in the VMDK format. Disks in this
format can be obtained from VMware Fusion or from an ESX environment.
It is also possible to convert other formats, such as qcow2, to the VMDK
format using the qemu-img
utility. After a VMDK disk is available,
load it into the Image service. Then, you can use it with the VMware
vCenter driver. The following sections provide additional details on the
supported disks and the commands used for conversion and upload.
Upload images to the OpenStack Image service in VMDK format. The following VMDK disk types are supported:
- VMFS Flat Disks (includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image service, it becomes a preallocated flat disk. This impacts the transfer time from the Image service to the data store when the full preallocated flat disk, rather than the thin disk, must be transferred.
- Monolithic Sparse disks. Sparse disks get imported from the Image service into ESXi as thin provisioned disks. Monolithic Sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats using the qemu-img utility.
- Stream-optimized disks. Stream-optimized disks are compressed sparse disks. They can be obtained from VMware vCenter/ESXi when exporting a VM to an OVF/OVA template.
The following table shows the vmware_disktype property that applies to each of the supported VMDK disk types:
vmware_disktype property | VMDK disk type |
---|---|
sparse | Monolithic Sparse |
thin | VMFS flat, thin provisioned |
preallocated (default) | VMFS flat, thick/zeroedthick/eagerzeroedthick |
streamOptimized | Compressed Sparse |
The vmware_disktype
property is set when an image is loaded into the
Image service. For example, the following command creates a Monolithic
Sparse image by setting vmware_disktype
to sparse
:
$ openstack image create \
--disk-format vmdk \
--container-format bare \
--property vmware_disktype="sparse" \
--property vmware_ostype="ubuntu64Guest" \
ubuntu-sparse < ubuntuLTS-sparse.vmdk
Note
Specifying thin
does not provide any advantage over preallocated
with the current version of the driver. Future versions might restore
the thin properties of the disk after it is downloaded to a vSphere
data store.
The following table shows the vmware_ostype
property that applies to
each of the supported guest OS:
vmware_ostype property | Retail Name |
---|---|
asianux3_64Guest | Asianux Server 3 (64 bit) |
asianux3Guest | Asianux Server 3 |
asianux4_64Guest | Asianux Server 4 (64 bit) |
asianux4Guest | Asianux Server 4 |
darwin64Guest | Darwin 64 bit |
darwinGuest | Darwin |
debian4_64Guest | Debian GNU/Linux 4 (64 bit) |
debian4Guest | Debian GNU/Linux 4 |
debian5_64Guest | Debian GNU/Linux 5 (64 bit) |
debian5Guest | Debian GNU/Linux 5 |
dosGuest | MS-DOS |
freebsd64Guest | FreeBSD x64 |
freebsdGuest | FreeBSD |
mandrivaGuest | Mandriva Linux |
netware4Guest | Novell NetWare 4 |
netware5Guest | Novell NetWare 5.1 |
netware6Guest | Novell NetWare 6.x |
nld9Guest | Novell Linux Desktop 9 |
oesGuest | Open Enterprise Server |
openServer5Guest | SCO OpenServer 5 |
openServer6Guest | SCO OpenServer 6 |
opensuse64Guest | openSUSE (64 bit) |
opensuseGuest | openSUSE |
os2Guest | OS/2 |
other24xLinux64Guest | Linux 2.4x Kernel (64 bit) (experimental) |
other24xLinuxGuest | Linux 2.4x Kernel |
other26xLinux64Guest | Linux 2.6x Kernel (64 bit) (experimental) |
other26xLinuxGuest | Linux 2.6x Kernel (experimental) |
otherGuest | Other Operating System |
otherGuest64 | Other Operating System (64 bit) (experimental) |
otherLinux64Guest | Linux (64 bit) (experimental) |
otherLinuxGuest | Other Linux |
redhatGuest | Red Hat Linux 2.1 |
rhel2Guest | Red Hat Enterprise Linux 2 |
rhel3_64Guest | Red Hat Enterprise Linux 3 (64 bit) |
rhel3Guest | Red Hat Enterprise Linux 3 |
rhel4_64Guest | Red Hat Enterprise Linux 4 (64 bit) |
rhel4Guest | Red Hat Enterprise Linux 4 |
rhel5_64Guest | Red Hat Enterprise Linux 5 (64 bit) (experimental) |
rhel5Guest | Red Hat Enterprise Linux 5 |
rhel6_64Guest | Red Hat Enterprise Linux 6 (64 bit) |
rhel6Guest | Red Hat Enterprise Linux 6 |
sjdsGuest | Sun Java Desktop System |
sles10_64Guest | SUSE Linux Enterprise Server 10 (64 bit) (experimental) |
sles10Guest | SUSE Linux Enterprise Server 10 |
sles11_64Guest | SUSE Linux Enterprise Server 11 (64 bit) |
sles11Guest | SUSE Linux Enterprise Server 11 |
sles64Guest | SUSE Linux Enterprise Server 9 (64 bit) |
slesGuest | SUSE Linux Enterprise Server 9 |
solaris10_64Guest | Solaris 10 (64 bit) (experimental) |
solaris10Guest | Solaris 10 (32 bit) (experimental) |
solaris6Guest | Solaris 6 |
solaris7Guest | Solaris 7 |
solaris8Guest | Solaris 8 |
solaris9Guest | Solaris 9 |
suse64Guest | SUSE Linux (64 bit) |
suseGuest | SUSE Linux |
turboLinux64Guest | Turbolinux (64 bit) |
turboLinuxGuest | Turbolinux |
ubuntu64Guest | Ubuntu Linux (64 bit) |
ubuntuGuest | Ubuntu Linux |
unixWare7Guest | SCO UnixWare 7 |
win2000AdvServGuest | Windows 2000 Advanced Server |
win2000ProGuest | Windows 2000 Professional |
win2000ServGuest | Windows 2000 Server |
win31Guest | Windows 3.1 |
win95Guest | Windows 95 |
win98Guest | Windows 98 |
windows7_64Guest | Windows 7 (64 bit) |
windows7Guest | Windows 7 |
windows7Server64Guest | Windows Server 2008 R2 (64 bit) |
winLonghorn64Guest | Windows Longhorn (64 bit) (experimental) |
winLonghornGuest | Windows Longhorn (experimental) |
winMeGuest | Windows Millennium Edition |
winNetBusinessGuest | Windows Small Business Server 2003 |
winNetDatacenter64Guest | Windows Server 2003, Datacenter Edition (64 bit) (experimental) |
winNetDatacenterGuest | Windows Server 2003, Datacenter Edition |
winNetEnterprise64Guest | Windows Server 2003, Enterprise Edition (64 bit) |
winNetEnterpriseGuest | Windows Server 2003, Enterprise Edition |
winNetStandard64Guest | Windows Server 2003, Standard Edition (64 bit) |
winNetStandardGuest | Windows Server 2003, Standard Edition |
winNetWebGuest | Windows Server 2003, Web Edition |
winNTGuest | Windows NT 4 |
winVista64Guest | Windows Vista (64 bit) |
winVistaGuest | Windows Vista |
winXPHomeGuest | Windows XP Home Edition |
winXPPro64Guest | Windows XP Professional Edition (64 bit) |
winXPProGuest | Windows XP Professional |
Using the qemu-img
utility, disk images in several formats (such as
qcow2) can be converted to the VMDK format.
For example, the following command can be used to convert a qcow2 Ubuntu Trusty cloud image:
$ qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \
-O vmdk trusty-server-cloudimg-amd64-disk1.vmdk
VMDK disks converted through qemu-img
are always
monolithic sparse
VMDK disks with an IDE adapter type. Using the previous example of the
Ubuntu Trusty image after the qemu-img
conversion, the command to
upload the VMDK disk should be something like:
$ openstack image create \
--container-format bare --disk-format vmdk \
--property vmware_disktype="sparse" \
--property vmware_adaptertype="ide" \
trusty-cloud < trusty-server-cloudimg-amd64-disk1.vmdk
Note that the vmware_disktype
is set to sparse
and the
vmware_adaptertype
is set to ide
in the previous command.
If the image did not come from the qemu-img
utility, the
vmware_disktype
and vmware_adaptertype
might be different.
To determine the image adapter type from an image file, use the
following command and look for the ddb.adapterType=
line:
$ head -20 <vmdk file name>
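The descriptor at the top of a sparse VMDK is plain text, so the adapter type can be read directly from the ddb.adapterType line. A sketch of that lookup (the sample header below is illustrative, not taken from a real image):

```python
# Sample of the text descriptor that `head -20` would show for a
# monolithic sparse VMDK (illustrative values only).
sample_header = '''# Disk DescriptorFile
version=1
createType="monolithicSparse"
ddb.adapterType = "ide"
ddb.geometry.cylinders = "16383"
'''

adapter_type = None
for line in sample_header.splitlines():
    if line.strip().startswith("ddb.adapterType"):
        # The value is the quoted string on the right-hand side of '='.
        adapter_type = line.split("=", 1)[1].strip().strip('"')
print(adapter_type)  # → ide
```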
Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the following command uploads the VMDK disk:
$ openstack image create \
--disk-format vmdk \
--container-format bare \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property vmware_ostype="ubuntu64Guest" \
ubuntu-thick-scsi < ubuntuLTS-flat.vmdk
Currently, OS boot VMDK disks with an IDE adapter type cannot be attached
to a virtual SCSI controller and likewise disks with one of the SCSI
adapter types (such as busLogic, lsiLogic, lsiLogicsas, paraVirtual)
cannot be attached to the IDE controller. Therefore, as the previous
examples show, it is important to set the vmware_adaptertype
property
correctly. The default adapter type is lsiLogic, which is SCSI, so you can
omit the vmware_adaptertype
property if you are certain that the image
adapter type is lsiLogic.
In a mixed hypervisor environment, OpenStack Compute uses the
hypervisor_type
tag to match images to the correct hypervisor type.
For VMware images, set the hypervisor type to vmware
.
Other valid hypervisor types include:
hyperv
, ironic
, lxc
, qemu
, uml
, and xen
.
Note that qemu
is used for both QEMU and KVM hypervisor types.
$ openstack image create \
--disk-format vmdk \
--container-format bare \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property hypervisor_type="vmware" \
--property vmware_ostype="ubuntu64Guest" \
ubuntu-thick-scsi < ubuntuLTS-flat.vmdk
Monolithic Sparse disks are considerably faster to download but have the overhead of an additional conversion step. When imported into ESX, sparse disks get converted to VMFS flat thin provisioned disks. The download and conversion steps only affect the first launched instance that uses the sparse disk image. The converted disk image is cached, so subsequent instances that use this disk image can simply use the cached version.
To avoid the conversion step (at the cost of longer download times) consider converting sparse disks to thin provisioned or preallocated disks before loading them into the Image service.
Use one of the following tools to pre-convert sparse disks.
vSphere CLI tools, sometimes called the remote CLI or rCLI.
Assuming that the sparse disk is made available on a data store accessible by an ESX host, the following command converts it to preallocated format:
vmkfstools --server=ip_of_some_ESX_host -i \
/vmfs/volumes/datastore1/sparse.vmdk \
/vmfs/volumes/datastore1/converted.vmdk
Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if necessary.
If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX data store through scp, and the vmkfstools utility local to the ESX host can be used to perform the conversion. After you log in to the host through ssh, run this command:
vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk
vmware-vdiskmanager
is a utility that comes bundled with VMware
Fusion and VMware Workstation. The following example converts a sparse
disk to preallocated format:
'/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk
In the previous cases, the converted VMDK is actually a pair of files:
- converted.vmdk
- converted-flat.vmdk
The file to be uploaded to the Image service is converted-flat.vmdk.
The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP from the Image service to a data store that is visible to the hypervisor. To optimize this process, the first time a VMDK file is used, it gets cached in the data store. A cached image is stored in a folder named after the image ID. Subsequent virtual machines that need the VMDK use the cached version and don’t have to copy the file again from the Image service.
Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared data store. To avoid this copy, boot the image in linked_clone mode. To learn how to enable this mode, see Configuration reference.
Note
You can also use the img_linked_clone
property (or legacy property
vmware_linked_clone
) in the Image service to override the linked_clone
mode on a per-image basis.
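For example, to override the mode on a single existing image (the image name here is hypothetical), the property can be set with the openstack client:

$ openstack image set --property img_linked_clone=true trusty-cloud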
If spawning a virtual machine image from ISO with a VMDK disk,
the image is created and attached to the virtual machine as a blank disk.
In that case, the img_linked_clone
property for the image is ignored.
If multiple compute nodes are running on the same host, or have a shared
file system, you can enable them to use the same cache folder on the back-end
data store. To configure this action, set the cache_prefix
option in the
nova.conf
file. Its value stands for the name prefix of the folder where
cached images are stored.
Note
This can take effect only if compute nodes are running on the same host, or have a shared file system.
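For example, a sketch of such a setting in nova.conf (the prefix name is arbitrary; it simply needs to match across the compute nodes that share the back-end data store):

```ini
[vmware]
# Name prefix of the shared folder where cached images are stored.
cache_prefix = openstack_cache
```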
You can automatically purge unused images after a specified period of time.
To configure this action, set these options in the DEFAULT
section in
the nova.conf
file:
- Set the option to True to specify that unused images should be removed after the duration specified in the remove_unused_original_minimum_age_seconds option. The default is True.
- The remove_unused_original_minimum_age_seconds option specifies that duration; the default is 86400 (24 hours).
The VMware driver supports networking with the nova-network
service
or the Networking Service. Depending on your installation,
complete these configuration steps before you provision VMs:
The nova-network service with the FlatManager or FlatDHCPManager.
Create a port group with the same name as the flat_network_bridge
value in the nova.conf
file. The default value is br100
.
If you specify another value, the new value must be a valid Linux bridge
identifier that adheres to Linux bridge naming conventions.
All VM NICs are attached to this port group.
Ensure that the flat interface of the node that runs the nova-network
service has a path to this network.
Note
When configuring the port binding for this port group in vCenter,
specify ephemeral
for the port binding type. For more information,
see Choosing a port binding type in ESX/ESXi in the VMware Knowledge Base.
The nova-network service with the VlanManager.
Set the vlan_interface
configuration option to match the ESX host
interface that handles VLAN-tagged VM traffic.
OpenStack Compute automatically creates the corresponding port groups.
If you are using the OpenStack Networking Service:
Before provisioning VMs, create a port group with the same name as the
vmware.integration_bridge
value in nova.conf
(default is
br-int
). All VM NICs are attached to this port group for management
by the OpenStack Networking plug-in.
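A sketch of the corresponding nova.conf fragment, assuming the default bridge name:

```ini
[vmware]
# Must match the name of the port group created in vCenter.
integration_bridge = br-int
```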
The VMware driver supports attaching volumes from the Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes based on vSphere data stores. For more information about the VMware VMDK driver, see VMware VMDK driver. An iSCSI volume driver also provides limited support and can be used only for attachments.
To customize the VMware driver, use the configuration option settings documented in Description of VMware configuration options.
It is possible to use Hyper-V as a compute node within an OpenStack
Deployment. The nova-compute
service runs as openstack-compute
,
a 32-bit service directly upon the Windows platform with the Hyper-V
role enabled. The necessary Python components as well as the
nova-compute
service are installed directly onto the Windows
platform. Windows Clustering Services are not needed for functionality
within the OpenStack infrastructure.
The use of the Windows Server 2012 platform is recommended for the best
experience and is the platform for active development.
The following Windows platforms have been tested as compute nodes:
The only OpenStack services required on a Hyper-V node are nova-compute
and neutron-hyperv-agent
. Regarding the resources needed for this host, consider that Hyper-V requires 16 GB to 20 GB of disk space for the OS itself, including updates. Two NICs are required: one connected to the management network and one to the guest data network.
The following sections discuss how to prepare the Windows Hyper-V node for operation as an OpenStack compute node. Unless stated otherwise, any configuration information should work for the Windows 2012 and 2012 R2 platforms.
The Hyper-V compute node needs to have ample storage for storing the virtual machine images running on the compute nodes. You may use a single volume for all, or partition it into an OS volume and VM volume.
Network time services must be configured to ensure proper operation of the OpenStack nodes. To set network time on your Windows host you must run the following commands:
C:\>net stop w32time
C:\>w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
C:\>net start w32time
Keep in mind that the node will have to be time synchronized with the other nodes of your OpenStack environment, so it is important to use the same NTP server. Note that in case of an Active Directory environment, you may do this only for the AD Domain Controller.
Information regarding the Hyper-V virtual Switch can be located here: http://technet.microsoft.com/en-us/library/hh831823.aspx
To quickly enable an interface to be used as a Virtual Interface the following PowerShell may be used:
PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
Note
It is very important to make sure that when you are using a Hyper-V node with only one NIC, the -AllowManagementOS option is set to True; otherwise, you will lose connectivity to the Hyper-V node.
To prepare the Hyper-V node to be able to attach to volumes provided by cinder you must first make sure the Windows iSCSI initiator service is running and started automatically.
PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
PS C:\> Start-Service MSiSCSI
To enable ‘shared nothing live’ migration, run the following three PowerShell commands on each Hyper-V host:
PS C:\> Enable-VMMigration
PS C:\> Set-VMMigrationNetwork IP_ADDRESS
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationTypeKerberos
Note
Please replace the IP_ADDRESS
with the address of the interface
which will provide live migration.
This article clarifies the various live migration options in Hyper-V:
http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html
In case you want to avoid all the manual setup, you can use Cloudbase Solutions’ installer. You can find it here:
https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi
The tool installs an independent Python environment to avoid
conflicts with existing applications, and dynamically generates a
nova.conf
file based on the parameters you provide.
The tool can also be used for an automated and unattended mode for deployments on a massive number of servers. More details about how to use the installer and its features can be found here:
Python 2.7 32-bit must be installed, as most of the libraries do not work properly with the 64-bit version.
Setting up Python prerequisites
Download and install Python 2.7 using the MSI installer from here:
http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi
PS C:\> $src = "http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
PS C:\> $dest = "$env:temp\python-2.7.3.msi"
PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
PS C:\> Unblock-File $dest
PS C:\> Start-Process $dest
Make sure that the Python
and Python\Scripts
paths are set up
in the PATH
environment variable.
PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path")
PS C:\> $newPath = $oldPath + ";C:\python27\;C:\python27\Scripts\"
PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User)
The following packages need to be downloaded and manually installed:
The following packages must be installed with pip:
PS C:\> pip install ecdsa
PS C:\> pip install amqp
PS C:\> pip install wmi
qemu-img
is required for some of the image related operations.
You can get it from here: http://qemu.weilnetz.de/.
You must make sure that the qemu-img
path is set in the
PATH environment variable.
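For example, the qemu-img directory can be appended to the user PATH in the same way as the Python paths above (the install directory C:\qemu-img is an assumption; adjust it to wherever you extracted qemu-img):

```powershell
PS C:\> $oldPath = [System.Environment]::GetEnvironmentVariable("Path")
PS C:\> $newPath = $oldPath + ";C:\qemu-img\"
PS C:\> [System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User)
```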
Some Python packages need to be compiled, so you may use MinGW or
Visual Studio. You can get MinGW from here:
http://sourceforge.net/projects/mingw/.
You must configure which compiler is to be used for this purpose by using the
distutils.cfg
file in $Python27\Lib\distutils
, which can contain:
[build]
compiler = mingw32
As a last step for setting up MinGW, make sure that the MinGW binaries’ directories are set up in PATH.
Use Git to download the necessary source code. The installer to run Git on Windows can be downloaded here:
Download the installer. Once the download is complete, run the installer and follow the prompts in the installation wizard. The default should be acceptable for the purposes of this guide.
PS C:\> $src = "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe"
PS C:\> $dest = "$env:temp\Git-1.9.2-preview20140411.exe"
PS C:\> Invoke-WebRequest -Uri $src -OutFile $dest
PS C:\> Unblock-File $dest
PS C:\> Start-Process $dest
Run the following to clone the nova code.
PS C:\> git.exe clone https://git.openstack.org/openstack/nova
To install nova-compute
, run:
PS C:\> cd c:\nova
PS C:\> python setup.py install
The nova.conf
file must be placed in C:\etc\nova
for running
OpenStack on Hyper-V. Below is a sample nova.conf
for Windows:
[DEFAULT]
auth_strategy = keystone
image_service = nova.image.glance.GlanceImageService
compute_driver = nova.virt.hyperv.driver.HyperVDriver
volume_api_class = nova.volume.cinder.API
fake_network = true
instances_path = C:\Program Files (x86)\OpenStack\Instances
glance_api_servers = IP_ADDRESS:9292
use_cow_images = true
force_config_drive = false
injected_network_template = C:\Program Files (x86)\OpenStack\Nova\etc\interfaces.template
policy_file = C:\Program Files (x86)\OpenStack\Nova\etc\policy.json
mkisofs_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\mkisofs.exe
allow_resize_to_same_host = true
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 120
resize_confirm_window = 5
resume_guests_state_on_host_boot = true
rpc_response_timeout = 1800
lock_path = C:\Program Files (x86)\OpenStack\Log\
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = Passw0rd
logdir = C:\Program Files (x86)\OpenStack\Log\
logfile = nova-compute.log
instance_usage_audit = true
instance_usage_audit_period = hour
use_neutron = True
[neutron]
url = http://IP_ADDRESS:9696
auth_strategy = keystone
admin_tenant_name = service
admin_username = neutron
admin_password = Passw0rd
admin_auth_url = http://IP_ADDRESS:35357/v2.0
[hyperv]
vswitch_name = newVSwitch0
limit_cpu_features = false
config_drive_inject_password = false
qemu_img_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom = true
dynamic_memory_ratio = 1
enable_instance_metrics_collection = true
[rdp]
enabled = true
html5_proxy_base_url = https://IP_ADDRESS:4430
The table Description of HyperV configuration options contains a reference of all options for Hyper-V.
Hyper-V currently supports only the VHD and VHDX file formats for virtual machine instances. Detailed instructions for installing virtual machines on Hyper-V can be found here:
http://technet.microsoft.com/en-us/library/cc772480.aspx
Once you have successfully created a virtual machine, you can then upload the image to glance using the native glance-client:
PS C:\> glance image-create --name "VM_IMAGE_NAME" --is-public False
--container-format bare --disk-format vhd
Note
VHD and VHDX file sizes can be bigger than their maximum internal size; therefore, you need to boot instances using a flavor with a slightly bigger disk size than the internal size of the disk file. To create VHDs, use the following PowerShell cmdlet:
PS C:\> New-VHD DISK_NAME.vhd -SizeBytes VHD_SIZE
The interfaces.template
file describes the network interfaces and routes
available on your system and how to activate them. You can specify the
location of the file with the injected_network_template
configuration
option in /etc/nova/nova.conf
.
injected_network_template = PATH_TO_FILE
A default template exists in nova/virt/interfaces.template
.
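For illustration only, a Debian-style fragment of the kind such a template produces might look like this (the interface name and addresses below are placeholders, not the contents of the shipped template):

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    gateway 10.0.0.1
```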
To start the nova-compute
service, run this command from a console
in the Windows server:
PS C:\> C:\Python27\python.exe c:\Python27\Scripts\nova-compute --config-file c:\etc\nova\nova.conf
I ran the nova-manage service list command from my controller; however, I’m not seeing smiley faces for Hyper-V compute nodes, what do I do?
Verify that you are synchronized with a network time source. For instructions about how to configure NTP on your Hyper-V compute node, see Configure NTP.
How do I restart the compute service?
PS C:\> net stop nova-compute && net start nova-compute
How do I restart the iSCSI initiator service?
PS C:\> net stop msiscsi && net start msiscsi
Virtuozzo, or its community edition OpenVZ, provides both types of
virtualization: Kernel Virtual Machines and OS Containers. The type
of instance to spawn is chosen depending on the hw_vm_type
property of an image.
Note
Some OpenStack Compute features may be missing when running with Virtuozzo as the hypervisor. See the hypervisor support matrix for details.
To enable Virtuozzo Containers, set the following options in
/etc/nova/nova.conf
on all hosts running the nova-compute
service.
compute_driver = libvirt.LibvirtDriver
force_raw_images = False
[libvirt]
virt_type = parallels
images_type = ploop
connection_uri = parallels:///system
inject_partition = -2
To enable Virtuozzo Virtual Machines, set the following options in
/etc/nova/nova.conf
on all hosts running the nova-compute
service.
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = parallels
images_type = qcow2
connection_uri = parallels:///system
OpenStack Compute supports many hypervisors, which might make it difficult for you to choose one. Most installations use only one hypervisor. However, you can use ComputeFilter and ImagePropertiesFilter to schedule different hypervisors within the same installation. The following links help you choose a hypervisor. See http://docs.openstack.org/developer/nova/support-matrix.html for a detailed list of features and support across the hypervisors.
The following hypervisors are supported:
- Xen (using libvirt): uses libvirt as a management interface into nova-compute to run Linux, Windows, FreeBSD, and NetBSD virtual machines.
- XenServer: requires installing the nova-compute service in a para-virtualized VM.
- Hyper-V: runs nova-compute natively on the Windows virtualization platform.
Compute uses the nova-scheduler service to determine how to dispatch compute requests. For example, the nova-scheduler service determines on which host a VM should launch.
In the context of filters, the term host
means a physical
node that has a nova-compute
service running on it.
You can configure the scheduler through a variety of options.
Compute is configured with the following default scheduler
options in the /etc/nova/nova.conf
file:
scheduler_driver_task_period = 60
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
By default, the scheduler_driver
is configured as a filter scheduler,
as described in the next section. In the default configuration,
this scheduler considers hosts that meet all the following criteria:
- Have not already been attempted for scheduling purposes (RetryFilter).
- Are in the requested availability zone (AvailabilityZoneFilter).
- Have sufficient RAM available (RamFilter).
- Have sufficient disk space available for root and ephemeral storage (DiskFilter).
- Can service the request (ComputeFilter).
- Satisfy the extra specs associated with the instance type (ComputeCapabilitiesFilter).
- Satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image (ImagePropertiesFilter).
- Are on a different host than other instances of a group, if requested (ServerGroupAntiAffinityFilter).
- Are in a set of group hosts, if requested (ServerGroupAffinityFilter).
The scheduler caches its list of available hosts; use the scheduler_driver_task_period option to specify how often the list is updated.
Note
Do not configure service_down_time
to be much smaller than
scheduler_driver_task_period
; otherwise, hosts appear to
be dead while the host list is being cached.
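A minimal sketch of the two options together (the values are illustrative, not defaults):

```ini
[DEFAULT]
# Refresh the cached host list every 60 seconds.
scheduler_driver_task_period = 60
# Keep service_down_time well above the refresh period so hosts
# are not reported as dead while the host list is being cached.
service_down_time = 720
```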
For information about the volume scheduler, see the Block Storage section of OpenStack Administrator Guide.
The scheduler chooses a new host when an instance is migrated.
When evacuating instances from a host, the scheduler service honors the target host defined by the administrator on the nova evacuate command. If a target is not defined by the administrator, the scheduler determines the target host. For information about instance evacuation, see Evacuate instances section of the OpenStack Administrator Guide.
The filter scheduler (nova.scheduler.filter_scheduler.FilterScheduler
)
is the default scheduler for scheduling virtual machine instances.
It supports filtering and weighting to make informed decisions on
where a new instance should be created.
When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to decide which hosts to use for that request, described in the Weights section.
Filtering
The scheduler_available_filters
configuration option in nova.conf
provides the Compute service with the list of the filters that are used
by the scheduler. The default setting specifies all of the filters that
are included with the Compute service:
scheduler_available_filters = nova.scheduler.filters.all_filters
This configuration option can be specified multiple times.
For example, if you implemented your own custom filter in Python called
myfilter.MyFilter
and you wanted to use both the built-in filters
and your custom filter, your nova.conf
file would contain:
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = myfilter.MyFilter
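The filtering contract itself is simple: each filter answers yes or no per host. The following self-contained sketch mimics that contract; HostState and MyFilter here are illustrative stand-ins, not nova's actual classes.

```python
# Illustrative stand-ins for the scheduler's filtering contract;
# these are not nova's real classes.
class HostState:
    def __init__(self, name, free_ram_mb):
        self.name = name
        self.free_ram_mb = free_ram_mb

class MyFilter:
    """Pass only hosts with at least min_ram MB of free RAM."""
    def host_passes(self, host_state, filter_properties):
        return host_state.free_ram_mb >= filter_properties.get("min_ram", 1024)

hosts = [HostState("node1", 512), HostState("node2", 2048)]
passing = [h.name for h in hosts if MyFilter().host_passes(h, {})]
# passing == ["node2"]
```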
The scheduler_default_filters
configuration option in nova.conf
defines the list of filters that are applied by the nova-scheduler
service. The default filters are:
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
The following sections describe the available compute filters.
Filters hosts by CPU core count with a per-aggregate
cpu_allocation_ratio
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and more than
one value is found, the minimum value will be used.
For information about how to use this filter,
see Host aggregates and availability zones. See also CoreFilter.
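The fallback rule described above can be sketched as follows (the function name is illustrative, not nova's code):

```python
def effective_ratio(aggregate_ratios, global_ratio):
    """Resolve an allocation ratio for a host: the minimum of all
    per-aggregate values, falling back to the global setting."""
    return min(aggregate_ratios) if aggregate_ratios else global_ratio

# A host in two aggregates with different values: the minimum wins.
print(effective_ratio([4.0, 2.0], 16.0))  # 2.0
# No per-aggregate value: the global setting applies.
print(effective_ratio([], 16.0))          # 16.0
```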
Filters hosts by disk allocation with a per-aggregate
disk_allocation_ratio
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and more than
one value is found, the minimum value will be used.
For information about how to use this filter,
see Host aggregates and availability zones. See also DiskFilter.
Matches properties defined in an image’s metadata against those of aggregates to determine host matches:
For example, the following aggregate myWinAgg
has the
Windows operating system as metadata (named ‘windows’):
$ nova aggregate-details MyWinAgg
+----+----------+-------------------+------------+---------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+----------+-------------------+------------+---------------+
| 1 | MyWinAgg | None | 'sf-devel' | 'os=windows' |
+----+----------+-------------------+------------+---------------+
In this example, because the following Win-2012 image has the
windows
property, it boots on the sf-devel
host
(all other filters being equal):
$ glance image-show Win-2012
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| Property 'os' | windows |
| checksum | f8a2eeee2dc65b3d9b6e63678955bd83 |
| container_format | ami |
| created_at | 2013-11-14T13:24:25 |
| ...
You can configure the AggregateImagePropertiesIsolation
filter by using the following options in the nova.conf
file:
# Considers only keys matching the given namespace (string).
# Multiple values can be given, as a comma-separated list.
aggregate_image_properties_isolation_namespace = <None>
# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator = .
Matches properties defined in extra specs for an instance type
against admin-defined properties on a host aggregate.
Works with specifications that are scoped with
aggregate_instance_extra_specs
.
Multiple values can be given, as a comma-separated list.
For backward compatibility, also works with non-scoped specifications;
this action is highly discouraged because it conflicts with
ComputeCapabilitiesFilter filter when you enable both filters.
For information about how to use this filter, see the
Host aggregates and availability zones section.
Filters hosts by I/O operations with a per-aggregate
max_io_ops_per_host
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and more than one
value is found, the minimum value will be used.
For information about how to use this filter,
see Host aggregates and availability zones. See also IoOpsFilter.
Ensures that the tenant (or list of tenants) creates all instances only
on specific Host aggregates and availability zones. If a host is in an aggregate that has
the filter_tenant_id
metadata key, the host creates instances from only
that tenant or list of tenants. A host can be in different aggregates. If a
host does not belong to an aggregate with the metadata key, the host can
create instances from all tenants. This setting does not isolate the
aggregate from other tenants. Any other tenant can continue to build
instances on the specified aggregate.
Filters hosts by number of instances with a per-aggregate
max_instances_per_host
value. If the per-aggregate value
is not found, the value falls back to the global setting.
If the host is in more than one aggregate and thus more than
one value is found, the minimum value will be used.
For information about how to use this filter, see Host aggregates and availability zones.
See also NumInstancesFilter.
Filters hosts by RAM allocation of instances with a per-aggregate
ram_allocation_ratio
value. If the per-aggregate value is not
found, the value falls back to the global setting.
If the host is in more than one aggregate and thus more than
one value is found, the minimum value will be used.
For information about how to use this filter, see Host aggregates and availability zones.
See also RamFilter.
This filter passes hosts if no instance_type
key is set or the
instance_type
aggregate metadata value contains the name of the
instance_type
requested. The value of the instance_type
metadata entry is a string that may contain either a single
instance_type
name or a comma-separated list of instance_type
names, such as m1.nano
or m1.nano,m1.small
.
For information about how to use this filter, see Host aggregates and availability zones.
See also TypeAffinityFilter.
This is a no-op filter. It does not eliminate any of the available hosts.
Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests.
Matches properties defined in extra specs for an instance type
against compute capabilities. If an extra specs key contains
a colon (:
), anything before the colon is treated as a namespace
and anything after the colon is treated as the key to be matched.
If a namespace is present and is not capabilities
, the filter
ignores the namespace. For backward compatibility, also treats the
extra specs key as the key to be matched if no namespace is present;
this action is highly discouraged because it conflicts with
AggregateInstanceExtraSpecsFilter filter when you enable both filters.
Passes all hosts that are operational and enabled.
In general, you should always enable this filter.
Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set, the scheduler might over-provision a host based on cores. For example, the virtual cores running on an instance may exceed the physical cores.
You can configure this filter to enable a fixed amount of vCPU
overcommitment by using the cpu_allocation_ratio
configuration
option in nova.conf
. The default setting is:
cpu_allocation_ratio = 16.0
With this setting, if a node has 8 vCPUs, the scheduler allows instances totaling up to 128 vCPUs to run on that node.
To disallow vCPU overcommitment set:
cpu_allocation_ratio = 1.0
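The arithmetic behind the ratio is straightforward; a small sketch (the function name is illustrative):

```python
def allowed_vcpus(physical_cores, cpu_allocation_ratio):
    """Total virtual cores the scheduler will place on a node."""
    return int(physical_cores * cpu_allocation_ratio)

print(allowed_vcpus(8, 16.0))  # 128 with the default ratio
print(allowed_vcpus(8, 1.0))   # 8 with overcommitment disabled
```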
Note
The Compute API always returns the actual number of CPU cores available
on a compute node regardless of the value of the cpu_allocation_ratio
configuration key. As a result changes to the cpu_allocation_ratio
are not reflected via the command line clients or the dashboard.
Changes to this configuration key are only taken into account internally
in the scheduler.
Schedules the instance on a different host from a set of instances.
To take advantage of this filter, the requester must pass a scheduler hint,
using different_host
as the key and a list of instance UUIDs as
the value. This filter is the opposite of the SameHostFilter
.
Using the nova command-line client, use the --hint
flag.
For example:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
--hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
--hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints
key. For example:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"different_host": [
"a0cf03a5-d921-4877-bb5c-86d26cf818e1",
"8c19174f-4220-44f0-824a-cd1eeef10287"
]
}
}
Only schedules instances on hosts if there is sufficient disk space available for root and ephemeral storage.
You can configure this filter to enable a fixed amount of disk
overcommitment by using the disk_allocation_ratio
configuration
option in the nova.conf
configuration file.
The default setting disables the possibility of the overcommitment
and allows launching a VM only if there is a sufficient amount of
disk space available on a host:
disk_allocation_ratio = 1.0
DiskFilter always considers the value of the disk_available_least property, not the free_disk_gb property, of a hypervisor's statistics:
$ nova hypervisor-stats
+----------------------+-------+
| Property | Value |
+----------------------+-------+
| count | 1 |
| current_workload | 0 |
| disk_available_least | 29 |
| free_disk_gb | 35 |
| free_ram_mb | 3441 |
| local_gb | 35 |
| local_gb_used | 0 |
| memory_mb | 3953 |
| memory_mb_used | 512 |
| running_vms | 0 |
| vcpus | 2 |
| vcpus_used | 0 |
+----------------------+-------+
As the command output above shows, the available disk space can be less than the free disk space. This happens because the disk_available_least property accounts for the virtual size rather than the actual size of images.
If you use an image format that is sparse or copy on write so that each
virtual instance does not require a 1:1 allocation of a virtual disk to a
physical storage, it may be useful to allow the overcommitment of disk space.
To enable scheduling instances while overcommitting disk resources on the
node, adjust the value of the disk_allocation_ratio
configuration
option to greater than 1.0
:
disk_allocation_ratio > 1.0
Note
If the value is set greater than 1, we recommend keeping track of the free disk space, because a value approaching 0 may result in incorrect functioning of the instances that are using the disk at that moment.
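A simplified sketch of the DiskFilter capacity check (this condenses the real logic; names are illustrative):

```python
def disk_filter_passes(disk_available_least_gb, requested_gb, ratio=1.0):
    """Simplified check: the requested root plus ephemeral disk must
    fit within the least available disk scaled by disk_allocation_ratio."""
    return requested_gb <= disk_available_least_gb * ratio

# Using disk_available_least = 29 from the hypervisor stats above:
print(disk_filter_passes(29, 20))       # True
print(disk_filter_passes(29, 35))       # False
print(disk_filter_passes(29, 35, 2.0))  # True with overcommitment
```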
Only schedules instances on hosts if the host has the exact number of CPU cores requested.
Only schedules instances on hosts if the host has the exact amount of disk space available.
Only schedules instances on hosts if the host has the exact amount of RAM available.
Note
This filter is deprecated in favor of ServerGroupAffinityFilter.
The GroupAffinityFilter ensures that an instance is scheduled on to a host
from a set of group hosts. To take advantage of this filter, the requester
must pass a scheduler hint, using group
as the key and an arbitrary name
as the value. Using the nova command-line client,
use the --hint
flag. For example:
$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1
This filter should not be enabled at the same time as GroupAntiAffinityFilter or neither filter will work properly.
Note
This filter is deprecated in favor of ServerGroupAntiAffinityFilter.
The GroupAntiAffinityFilter ensures that each instance in a group is on
a different host. To take advantage of this filter, the requester must
pass a scheduler hint, using group
as the key and an arbitrary name
as the value. Using the nova command-line client,
use the --hint
flag. For example:
$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1
This filter should not be enabled at the same time as GroupAffinityFilter or neither filter will work properly.
Filters hosts based on properties defined on the instance’s image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, hypervisor version (for Xen hypervisor type only), and virtual machine mode.
For example, an instance might require a host that runs an ARM-based processor, and QEMU as the hypervisor. You can decorate an image with these properties by using:
$ glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
The image properties that the filter checks for are:
- architecture: describes the machine architecture required by the image. Examples are i686, x86_64, arm, and ppc64.
- hypervisor_type: describes the hypervisor required by the image. Examples are xen, qemu, and xenapi.
Note
qemu
is used for both QEMU and KVM hypervisor types.
- hypervisor_version_requires: describes the hypervisor version required by the image. The property is supported for the Xen hypervisor type only. It can be used to enable support for multiple hypervisor versions, and to prevent instances with newer Xen tools from being provisioned on an older version of a hypervisor. If available, the property value is compared to the hypervisor version of the compute host.
To filter the hosts by the hypervisor version, add the
hypervisor_version_requires
property on the image as metadata and
pass an operator and a required hypervisor version as its value:
$ glance image-update img-uuid --property hypervisor_type=xen --property hypervisor_version_requires=">=4.3"
- vm_mode: describes the hypervisor application binary interface (ABI) required by the image. Examples are xen for Xen 3.0 paravirtual ABI, hvm for native ABI, uml for User Mode Linux paravirtual ABI, and exe for container virt executable ABI.
Allows the admin to define a special (isolated) set of images and a special
(isolated) set of hosts, such that the isolated images can only run on
the isolated hosts, and the isolated hosts can only run isolated images.
The flag restrict_isolated_hosts_to_isolated_images
can be used to
force isolated hosts to only run isolated images.
The admin must specify the isolated set of images and hosts in the
nova.conf
file using the isolated_hosts
and isolated_images
configuration options. For example:
isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
The IoOpsFilter filters hosts by concurrent I/O operations on it.
Hosts with too many concurrent I/O operations will be filtered out.
The max_io_ops_per_host
option specifies the maximum number of
I/O intensive instances allowed to run on a host.
A host will be ignored by the scheduler if more than
max_io_ops_per_host
instances in build, resize, snapshot,
migrate, rescue or unshelve task states are running on it.
The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The supported operators are: =, <, >, in, <=, >=, not, or, and.
The filter supports the following variables:
$free_ram_mb
$free_disk_mb
$total_usable_ram_mb
$vcpus_total
$vcpus_used
Using the nova command-line client, use the --hint
flag:
$ nova boot --image 827d564a-e636-4fc4-a376-d36f7ebe1747 \
--flavor 1 --hint query='[">=","$free_ram_mb",1024]' server1
With the API, use the os:scheduler_hints
key:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"query": "[>=,$free_ram_mb,1024]"
}
}
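The comparison at the heart of such a query can be sketched like this (a simplified evaluator; the real filter also supports in, not, or, and):

```python
import operator

# Map JsonFilter-style operator strings to Python comparisons.
OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def eval_query(query, host_facts):
    """Evaluate a single [op, "$variable", value] clause against
    a dict of host facts (illustrative sketch, not nova's code)."""
    op, var, value = query
    return OPS[op](host_facts[var.lstrip("$")], value)

print(eval_query([">=", "$free_ram_mb", 1024], {"free_ram_mb": 2048}))  # True
```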
Filters hosts based on the meters listed in weight_setting. Only hosts with the required meters available are passed, so that the metrics weigher does not fail on those hosts.
Filters hosts based on the NUMA topology that was specified for the
instance through the use of flavor extra_specs
in combination
with the image properties, as described in detail in the
related nova-spec document.
The filter tries to match the exact NUMA cells of the instance to those of the host. It considers the standard over-subscription limits for each host NUMA cell, and provides limits to the compute host accordingly.
Note
If the instance has no topology defined, it is considered for any host. If the instance has a topology defined, it is considered only for NUMA-capable hosts.
Hosts that have more instances running than specified by the
max_instances_per_host
option are filtered out when this filter
is in place.
The filter schedules instances on a host if the host has devices that
meet the device requests in the extra_specs
attribute for the flavor.
Only schedules instances on hosts that have sufficient RAM available. If this filter is not set, the scheduler may over provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).
You can configure this filter to enable a fixed amount of RAM
overcommitment by using the ram_allocation_ratio
configuration
option in nova.conf
. The default setting is:
ram_allocation_ratio = 1.5
This setting enables 1.5 GB instances to run on any compute node with 1 GB of free RAM.
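A simplified sketch of the RamFilter arithmetic (the real filter works from per-host state; names here are illustrative):

```python
def ram_filter_passes(total_mb, used_mb, requested_mb, ratio=1.5):
    """Simplified check: the requested RAM must fit within
    total RAM * ram_allocation_ratio minus RAM already consumed."""
    return requested_mb <= total_mb * ratio - used_mb

# A 4096 MB node with 3584 MB consumed still accepts a 1536 MB instance,
# because 4096 * 1.5 - 3584 = 2560 MB remains schedulable:
print(ram_filter_passes(4096, 3584, 1536))  # True
print(ram_filter_passes(4096, 3584, 4096))  # False
```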
Filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request.
This filter is only useful if the scheduler_max_attempts
configuration option is set to a value greater than zero.
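For example, to allow the scheduler to retry on other hosts (the value shown is illustrative):

```ini
[DEFAULT]
# Try up to 3 hosts for a request before giving up.
scheduler_max_attempts = 3
```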
Schedules the instance on the same host as another instance in a set
of instances. To take advantage of this filter, the requester must
pass a scheduler hint, using same_host
as the key and a
list of instance UUIDs as the value.
This filter is the opposite of the DifferentHostFilter
.
Using the nova command-line client, use the --hint
flag:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
--hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
--hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints
key:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"same_host": [
"a0cf03a5-d921-4877-bb5c-86d26cf818e1",
"8c19174f-4220-44f0-824a-cd1eeef10287"
]
}
}
The ServerGroupAffinityFilter ensures that an instance is scheduled
on to a host from a set of group hosts. To take advantage of this filter,
the requester must create a server group with an affinity
policy,
and pass a scheduler hint, using group
as the key and the server
group UUID as the value.
Using the nova command-line tool, use the --hint
flag.
For example:
$ nova server-group-create --policy affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1
The ServerGroupAntiAffinityFilter ensures that each instance in a group is
on a different host. To take advantage of this filter, the requester must
create a server group with an anti-affinity
policy, and pass a scheduler
hint, using group
as the key and the server group UUID as the value.
Using the nova command-line client, use the --hint
flag.
For example:
$ nova server-group-create --policy anti-affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1
Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:
- build_near_host_ip: the first IP address in the subnet (for example, 192.168.1.1)
- cidr: the CIDR that corresponds to the subnet (for example, /24)
flag.
For example, to specify the IP subnet 192.168.1.1/24
:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
--hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1
With the API, use the os:scheduler_hints
key:
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"build_near_host_ip": "192.168.1.1",
"cidr": "24"
}
}
Filters hosts based on their trust. Only passes hosts that meet the trust requirements specified in the instance properties.
Dynamically limits hosts to one instance type. An instance can be launched on a host only if no instances with a different instance type are running on it, or if the host has no running instances at all.
The following sections describe the available cell filters.
Schedules the instance on a different cell from a set of instances.
To take advantage of this filter, the requester must pass a scheduler hint,
using different_cell
as the key and a list of instance UUIDs as the value.
Filters cells based on properties defined on the instance's image. This filter works by specifying the hypervisor required in the image metadata and the supported hypervisor version in cell capabilities.
Filters target cells. This filter works by specifying a scheduler
hint of target_cell
. The value should be the full cell path.
When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when the customer requests a large number of identical instances, because a weight is computed for each requested instance.
All weights are normalized before being summed up; the host with the largest weight is given the highest priority.
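The normalize-then-sum step can be sketched as follows (an illustrative single-weigher example, not nova's code):

```python
def normalize(scores):
    """Scale raw weigher scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

# One weigher (free RAM) over three hosts; the multiplier scales
# the normalized scores before they are summed with other weighers.
free_ram_mb = {"node1": 1024, "node2": 4096, "node3": 2048}
hosts = sorted(free_ram_mb)
weights = dict(zip(hosts, normalize([free_ram_mb[h] for h in hosts])))
best = max(weights, key=weights.get)
print(best)  # node2
```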
Weighting hosts
If cells are used, cells are weighted by the scheduler in the same manner as hosts.
Hosts and cells are weighted based on the following options in
the /etc/nova/nova.conf
file:
Section | Option | Description
---|---|---
[DEFAULT] | ram_weight_multiplier | By default, the scheduler spreads instances across all hosts evenly. Set this option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.
[DEFAULT] | scheduler_host_subset_size | New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1; a value less than 1 is ignored, and 1 is used instead. Use an integer value.
[DEFAULT] | scheduler_weight_classes | Defaults to nova.scheduler.weights.all_weighers. Hosts are then weighted and sorted with the largest weight winning.
[DEFAULT] | io_ops_weight_multiplier | Multiplier used for weighing host I/O operations. A negative value means a preference to choose light workload compute hosts.
[DEFAULT] | soft_affinity_weight_multiplier | Multiplier used for weighing hosts for group soft-affinity. Only a positive value is meaningful. A negative value changes the behavior to the opposite, which is soft-anti-affinity.
[DEFAULT] | soft_anti_affinity_weight_multiplier | Multiplier used for weighing hosts for group soft-anti-affinity. Only a positive value is meaningful. A negative value changes the behavior to the opposite, which is soft-affinity.
[metrics] | weight_multiplier | Multiplier for weighting meters. Use a floating-point value.
[metrics] | weight_setting | Determines how meters are weighted. Use a comma-separated list of metricName=ratio pairs. For example, name1=1.0, name2=-1.0 results in: name1.value * 1.0 + name2.value * -1.0
[metrics] | required | Specifies how to treat unavailable meters: True raises an exception (use the MetricsFilter to filter out hosts with unavailable meters); False treats an unavailable meter as a negative factor in the weighting process (see weight_of_unavailable).
[metrics] | weight_of_unavailable | If required is set to False, and any one of the meters set by weight_setting is unavailable, the weight_of_unavailable value is returned to the scheduler.
For example:
[DEFAULT]
scheduler_host_subset_size = 1
scheduler_weight_classes = nova.scheduler.weights.all_weighers
ram_weight_multiplier = 1.0
io_ops_weight_multiplier = 2.0
soft_affinity_weight_multiplier = 1.0
soft_anti_affinity_weight_multiplier = 1.0
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.0
required = false
weight_of_unavailable = -10000.0
Section | Option | Description
---|---|---
[cells] | mute_weight_multiplier | Multiplier to weight mute children (hosts which have not sent capacity or capability updates for some time). Use a negative, floating-point value.
[cells] | offset_weight_multiplier | Multiplier to weight cells, so you can specify a preferred cell. Use a floating-point value.
[cells] | ram_weight_multiplier | By default, the scheduler spreads instances across all cells evenly. Set this option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.
[cells] | scheduler_weight_classes | Defaults to nova.cells.weights.all_weighers, which maps to all cell weighers included with Compute. Cells are then weighted and sorted with the largest weight winning.
For example:
[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
mute_weight_multiplier = -10.0
ram_weight_multiplier = 1.0
offset_weight_multiplier = 1.0
As an administrator, you work with the filter scheduler.
However, the Compute service also uses the Chance Scheduler,
nova.scheduler.chance.ChanceScheduler
,
which randomly selects from lists of filtered hosts.
It is possible to schedule VMs using advanced scheduling decisions.
These decisions are made based on enhanced usage statistics encompassing
data like memory cache utilization, memory bandwidth utilization,
or network bandwidth utilization. This is disabled by default.
The administrator can configure how the metrics are weighted in the
configuration file by using the weight_setting
configuration option
in the nova.conf
configuration file.
For example, to configure metric1 with ratio1 and metric2 with ratio2:
weight_setting = "metric1=ratio1, metric2=ratio2"
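Given such a setting, the metrics weigher multiplies each metric value by its ratio, sums the results, and scales by weight_multiplier; when required = false and a metric is missing, weight_of_unavailable is used instead. A minimal sketch of that calculation (not the actual nova weigher code):

```python
def metrics_weight(host_metrics, weight_setting, weight_multiplier=1.0,
                   required=True, weight_of_unavailable=-10000.0):
    """host_metrics: dict of metric name -> measured value.
    weight_setting: dict of metric name -> ratio (parsed from nova.conf)."""
    total = 0.0
    for name, ratio in weight_setting.items():
        if name not in host_metrics:
            if required:
                # required = true: an unavailable metric is an error
                raise ValueError("metric %s unavailable" % name)
            # required = false: fall back to the configured penalty weight
            return weight_of_unavailable
        total += host_metrics[name] * ratio
    return weight_multiplier * total

# weight_setting = "name1=1.0, name2=-1.0"
setting = {"name1": 1.0, "name2": -1.0}
print(metrics_weight({"name1": 7.0, "name2": 2.0}, setting))    # 5.0
print(metrics_weight({"name1": 7.0}, setting, required=False))  # -10000.0
```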
Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud, or a region of an OpenStack cloud, based on arbitrary characteristics. Examples where an administrator may want to do this include where a group of hosts have additional hardware or performance characteristics.
Host aggregates are not explicitly exposed to users. Instead, administrators map flavors to host aggregates. Administrators do this by setting metadata on a host aggregate and matching flavor extra specifications. The scheduler then endeavors to match user requests for instances of the given flavor to a host aggregate with the same key-value pair in its metadata. Compute nodes can be in more than one host aggregate.
Administrators are able to optionally expose a host aggregate as an availability zone. Availability zones are different from host aggregates in that they are explicitly exposed to the user, and hosts can only be in a single availability zone. Administrators can configure a default availability zone where instances will be scheduled when the user fails to specify one.
The nova command-line client supports the following aggregate-related commands.
nova aggregate-create <name> [availability-zone]
Create a new aggregate named <name>, and optionally in availability
zone [availability-zone] if specified. The command returns the ID of
the newly created aggregate. Hosts can be made available to multiple
host aggregates. Be careful when adding a host to an additional host
aggregate when the host is also in an availability zone. Pay attention
when using the nova aggregate-set-metadata and
nova aggregate-update commands to avoid user confusion when they
boot instances in different availability zones.
An error occurs if you try to add a host to an aggregate zone
for which it is not intended.
nova aggregate-delete <id>
Delete an aggregate with id <id>.
nova aggregate-details <id>
Show details of the aggregate with id <id>.
nova aggregate-add-host <id> <host>
Add host with name <host> to aggregate with id <id>.
nova aggregate-remove-host <id> <host>
Remove the host with name <host> from the aggregate with id <id>.
nova aggregate-set-metadata <id> <key=value> [<key=value> ...]
Add or update metadata (key-value pairs) associated with the aggregate with id <id>.
Note
.Note
Only administrators can access these commands. If you try to use
these commands and the user name and tenant that you use to access
the Compute service do not have the admin
role or the
appropriate privileges, these errors occur:
ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)
ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
One common use case for host aggregates is when you want to support scheduling instances to a subset of compute hosts because they have a specific capability. For example, you may want to allow users to request compute hosts that have SSD drives if they need access to faster disk I/O, or access to compute hosts that have GPU cards to take advantage of GPU-accelerated code.
To configure the scheduler to support host aggregates, the
scheduler_default_filters
configuration option must contain the
AggregateInstanceExtraSpecsFilter
in addition to the other
filters used by the scheduler. Add the following line to
/etc/nova/nova.conf
on the host that runs the nova-scheduler
service to enable host aggregates filtering, as well as the other
filters that are typically enabled:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
This example configures the Compute service to enable users to request
nodes that have solid-state drives (SSDs). You create a fast-io
host aggregate in the nova
availability zone and you add the
ssd=true
key-value pair to the aggregate. Then, you add the
node1 and node2 compute nodes to it.
$ nova aggregate-create fast-io nova
+----+---------+-------------------+-------+----------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+-------+----------+
| 1 | fast-io | nova | | |
+----+---------+-------------------+-------+----------+
$ nova aggregate-set-metadata 1 ssd=true
+----+---------+-------------------+-------+-------------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+-------+-------------------+
| 1 | fast-io | nova | [] | {u'ssd': u'true'} |
+----+---------+-------------------+-------+-------------------+
$ nova aggregate-add-host 1 node1
+----+---------+-------------------+------------+-------------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+------------+-------------------+
| 1 | fast-io | nova | [u'node1'] | {u'ssd': u'true'} |
+----+---------+-------------------+------------+-------------------+
$ nova aggregate-add-host 1 node2
+----+---------+-------------------+----------------------+-------------------+
| Id | Name | Availability Zone | Hosts | Metadata |
+----+---------+-------------------+----------------------+-------------------+
| 1 | fast-io | nova | [u'node1', u'node2'] | {u'ssd': u'true'} |
+----+---------+-------------------+----------------------+-------------------+
Use the nova flavor-create command to create the ssd.large
flavor with an ID of 6, 8 GB of RAM, an 80 GB root disk, and four vCPUs.
$ nova flavor-create ssd.large 6 8192 80 4
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6 | ssd.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Once the flavor is created, specify one or more key-value pairs that
match the key-value pairs on the host aggregates with scope
aggregate_instance_extra_specs
. In this case, that is the
aggregate_instance_extra_specs:ssd=true
key-value pair.
Setting a key-value pair on a flavor is done using the
nova flavor-key command.
$ nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true
Once it is set, you should see the extra_specs property of the
ssd.large flavor populated with a key of ssd and a corresponding
value of true.
$ nova flavor-show ssd.large
+----------------------------+--------------------------------------------------+
| Property | Value |
+----------------------------+--------------------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 80 |
| extra_specs | {u'aggregate_instance_extra_specs:ssd': u'true'} |
| id | 6 |
| name | ssd.large |
| os-flavor-access:is_public | True |
| ram | 8192 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 4 |
+----------------------------+--------------------------------------------------+
Now, when a user requests an instance with the ssd.large flavor,
the scheduler only considers hosts with the ssd=true key-value pair.
In this example, these are node1 and node2.
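The matching performed by AggregateInstanceExtraSpecsFilter can be sketched as follows: only flavor extra specs in the aggregate_instance_extra_specs scope are considered, and each one must match a metadata value on one of the host's aggregates. This is a simplified illustration; the real filter also supports operators such as <in> and <or>, and unscoped specs behave differently:

```python
SCOPE = "aggregate_instance_extra_specs:"

def host_passes(host_aggregate_metadata, flavor_extra_specs):
    """host_aggregate_metadata: merged key -> set of values from all
    aggregates the host belongs to (a host can be in several)."""
    for key, wanted in flavor_extra_specs.items():
        if not key.startswith(SCOPE):
            continue  # specs outside the scope are ignored in this sketch
        plain_key = key[len(SCOPE):]
        if wanted not in host_aggregate_metadata.get(plain_key, set()):
            return False
    return True

metadata = {"ssd": {"true"}}            # fast-io aggregate metadata
specs = {"aggregate_instance_extra_specs:ssd": "true"}
print(host_passes(metadata, specs))     # True: node1/node2 pass
print(host_passes({}, specs))           # False: hosts outside fast-io fail
```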
When using the XenAPI-based hypervisor, the Compute service uses host aggregates to manage XenServer Resource pools, which are used in supporting live migration.
The Compute scheduler configuration options are documented in the tables below.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
aggregate_image_properties_isolation_namespace = None |
(String) Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable. Valid values are strings. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled.
|
aggregate_image_properties_isolation_separator = . |
(String) When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used. It defaults to a period (‘.’). Valid values are strings. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘aggregate_image_properties_isolation’ filter is enabled.
|
baremetal_scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ExactRamFilter, ExactDiskFilter, ExactCoreFilter |
(List) This option specifies the filters used for filtering baremetal hosts. The value should be a list of strings, with each string being the name of a filter class to be used. When used, they will be applied in order, so place your most restrictive filters first to make the filtering process more efficient. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
cpu_allocation_ratio = 0.0 |
(Floating point) This option helps you specify virtual CPU to physical CPU allocation ratio which affects all CPU filters. This configuration specifies ratio for CoreFilter which can be set per compute node. For AggregateCoreFilter, it will fall back to this configuration value if no per-aggregate setting is found. Possible values:
NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 16.0. |
disk_allocation_ratio = 0.0 |
(Floating point) This option helps you specify virtual disk to physical disk allocation ratio used by the disk_filter.py script to determine if a host has sufficient disk space to fit a requested instance. A ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances. Possible values:
NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 1.0. |
disk_weight_multiplier = 1.0 |
(Floating point) Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread. |
io_ops_weight_multiplier = -1.0 |
(Floating point) This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops’ weigher is enabled. Valid values are numeric, either integer or float.
|
isolated_hosts = |
(List) If there is a need to restrict some images to only run on certain designated hosts, list those host names here. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled.
|
isolated_images = |
(List) If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled.
|
max_instances_per_host = 50 |
(Integer) If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The num_instances_filter will reject any host that has at least as many instances as this option’s value. Valid values are positive integers; setting it to zero will cause all hosts to be rejected if the num_instances_filter is active. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘num_instances_filter’ filter is enabled.
|
max_io_ops_per_host = 8 |
(Integer) This setting caps the number of instances on a host that can be actively performing IO (in a build, resize, snapshot, migrate, rescue, or unshelve task state) before that host becomes ineligible to build new instances. Valid values are positive integers: 1 or greater. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘io_ops_filter’ filter is enabled.
|
ram_allocation_ratio = 0.0 |
(Floating point) This option helps you specify virtual RAM to physical RAM allocation ratio which affects all RAM filters. This configuration specifies ratio for RamFilter which can be set per compute node. For AggregateRamFilter, it will fall back to this configuration value if no per-aggregate setting is found. Possible values:
NOTE: This can be set per-compute, or if set to 0.0, the value set on the scheduler node(s) or compute node(s) will be used and defaulted to 1.5. |
ram_weight_multiplier = 1.0 |
(Floating point) This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘ram’ weigher is enabled. Valid values are numeric, either integer or float.
|
reserved_host_disk_mb = 0 |
(Integer) Amount of disk resources in MB to make them always available to host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host. Possible values:
|
reserved_host_memory_mb = 512 |
(Integer) Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host. Possible values:
|
reserved_huge_pages = None |
(Unknown) Reserves a number of huge/large memory pages per NUMA host cell. Possible values:
|
restrict_isolated_hosts_to_isolated_images = True |
(Boolean) This setting determines if the scheduler’s isolated_hosts filter will allow non-isolated images on a host designated as an isolated host. When set to True (the default), non-isolated images will not be allowed to be built on isolated hosts. When False, non-isolated images can be built on both isolated and non-isolated hosts alike. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect. Also note that this setting only affects scheduling if the ‘IsolatedHostsFilter’ filter is enabled. Even then, this option doesn’t affect the behavior of requests for isolated images, which will always be restricted to isolated hosts.
|
scheduler_available_filters = ['nova.scheduler.filters.all_filters'] |
(Multi-valued) This is an unordered list of the filter classes the Nova scheduler may apply. Only the filters specified in the ‘scheduler_default_filters’ option will be used, but any filter appearing in that option must also be included in this list. By default, this is set to all filters that are included with Nova. If you wish to change this, replace this with a list of strings, where each element is the path to a filter. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter |
(List) This option is the list of filter class names that will be used for filtering hosts. The use of ‘default’ in the name of this option implies that other filters may sometimes be used, but that is not the case. These filters will be applied in the order they are listed, so place your most restrictive filters first to make the filtering process more efficient. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
scheduler_driver = filter_scheduler |
(String) The class of the driver used by the scheduler. This should be chosen from one of the entrypoints under the namespace ‘nova.scheduler.driver’ of file ‘setup.cfg’. If nothing is specified in this option, the ‘filter_scheduler’ is used. This option also supports deprecated full Python path to the class to be used. For example, “nova.scheduler.filter_scheduler.FilterScheduler”. But note: this support will be dropped in the N Release. Other options are:
|
scheduler_driver_task_period = 60 |
(Integer) This value controls how often (in seconds) to run periodic tasks in the scheduler. The specific tasks that are run for each period are determined by the particular scheduler being used. If this is larger than the nova-service ‘service_down_time’ setting, Nova may report the scheduler service as down. This is because the scheduler driver is responsible for sending a heartbeat and it will only do that as often as this option allows. As each scheduler can work a little differently than the others, be sure to test this with your selected scheduler.
|
scheduler_host_manager = host_manager |
(String) The scheduler host manager to use, which manages the in-memory picture of the hosts that the scheduler uses. The option value should be chosen from one of the entrypoints under the namespace ‘nova.scheduler.host_manager’ of file ‘setup.cfg’. For example, ‘host_manager’ is the default setting. Aside from the default, the only other option as of the Mitaka release is ‘ironic_host_manager’, which should be used if you’re using Ironic to provision bare-metal instances.
|
scheduler_host_subset_size = 1 |
(Integer) New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Valid values are 1 or greater. Any value less than one will be treated as 1. Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
scheduler_instance_sync_interval = 120 |
(Integer) Waiting time interval (seconds) between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova. If the CONF option scheduler_tracks_instance_changes is False, changing this option will have no effect. |
scheduler_json_config_location = |
(String) The absolute path to the scheduler configuration JSON file, if any. This file location is monitored by the scheduler for changes and reloads it if needed. It is converted from JSON to a Python data structure, and passed into the filtering and weighing functions of the scheduler, which can use it for dynamic configuration.
|
scheduler_manager = nova.scheduler.manager.SchedulerManager |
(String) DEPRECATED: Full class name for the Manager for scheduler |
scheduler_max_attempts = 3 |
(Integer) This is the maximum number of attempts that will be made to schedule an instance before it is assumed that the failures aren’t due to normal occasional race conflicts, but rather some other problem. When this is reached a MaxRetriesExceeded exception is raised, and the instance is set to an error state. Valid values are positive integers (1 or greater).
|
scheduler_topic = scheduler |
(String) This is the message queue topic that the scheduler ‘listens’ on. It is used when the scheduler service is started up to configure the queue, and whenever an RPC call to the scheduler is made. There is almost never any reason to ever change this value.
|
scheduler_tracks_instance_changes = True |
(Boolean) The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host. If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
scheduler_use_baremetal_filters = False |
(Boolean) Set this to True to tell the nova scheduler that it should use the filters specified in the ‘baremetal_scheduler_default_filters’ option. If you are not scheduling baremetal nodes, leave this at the default setting of False. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
scheduler_weight_classes = nova.scheduler.weights.all_weighers |
(List) This is a list of weigher class names. Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the previous weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is ‘scheduler_host_subset_size’. By default, this is set to all weighers that are included with Nova. If you wish to change this, replace this with a list of strings, where each element is the path to a weigher. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
soft_affinity_weight_multiplier = 1.0 |
(Floating point) Multiplier used for weighing hosts for group soft-affinity. Only a positive value is meaningful. Negative means that the behavior will change to the opposite, which is soft-anti-affinity. |
soft_anti_affinity_weight_multiplier = 1.0 |
(Floating point) Multiplier used for weighing hosts for group soft-anti-affinity. Only a positive value is meaningful. Negative means that the behavior will change to the opposite, which is soft-affinity. |
[cells] | |
ram_weight_multiplier = 10.0 |
(Floating point) RAM weight multiplier, used for weighing RAM. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell. Possible values:
|
scheduler_filter_classes = nova.cells.filters.all_filters |
(List) Scheduler filter classes. Filter classes the cells scheduler should use. An entry of “nova.cells.filters.all_filters” maps to all cells filters included with nova. As of the Mitaka release the following filter classes are available: Different cell filter: A scheduler hint of ‘different_cell’ with a value of a full cell name may be specified to route a build away from a particular cell. Image properties filter: Image metadata named ‘hypervisor_version_requires’ with a version specification may be specified to ensure the build goes to a cell which has hypervisors of the required version. If either the version requirement on the image or the hypervisor capability of the cell is not present, this filter returns without filtering out the cells. Target cell filter: A scheduler hint of ‘target_cell’ with a value of a full cell name may be specified to route a build to a particular cell. No error handling is done, as there is no way to know whether the full path is valid. As an admin user, you can also add a filter that directs builds to a particular cell. |
scheduler_retries = 10 |
(Integer) Scheduler retries How many retries when no cells are available. Specifies how many times the scheduler tries to launch a new instance when no cells are available. Possible values:
Related options:
|
scheduler_retry_delay = 2 |
(Integer) Scheduler retry delay. Specifies the delay (in seconds) between scheduling retries when no cell can be found to place the new instance on. If the instance cannot be scheduled to a cell after the number of attempts set by scheduler_retries, the build fails. Possible values:
Related options:
|
scheduler_weight_classes = nova.cells.weights.all_weighers |
(List) Scheduler weight classes. Weigher classes the cells scheduler should use. An entry of “nova.cells.weights.all_weighers” maps to all cell weighers included with nova. As of the Mitaka release the following weight classes are available: mute_child: Downgrades the likelihood of child cells being chosen for scheduling requests, which haven’t sent capacity or capability updates in a while. Options include mute_weight_multiplier (multiplier for mute children; value should be negative). ram_by_instance_type: Select cells with the most RAM capacity for the instance type being requested. Because higher weights win, Compute returns the number of available units for the instance type requested. The ram_weight_multiplier option defaults to 10.0, which scales the weight by a factor of 10. Use a negative number to stack VMs on one host instead of spreading out new VMs to more hosts in the cell. weight_offset: Allows modifying the database to weight a particular cell. The highest weight will be the first cell to be scheduled for launching an instance. When the weight_offset of a cell is set to 0, it is unlikely to be picked, but it could be picked if other cells have a lower weight, for example if they are full. And when the weight_offset is set to a very high value (for example, ‘999999999999999’), it is likely to be picked if other cells do not have a higher weight. |
[metrics] | |
required = True |
(Boolean) This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing. When this option is False, any metric being unavailable for a host will set the host weight to ‘weight_of_unavailable’. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
weight_multiplier = 1.0 |
(Floating point) When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows:
Valid values are numeric, either integer or float. This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
weight_of_unavailable = -10000.0 |
(Floating point) When any of the following conditions are met, this value will be used in place of any actual metric value:
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
weight_setting = |
(List) This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more ‘name=ratio’ pairs, separated by commas, where ‘name’ is the name of the metric to be weighed, and ‘ratio’ is the relative weight for that metric. Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the ‘weight_of_unavailable’ option. As an example, let’s consider the case where this option is set to:
The final weight will be:
This option is only used by the FilterScheduler and its subclasses; if you use a different scheduler, this option has no effect.
|
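The weight_setting format described above, a comma-separated series of ‘name=ratio’ pairs, can be parsed roughly as follows (a sketch, not the actual nova parsing code):

```python
def parse_weight_setting(value):
    """Parse 'name1=1.0, name2=-1.0' into {'name1': 1.0, 'name2': -1.0}."""
    settings = {}
    for pair in value.split(","):
        pair = pair.strip()
        if not pair:
            continue  # tolerate an empty setting or trailing comma
        name, _, ratio = pair.partition("=")
        settings[name.strip()] = float(ratio)
    return settings

print(parse_weight_setting("name1=1.0, name2=-1.0"))
# {'name1': 1.0, 'name2': -1.0}
```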
Cells
functionality enables you to scale an OpenStack Compute
cloud in a more distributed fashion without having to use complicated
technologies like database and message queue clustering.
It supports very large deployments.
When this functionality is enabled, the hosts in an OpenStack Compute
cloud are partitioned into groups called cells.
Cells are configured as a tree. The top-level cell should have a host
that runs a nova-api
service, but no nova-compute
services.
Each child cell should run all of the typical nova-*
services in
a regular Compute cloud except for nova-api
. You can think of
cells as a normal Compute deployment in that each cell has its own
database server and message queue broker.
The nova-cells
service handles communication between cells and
selects cells for new instances. This service is required for every
cell. Communication between cells is pluggable, and currently the
only option is communication through RPC.
Cells scheduling is separate from host scheduling.
nova-cells
first picks a cell. Once a cell is selected and the
new build request reaches its nova-cells
service, it is sent
over to the host scheduler in that cell and the build proceeds as
it would have without cells.
Warning
Cell functionality is currently considered experimental.
Cells are disabled by default. All cell-related configuration
options appear in the [cells]
section in nova.conf
.
The following cell-related options are currently supported:
enable
Set to True to turn on cell functionality. Default is false.
capabilities
List of arbitrary key=value pairs defining capabilities of the current cell. Values include hypervisor=xenserver;kvm,os=linux;windows.
scheduler_filter_classes
Specify filter classes. Set to nova.cells.filters.all_filters to map to all cells filters included with Compute.
scheduler_weight_classes
Specify weight classes. Set to nova.cells.weights.all_weighers to map to all cells weight algorithms included with Compute.
The cell type must be changed in the API cell so that requests can
be proxied through nova-cells down to the correct cell properly.
Edit the nova.conf
file in the API cell, and specify api
in the cell_type
key:
[DEFAULT]
compute_api_class=nova.compute.cells_api.ComputeCellsAPI
...
[cells]
cell_type = api
Edit the nova.conf
file in the child cells, and specify
compute
in the cell_type
key:
[DEFAULT]
# Disable quota checking in child cells. Let API cell do it exclusively.
quota_driver=nova.quota.NoopQuotaDriver
[cells]
cell_type = compute
Before bringing the services online, the database in each cell
needs to be configured with information about related cells.
In particular, the API cell needs to know about its immediate
children, and the child cells must know about their immediate
parents. The information needed is the RabbitMQ
server credentials
for the particular cell.
Use the nova-manage cell create command to add this information to the database in each cell:
# nova-manage cell create -h
usage: nova-manage cell create [-h] [--name <name>]
[--cell_type <parent|api|child|compute>]
[--username <username>] [--password <password>]
[--broker_hosts <broker_hosts>]
[--hostname <hostname>] [--port <number>]
[--virtual_host <virtual_host>]
[--woffset <float>] [--wscale <float>]
optional arguments:
-h, --help show this help message and exit
--name <name> Name for the new cell
--cell_type <parent|api|child|compute>
Whether the cell is parent/api or child/compute
--username <username>
Username for the message broker in this cell
--password <password>
Password for the message broker in this cell
--broker_hosts <broker_hosts>
Comma separated list of message brokers in this cell.
Each Broker is specified as hostname:port with both
mandatory. This option overrides the --hostname and
--port options (if provided).
--hostname <hostname>
Address of the message broker in this cell
--port <number> Port number of the message broker in this cell
--virtual_host <virtual_host>
The virtual host of the message broker in this cell
--woffset <float>
--wscale <float>
As an example, assume an API cell named api
and a child
cell named cell1
.
Within the api
cell, specify the following RabbitMQ
server information:
rabbit_host=10.0.0.10
rabbit_port=5672
rabbit_username=api_user
rabbit_password=api_passwd
rabbit_virtual_host=api_vhost
Within the cell1
child cell, specify the following
RabbitMQ
server information:
rabbit_host=10.0.1.10
rabbit_port=5673
rabbit_username=cell1_user
rabbit_password=cell1_passwd
rabbit_virtual_host=cell1_vhost
You can run this in the API cell as root:
# nova-manage cell create --name cell1 --cell_type child \
--username cell1_user --password cell1_passwd --hostname 10.0.1.10 \
--port 5673 --virtual_host cell1_vhost --woffset 1.0 --wscale 1.0
Repeat the previous steps for all child cells.
In the child cell, run the following, as root:
# nova-manage cell create --name api --cell_type parent \
--username api_user --password api_passwd --hostname 10.0.0.10 \
--port 5672 --virtual_host api_vhost --woffset 1.0 --wscale 1.0
To customize the Compute cells, use the configuration option settings documented in the table Description of cell configuration options.
To determine the best cell to use to launch a new instance,
Compute uses a set of filters and weights defined in the
/etc/nova/nova.conf
file. The following options are
available to prioritize cells for scheduling:

scheduler_filter_classes: List of filter classes. By default,
nova.cells.filters.all_filters
is specified, which maps to all cells filters included with Compute
(see the section called Filters).

scheduler_weight_classes: List of weight classes.
By default, nova.cells.weights.all_weighers
is specified,
which maps to all cell weight algorithms included with Compute.
The following modules are available:

mute_child: Downgrades the likelihood of child cells being chosen
for scheduling requests when they have not sent capacity or capability
updates in a while. Options include mute_weight_multiplier
(multiplier for mute children; the value should be negative).

ram_by_instance_type: Selects cells with the most RAM capacity
for the instance type being requested. Because higher weights win,
Compute returns the number of available units for the instance type
requested. The ram_weight_multiplier
option (default: 10.0) multiplies the weight by a factor of 10.
Use a negative number to stack VMs on one host instead of spreading
new VMs out to more hosts in the cell.

weight_offset: Allows modifying the database to weight a
particular cell. You can use this when you want to disable a
cell (for example, set it to 0), or to set a default cell by making its
weight_offset
very high (for example, 999999999999999).
The cell with the highest weight is the first cell scheduled for
launching an instance.

Additionally, the following options are available for the cell scheduler:

scheduler_retries: Specifies how many times the scheduler tries to launch
a new instance when no cells are available (default: 10).

scheduler_retry_delay: Specifies the delay, in seconds, between
scheduling retries (default: 2).
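The interplay of weight_offset and the RAM weigher described above can be sketched in a few lines. This is an illustration only, not nova's implementation; the free_units numbers are made up:

```python
# Illustrative sketch of cell weighting (not nova's implementation).
# Each weigher contributes to a cell's total weight; weight_offset is
# added directly, and the cell with the highest total wins.
RAM_WEIGHT_MULTIPLIER = 10.0  # default ram_weight_multiplier

cells = [
    # free_units = available units of the requested instance type (made up)
    {"name": "cell1", "weight_offset": 0.0, "free_units": 4},
    {"name": "cell2", "weight_offset": 0.0, "free_units": 9},
    {"name": "cell3", "weight_offset": 999999999999999.0, "free_units": 1},
]

def total_weight(cell):
    ram_weight = RAM_WEIGHT_MULTIPLIER * cell["free_units"]
    return cell["weight_offset"] + ram_weight

# cell3 is picked first: its huge weight_offset makes it the default cell.
best = max(cells, key=total_weight)
print(best["name"])
```

Without the weight_offset override, cell2 would win because it has the most free RAM units.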
As an admin user, you can also add a filter that directs builds to
a particular cell. The policy.json
file must have a line with
"cells_scheduler_filter:TargetCellFilter" : "is_admin:True"
to let an admin user specify a scheduler hint to direct a build to
a particular cell.
Cells store all inter-cell communication data, including user names
and passwords, in the database. Because the cells data is not updated
very frequently, you can instead use the cells_config
option in the [cells]
section to specify a JSON file to store cells data.
With this configuration,
the database is no longer consulted when reloading the cells data.
The file must have columns present in the Cell model (excluding
common database fields and the id
column). You must specify the
queue connection information through a transport_url
field,
instead of username
, password
, and so on.
The transport_url
has the following form:
rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
The scheme can only be rabbit
.
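The pieces of a transport_url correspond to the individual rabbit_* options shown earlier. A quick way to see how a URL decomposes, using only the Python standard library (the sample credentials are the cell1 values from above):

```python
from urllib.parse import urlsplit

# Sample transport_url built from the cell1 credentials shown earlier.
url = "rabbit://cell1_user:cell1_passwd@10.0.1.10:5673/cell1_vhost"
parts = urlsplit(url)

print(parts.scheme)            # rabbit        (the only supported scheme)
print(parts.username)          # cell1_user    -> rabbit_username
print(parts.password)          # cell1_passwd  -> rabbit_password
print(parts.hostname)          # 10.0.1.10     -> rabbit_host
print(parts.port)              # 5673          -> rabbit_port
print(parts.path.lstrip("/"))  # cell1_vhost   -> rabbit_virtual_host
```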
The following sample shows this optional configuration:
{
"parent": {
"name": "parent",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": true
},
"cell1": {
"name": "cell1",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit1.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": false
},
"cell2": {
"name": "cell2",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit2.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": false
}
}
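A small sanity check can catch missing fields in such a cells JSON file before the services reload it. This helper is hypothetical, not part of nova; the required field names are taken from the sample above:

```python
import json

# Hypothetical sanity check for a cells_config JSON file; the field
# names follow the sample configuration above.
REQUIRED_FIELDS = {"name", "api_url", "transport_url",
                   "weight_offset", "weight_scale", "is_parent"}

def check_cells_config(text):
    cells = json.loads(text)
    for key, cell in cells.items():
        missing = REQUIRED_FIELDS - set(cell)
        if missing:
            raise ValueError(f"cell {key!r} is missing: {sorted(missing)}")
        if not cell["transport_url"].startswith("rabbit://"):
            raise ValueError(f"cell {key!r}: the scheme can only be rabbit")
    return cells

sample = json.dumps({
    "cell1": {"name": "cell1",
              "api_url": "http://api.example.com:8774",
              "transport_url": "rabbit://rabbit1.example.com",
              "weight_offset": 0.0, "weight_scale": 1.0,
              "is_parent": False},
})
print(sorted(check_cells_config(sample)))  # ['cell1']
```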
The nova-conductor
service enables OpenStack to function
without compute nodes accessing the database.
Conceptually, it implements a new layer on top of nova-compute
.
It should not be deployed on compute nodes, or else the security
benefits of removing database access from nova-compute
are negated.
Just like other nova services such as nova-api
or
nova-scheduler
, it can be scaled horizontally.
You can run multiple instances of nova-conductor
on
different machines as needed for scaling purposes.
The methods exposed by nova-conductor
are relatively simple
methods used by nova-compute
to offload its database operations.
Places where nova-compute
previously performed database
access are now talking to nova-conductor
.
However, we have plans in the medium to long term to move more and more of
what is currently in nova-compute
up to the nova-conductor
layer.
The Compute service will start to look like a less intelligent
slave service to nova-conductor
.
The conductor service will implement long running complex operations,
ensuring forward progress and graceful error handling.
This will be especially beneficial for operations that cross multiple
compute nodes, such as migrations or resizes.
To customize the Conductor
, use the configuration option settings
documented in the table Description of conductor configuration options.
The corresponding log file of each Compute service is stored in the
/var/log/nova/
directory of the host on which each service runs.
Log file | Service name (CentOS/Fedora/openSUSE/Red Hat Enterprise Linux/SUSE Linux Enterprise) | Service name (Ubuntu/Debian)
---|---|---
nova-api.log | openstack-nova-api | nova-api
nova-cert.log [1] | openstack-nova-cert | nova-cert
nova-compute.log | openstack-nova-compute | nova-compute
nova-conductor.log | openstack-nova-conductor | nova-conductor
nova-consoleauth.log | openstack-nova-consoleauth | nova-consoleauth
nova-network.log [2] | openstack-nova-network | nova-network
nova-manage.log | nova-manage | nova-manage
nova-scheduler.log | openstack-nova-scheduler | nova-scheduler
Footnotes
[1] The X509 certificate service (openstack-nova-cert/nova-cert) is only required by the EC2 API to the Compute service.
[2] The nova network service (openstack-nova-network/nova-network) only runs in deployments that are not configured to use the Networking service (neutron).
The following sections describe the configuration options in the
nova.conf
file. You must copy the nova.conf
file to each
compute node. The sample nova.conf
files show examples of
specific configurations.
This example nova.conf
file configures a small private cloud
with cloud controller services, database server, and messaging
server on the same server. In this case, CONTROLLER_IP
represents
the IP address of a central server, BRIDGE_INTERFACE
represents
the bridge (such as br100), NETWORK_INTERFACE
represents an
interface to your VLAN setup, DB_PASSWORD_COMPUTE
represents your Compute (nova) database password,
and RABBIT_PASSWORD
represents the password to your message
queue installation.
[DEFAULT]
# LOGS/STATE
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf
# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# VOLUMES
# configured in cinder.conf
# COMPUTE
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True
# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130
# RABBITMQ
rabbit_host=192.168.206.130
# GLANCE
image_service=nova.image.glance.GlanceImageService
# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
# GLANCE
[glance]
api_servers=192.168.206.130:9292
# DATABASE
[database]
connection=mysql+pymysql://nova:yourpassword@192.168.206.130/nova
# LIBVIRT
[libvirt]
virt_type=qemu
This example nova.conf
file, from an internal Rackspace test
system, is used for demonstrations.
[DEFAULT]
# LOGS/STATE
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf
# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# VOLUMES
# configured in cinder.conf
# COMPUTE
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True
# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130
# RABBITMQ
rabbit_host=192.168.206.130
# GLANCE
image_service=nova.image.glance.GlanceImageService
# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
# GLANCE
[glance]
api_servers=192.168.206.130:9292
# DATABASE
[database]
connection=mysql+pymysql://nova:yourpassword@192.168.206.130/nova
# LIBVIRT
[libvirt]
virt_type=qemu
This example nova.conf
file is from an internal Rackspace test system.
verbose
nodaemon
network_manager=nova.network.manager.FlatManager
image_service=nova.image.glance.GlanceImageService
flat_network_bridge=xenbr0
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://<XenServer IP>
xenapi_connection_username=root
xenapi_connection_password=supersecret
xenapi_image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore
rescue_timeout=86400
use_ipv6=true
# To enable flat_injected, currently only works on Debian-based systems
flat_injected=true
ipv6_backend=account_identifier
ca_path=./nova/CA
# Add the following to your conf file if you're running on Ubuntu Maverick
xenapi_remap_vbd_dev=true
[database]
connection=mysql+pymysql://root:<password>@127.0.0.1/nova
Files in this section can be found in /etc/nova
.
The Compute service stores its API configuration settings in the
api-paste.ini
file.
############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta
[pipeline:meta]
pipeline = cors metaapp
[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory
#############
# OpenStack #
#############
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
# v21 is an exact feature match for v2, except it has more stringent
# input validation on the WSGI surface (prevents fuzzing early on the
# API). It also provides new features via API microversions, which
# clients can opt into. Unaware clients will receive the same frozen
# v2 API feature set, but with some relaxed validation
/v2: openstack_compute_api_v21_legacy_v2_compatible
/v2.1: openstack_compute_api_v21
[composite:openstack_compute_api_v21]
use = call:nova.api.auth:pipeline_factory_v21
noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit noauth2 osapi_compute_app_v21
keystone = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v21
[composite:openstack_compute_api_v21_legacy_v2_compatible]
use = call:nova.api.auth:pipeline_factory_v21
noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit noauth2 legacy_v2_compatible osapi_compute_app_v21
keystone = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit authtoken keystonecontext legacy_v2_compatible osapi_compute_app_v21
[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory
[filter:compute_req_id]
paste.filter_factory = nova.api.compute_req_id:ComputeReqIdMiddleware.factory
[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory
[filter:noauth2]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory
[filter:sizelimit]
paste.filter_factory = oslo_middleware:RequestBodySizeLimiter.factory
[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
[filter:legacy_v2_compatible]
paste.filter_factory = nova.api.openstack:LegacyV2CompatibleWrapper.factory
[app:osapi_compute_app_v21]
paste.app_factory = nova.api.openstack.compute:APIRouterV21.factory
[pipeline:oscomputeversions]
pipeline = faultwrap http_proxy_to_wsgi oscomputeversionapp
[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory
##########
# Shared #
##########
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = nova
[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
The policy.yaml
file defines additional access controls
that apply to the Compute service.
#
"os_compute_api:os-admin-actions:discoverable": "@"
#
"os_compute_api:os-admin-actions:reset_state": "rule:admin_api"
#
"os_compute_api:os-admin-actions:inject_network_info": "rule:admin_api"
#
"os_compute_api:os-admin-actions": "rule:admin_api"
#
"os_compute_api:os-admin-actions:reset_network": "rule:admin_api"
#
"os_compute_api:os-admin-password:discoverable": "@"
#
"os_compute_api:os-admin-password": "rule:admin_or_owner"
#
"os_compute_api:os-agents": "rule:admin_api"
#
"os_compute_api:os-agents:discoverable": "@"
#
"os_compute_api:os-aggregates:set_metadata": "rule:admin_api"
#
"os_compute_api:os-aggregates:add_host": "rule:admin_api"
#
"os_compute_api:os-aggregates:discoverable": "@"
#
"os_compute_api:os-aggregates:create": "rule:admin_api"
#
"os_compute_api:os-aggregates:remove_host": "rule:admin_api"
#
"os_compute_api:os-aggregates:update": "rule:admin_api"
#
"os_compute_api:os-aggregates:index": "rule:admin_api"
#
"os_compute_api:os-aggregates:delete": "rule:admin_api"
#
"os_compute_api:os-aggregates:show": "rule:admin_api"
#
"os_compute_api:os-assisted-volume-snapshots:create": "rule:admin_api"
#
"os_compute_api:os-assisted-volume-snapshots:delete": "rule:admin_api"
#
"os_compute_api:os-assisted-volume-snapshots:discoverable": "@"
#
"os_compute_api:os-attach-interfaces": "rule:admin_or_owner"
#
"os_compute_api:os-attach-interfaces:discoverable": "@"
# Controls who can attach an interface to an instance
"os_compute_api:os-attach-interfaces:create": "rule:admin_or_owner"
# Controls who can detach an interface from an instance
"os_compute_api:os-attach-interfaces:delete": "rule:admin_or_owner"
#
"os_compute_api:os-availability-zone:list": "rule:admin_or_owner"
#
"os_compute_api:os-availability-zone:discoverable": "@"
#
"os_compute_api:os-availability-zone:detail": "rule:admin_api"
#
"os_compute_api:os-baremetal-nodes:discoverable": "@"
#
"os_compute_api:os-baremetal-nodes": "rule:admin_api"
#
"context_is_admin": "role:admin"
#
"admin_or_owner": "is_admin:True or project_id:%(project_id)s"
#
"admin_api": "is_admin:True"
#
"network:attach_external_network": "is_admin:True"
#
"os_compute_api:os-block-device-mapping:discoverable": "@"
#
"os_compute_api:os-block-device-mapping-v1:discoverable": "@"
#
"os_compute_api:os-cells:discoverable": "@"
#
"os_compute_api:os-cells:update": "rule:admin_api"
#
"os_compute_api:os-cells:create": "rule:admin_api"
#
"os_compute_api:os-cells": "rule:admin_api"
#
"os_compute_api:os-cells:sync_instances": "rule:admin_api"
#
"os_compute_api:os-cells:delete": "rule:admin_api"
#
"cells_scheduler_filter:DifferentCellFilter": "is_admin:True"
#
"cells_scheduler_filter:TargetCellFilter": "is_admin:True"
#
"os_compute_api:os-certificates:discoverable": "@"
#
"os_compute_api:os-certificates:create": "rule:admin_or_owner"
#
"os_compute_api:os-certificates:show": "rule:admin_or_owner"
#
"os_compute_api:os-cloudpipe": "rule:admin_api"
#
"os_compute_api:os-cloudpipe:discoverable": "@"
#
"os_compute_api:os-config-drive:discoverable": "@"
#
"os_compute_api:os-config-drive": "rule:admin_or_owner"
#
"os_compute_api:os-console-auth-tokens:discoverable": "@"
#
"os_compute_api:os-console-auth-tokens": "rule:admin_api"
#
"os_compute_api:os-console-output:discoverable": "@"
#
"os_compute_api:os-console-output": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:create": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:show": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:delete": "rule:admin_or_owner"
#
"os_compute_api:os-consoles:discoverable": "@"
#
"os_compute_api:os-consoles:index": "rule:admin_or_owner"
#
"os_compute_api:os-create-backup:discoverable": "@"
#
"os_compute_api:os-create-backup": "rule:admin_or_owner"
#
"os_compute_api:os-deferred-delete:discoverable": "@"
#
"os_compute_api:os-deferred-delete": "rule:admin_or_owner"
#
"os_compute_api:os-evacuate:discoverable": "@"
#
"os_compute_api:os-evacuate": "rule:admin_api"
#
"os_compute_api:os-extended-availability-zone": "rule:admin_or_owner"
#
"os_compute_api:os-extended-availability-zone:discoverable": "@"
#
"os_compute_api:os-extended-server-attributes": "rule:admin_api"
#
"os_compute_api:os-extended-server-attributes:discoverable": "@"
#
"os_compute_api:os-extended-status:discoverable": "@"
#
"os_compute_api:os-extended-status": "rule:admin_or_owner"
#
"os_compute_api:os-extended-volumes": "rule:admin_or_owner"
#
"os_compute_api:os-extended-volumes:discoverable": "@"
#
"os_compute_api:extension_info:discoverable": "@"
#
"os_compute_api:extensions": "rule:admin_or_owner"
#
"os_compute_api:extensions:discoverable": "@"
#
"os_compute_api:os-fixed-ips:discoverable": "@"
#
"os_compute_api:os-fixed-ips": "rule:admin_api"
#
"os_compute_api:os-flavor-access:add_tenant_access": "rule:admin_api"
#
"os_compute_api:os-flavor-access:discoverable": "@"
#
"os_compute_api:os-flavor-access:remove_tenant_access": "rule:admin_api"
#
"os_compute_api:os-flavor-access": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-extra-specs:show": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-extra-specs:create": "rule:admin_api"
#
"os_compute_api:os-flavor-extra-specs:discoverable": "@"
#
"os_compute_api:os-flavor-extra-specs:update": "rule:admin_api"
#
"os_compute_api:os-flavor-extra-specs:delete": "rule:admin_api"
#
"os_compute_api:os-flavor-extra-specs:index": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-manage": "rule:admin_api"
#
"os_compute_api:os-flavor-manage:discoverable": "@"
#
"os_compute_api:os-flavor-rxtx": "rule:admin_or_owner"
#
"os_compute_api:os-flavor-rxtx:discoverable": "@"
#
"os_compute_api:flavors:discoverable": "@"
#
"os_compute_api:flavors": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ip-dns": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ip-dns:domain:update": "rule:admin_api"
#
"os_compute_api:os-floating-ip-dns:discoverable": "@"
#
"os_compute_api:os-floating-ip-dns:domain:delete": "rule:admin_api"
#
"os_compute_api:os-floating-ip-pools:discoverable": "@"
#
"os_compute_api:os-floating-ip-pools": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ips": "rule:admin_or_owner"
#
"os_compute_api:os-floating-ips:discoverable": "@"
#
"os_compute_api:os-floating-ips-bulk:discoverable": "@"
#
"os_compute_api:os-floating-ips-bulk": "rule:admin_api"
#
"os_compute_api:os-fping:all_tenants": "rule:admin_api"
#
"os_compute_api:os-fping:discoverable": "@"
#
"os_compute_api:os-fping": "rule:admin_or_owner"
#
"os_compute_api:os-hide-server-addresses:discoverable": "@"
#
"os_compute_api:os-hide-server-addresses": "is_admin:False"
#
"os_compute_api:os-hosts:discoverable": "@"
#
"os_compute_api:os-hosts": "rule:admin_api"
#
"os_compute_api:os-hypervisors:discoverable": "@"
#
"os_compute_api:os-hypervisors": "rule:admin_api"
#
"os_compute_api:image-metadata:discoverable": "@"
#
"os_compute_api:image-size:discoverable": "@"
#
"os_compute_api:image-size": "rule:admin_or_owner"
#
"os_compute_api:images:discoverable": "@"
#
"os_compute_api:os-instance-actions:events": "rule:admin_api"
#
"os_compute_api:os-instance-actions": "rule:admin_or_owner"
#
"os_compute_api:os-instance-actions:discoverable": "@"
#
"os_compute_api:os-instance-usage-audit-log": "rule:admin_api"
#
"os_compute_api:os-instance-usage-audit-log:discoverable": "@"
#
"os_compute_api:ips:discoverable": "@"
#
"os_compute_api:ips:show": "rule:admin_or_owner"
#
"os_compute_api:ips:index": "rule:admin_or_owner"
#
"os_compute_api:os-keypairs:discoverable": "@"
#
"os_compute_api:os-keypairs:index": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs:create": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs:delete": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs:show": "rule:admin_api or user_id:%(user_id)s"
#
"os_compute_api:os-keypairs": "rule:admin_or_owner"
#
"os_compute_api:limits:discoverable": "@"
#
"os_compute_api:limits": "rule:admin_or_owner"
#
"os_compute_api:os-lock-server:discoverable": "@"
#
"os_compute_api:os-lock-server:lock": "rule:admin_or_owner"
#
"os_compute_api:os-lock-server:unlock:unlock_override": "rule:admin_api"
#
"os_compute_api:os-lock-server:unlock": "rule:admin_or_owner"
#
"os_compute_api:os-migrate-server:migrate": "rule:admin_api"
#
"os_compute_api:os-migrate-server:discoverable": "@"
#
"os_compute_api:os-migrate-server:migrate_live": "rule:admin_api"
#
"os_compute_api:os-migrations:index": "rule:admin_api"
#
"os_compute_api:os-migrations:discoverable": "@"
#
"os_compute_api:os-multinic": "rule:admin_or_owner"
#
"os_compute_api:os-multinic:discoverable": "@"
#
"os_compute_api:os-multiple-create:discoverable": "@"
#
"os_compute_api:os-networks:discoverable": "@"
#
"os_compute_api:os-networks": "rule:admin_api"
#
"os_compute_api:os-networks:view": "rule:admin_or_owner"
#
"os_compute_api:os-networks-associate": "rule:admin_api"
#
"os_compute_api:os-networks-associate:discoverable": "@"
#
"os_compute_api:os-pause-server:unpause": "rule:admin_or_owner"
#
"os_compute_api:os-pause-server:discoverable": "@"
#
"os_compute_api:os-pause-server:pause": "rule:admin_or_owner"
#
"os_compute_api:os-pci:index": "rule:admin_api"
#
"os_compute_api:os-pci:detail": "rule:admin_api"
#
"os_compute_api:os-pci:pci_servers": "rule:admin_or_owner"
#
"os_compute_api:os-pci:show": "rule:admin_api"
#
"os_compute_api:os-pci:discoverable": "@"
#
"os_compute_api:os-quota-class-sets:show": "is_admin:True or quota_class:%(quota_class)s"
#
"os_compute_api:os-quota-class-sets:discoverable": "@"
#
"os_compute_api:os-quota-class-sets:update": "rule:admin_api"
#
"os_compute_api:os-quota-sets:update": "rule:admin_api"
#
"os_compute_api:os-quota-sets:defaults": "@"
#
"os_compute_api:os-quota-sets:show": "rule:admin_or_owner"
#
"os_compute_api:os-quota-sets:delete": "rule:admin_api"
#
"os_compute_api:os-quota-sets:discoverable": "@"
#
"os_compute_api:os-quota-sets:detail": "rule:admin_api"
#
"os_compute_api:os-remote-consoles": "rule:admin_or_owner"
#
"os_compute_api:os-remote-consoles:discoverable": "@"
#
"os_compute_api:os-rescue:discoverable": "@"
#
"os_compute_api:os-rescue": "rule:admin_or_owner"
#
"os_compute_api:os-scheduler-hints:discoverable": "@"
#
"os_compute_api:os-security-group-default-rules:discoverable": "@"
#
"os_compute_api:os-security-group-default-rules": "rule:admin_api"
#
"os_compute_api:os-security-groups": "rule:admin_or_owner"
#
"os_compute_api:os-security-groups:discoverable": "@"
#
"os_compute_api:os-server-diagnostics": "rule:admin_api"
#
"os_compute_api:os-server-diagnostics:discoverable": "@"
#
"os_compute_api:os-server-external-events:create": "rule:admin_api"
#
"os_compute_api:os-server-external-events:discoverable": "@"
#
"os_compute_api:os-server-groups:discoverable": "@"
#
"os_compute_api:os-server-groups": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:index": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:show": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:create": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:discoverable": "@"
#
"os_compute_api:server-metadata:update_all": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:delete": "rule:admin_or_owner"
#
"os_compute_api:server-metadata:update": "rule:admin_or_owner"
#
"os_compute_api:os-server-password": "rule:admin_or_owner"
#
"os_compute_api:os-server-password:discoverable": "@"
#
"os_compute_api:os-server-tags:delete_all": "@"
#
"os_compute_api:os-server-tags:index": "@"
#
"os_compute_api:os-server-tags:update_all": "@"
#
"os_compute_api:os-server-tags:delete": "@"
#
"os_compute_api:os-server-tags:update": "@"
#
"os_compute_api:os-server-tags:show": "@"
#
"os_compute_api:os-server-tags:discoverable": "@"
#
"os_compute_api:os-server-usage": "rule:admin_or_owner"
#
"os_compute_api:os-server-usage:discoverable": "@"
#
"os_compute_api:servers:index": "rule:admin_or_owner"
#
"os_compute_api:servers:detail": "rule:admin_or_owner"
#
"os_compute_api:servers:detail:get_all_tenants": "rule:admin_api"
#
"os_compute_api:servers:index:get_all_tenants": "rule:admin_api"
#
"os_compute_api:servers:show": "rule:admin_or_owner"
#
"os_compute_api:servers:show:host_status": "rule:admin_api"
#
"os_compute_api:servers:create": "rule:admin_or_owner"
#
"os_compute_api:servers:create:forced_host": "rule:admin_api"
#
"os_compute_api:servers:create:attach_volume": "rule:admin_or_owner"
#
"os_compute_api:servers:create:attach_network": "rule:admin_or_owner"
#
"os_compute_api:servers:delete": "rule:admin_or_owner"
#
"os_compute_api:servers:update": "rule:admin_or_owner"
#
"os_compute_api:servers:confirm_resize": "rule:admin_or_owner"
#
"os_compute_api:servers:revert_resize": "rule:admin_or_owner"
#
"os_compute_api:servers:reboot": "rule:admin_or_owner"
#
"os_compute_api:servers:resize": "rule:admin_or_owner"
#
"os_compute_api:servers:rebuild": "rule:admin_or_owner"
#
"os_compute_api:servers:create_image": "rule:admin_or_owner"
#
"os_compute_api:servers:create_image:allow_volume_backed": "rule:admin_or_owner"
#
"os_compute_api:servers:start": "rule:admin_or_owner"
#
"os_compute_api:servers:stop": "rule:admin_or_owner"
#
"os_compute_api:servers:trigger_crash_dump": "rule:admin_or_owner"
#
"os_compute_api:servers:discoverable": "@"
#
"os_compute_api:servers:migrations:show": "rule:admin_api"
#
"os_compute_api:servers:migrations:force_complete": "rule:admin_api"
#
"os_compute_api:servers:migrations:delete": "rule:admin_api"
#
"os_compute_api:servers:migrations:index": "rule:admin_api"
#
"os_compute_api:server-migrations:discoverable": "@"
#
"os_compute_api:os-services": "rule:admin_api"
#
"os_compute_api:os-services:discoverable": "@"
#
"os_compute_api:os-shelve:shelve": "rule:admin_or_owner"
#
"os_compute_api:os-shelve:unshelve": "rule:admin_or_owner"
#
"os_compute_api:os-shelve:shelve_offload": "rule:admin_api"
#
"os_compute_api:os-shelve:discoverable": "@"
#
"os_compute_api:os-simple-tenant-usage:show": "rule:admin_or_owner"
#
"os_compute_api:os-simple-tenant-usage:list": "rule:admin_api"
#
"os_compute_api:os-simple-tenant-usage:discoverable": "@"
#
"os_compute_api:os-suspend-server:resume": "rule:admin_or_owner"
#
"os_compute_api:os-suspend-server:suspend": "rule:admin_or_owner"
#
"os_compute_api:os-suspend-server:discoverable": "@"
#
"os_compute_api:os-tenant-networks": "rule:admin_or_owner"
#
"os_compute_api:os-tenant-networks:discoverable": "@"
#
"os_compute_api:os-used-limits:discoverable": "@"
#
"os_compute_api:os-used-limits": "rule:admin_api"
#
"os_compute_api:os-user-data:discoverable": "@"
#
"os_compute_api:versions:discoverable": "@"
#
"os_compute_api:os-virtual-interfaces:discoverable": "@"
#
"os_compute_api:os-virtual-interfaces": "rule:admin_or_owner"
#
"os_compute_api:os-volumes:discoverable": "@"
#
"os_compute_api:os-volumes": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:index": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:create": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:show": "rule:admin_or_owner"
#
"os_compute_api:os-volumes-attachments:discoverable": "@"
#
"os_compute_api:os-volumes-attachments:update": "rule:admin_api"
#
"os_compute_api:os-volumes-attachments:delete": "rule:admin_or_owner"
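To illustrate the semantics of a rule such as "admin_or_owner": "is_admin:True or project_id:%(project_id)s": it passes when the request credentials belong to an admin, or when the credentials' project matches the target's project. The real evaluation is done by the oslo.policy engine; this hand-rolled check is only an illustration:

```python
# Hand-rolled illustration of the admin_or_owner rule; the real check
# is performed by oslo.policy, not by this function.
def admin_or_owner(creds, target):
    # "is_admin:True or project_id:%(project_id)s"
    return bool(creds.get("is_admin")) or \
        creds.get("project_id") == target.get("project_id")

print(admin_or_owner({"is_admin": True, "project_id": "a"}, {"project_id": "b"}))   # True
print(admin_or_owner({"is_admin": False, "project_id": "a"}, {"project_id": "a"}))  # True
print(admin_or_owner({"is_admin": False, "project_id": "a"}, {"project_id": "b"}))  # False
```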
The rootwrap.conf
file defines configuration values
used by the rootwrap script when the Compute service needs
to escalate its privileges to those of the root user.
It is also possible to disable the root wrapper and default
to sudo only. To do so, configure the disable_rootwrap
option in the
[workarounds]
section of the nova.conf
configuration file.
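As a minimal sketch, assuming a nova release where this section is named [workarounds], the relevant nova.conf fragment would look like:

```ini
[workarounds]
# Bypass the nova-rootwrap helper and rely on plain sudo instead.
disable_rootwrap = True
```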
# Configuration for nova-rootwrap
# This file should be owned by (and only-writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/sbin,/usr/local/bin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
The following options are available to configure and customize the behavior of your Dashboard installation.
The following options are included in the HORIZON_CONFIG
dictionary.
Note
Dashboards are automatically discovered in two ways:
openstack_dashboard/local/enabled
directory. This is the default
way.INSTALLED_APPS
and importing
any files that have the name dashboard.py
and include code to
register themselves as a Dashboard.Warning
In Dashboard configuration, we suggest that you do not use the
dashboards
and default_dashboard
settings. If you plan on having
more than one dashboard, please specify their order using the
Pluggable settings.
Configuration option = Default value | Description |
---|---|
ajax_queue_limit = 10 |
The maximum number of simultaneous AJAX connections the dashboard may try to make. |
ajax_poll_interval = 2500 |
How frequently resources in transition states should be polled for updates. Expressed in milliseconds. |
angular_modules = [] |
A list of AngularJS modules to be loaded when Angular bootstraps. |
auto_fade_alerts = {'delay': [3000], 'fade_duration': [1500],
'types': []} |
If provided, will auto-fade the alert types specified. Valid alert
types include alert-default , alert-success , alert-info ,
alert-warning , alert-danger . Can also define the delay before
the alert fades and the fade out duration. |
bug_url = None |
Displays a “Report Bug” link in the site header which links to the value of this setting, ideally a URL containing information on how to report issues. |
dashboards = None |
If a list of dashboard slugs is provided in this setting, the
supplied ordering is applied to the list of discovered dashboards. |
default_dashboard = None |
The slug of the dashboard which should act as the fallback dashboard whenever a user logs in or is otherwise redirected to an ambiguous location. |
disable_password_reveal = False |
Setting this to True will disable the reveal button for password
fields, including on the login form. |
exceptions = {'unauthorized': [], 'not_found': [],
'recoverable': []} |
Classes of exceptions which the Dashboard’s centralized exception handling should be aware of. |
help_url = None |
Displays a “Help” link in the site header which links to the value of this setting, ideally a URL containing help information. |
js_files = [] |
A list of javascript source files to be included in the compressed set of files that are loaded on every page. |
js_spec_files = [] |
A list of JavaScript spec files to include for integration with the Jasmine spec runner. |
modal_backdrop = static |
Controls how bootstrap backdrop element outside of modals looks
and feels. Valid values are true , false and static . |
password_autocomplete = off |
Controls whether browser autocompletion should be enabled on the
login form. Valid values are on and off . |
password_validator = {'regex': '.*',
'help_text': _("Password is not accepted")} |
A dictionary, containing a regular expression used for password validation and help text, which will be displayed if the password does not pass validation. The help text should describe the password requirements if there are any. |
simple_ip_management = True |
Enable or disable simplified floating IP address management. |
user_home = settings.LOGIN_REDIRECT_URL |
Either a literal URL path, such as the default, or Python’s dotted string notation representing a function which evaluates the URL the user should be redirected to based on the attributes of the user. |
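Putting a few of these options together, a hypothetical fragment of openstack_dashboard/local/local_settings.py might look like the following. The values shown are illustrative, not recommendations:

```python
# Fragment of openstack_dashboard/local/local_settings.py (illustrative values).
HORIZON_CONFIG = {
    # Poll transitioning resources every 2000 ms instead of the 2500 ms default.
    'ajax_poll_interval': 2000,
    # Cap the number of simultaneous AJAX connections from the dashboard.
    'ajax_queue_limit': 10,
    # Link the "Report Bug" header item to a hypothetical tracker URL.
    'bug_url': 'https://bugs.example.org/horizon',
    # Hide the reveal button on password fields, including the login form.
    'disable_password_reveal': True,
    # Require passwords of at least 8 characters (example policy only).
    'password_validator': {
        'regex': '.{8,}',
        'help_text': 'Password must be at least 8 characters long.',
    },
}
```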
The following table shows a few key Django settings you should be aware of for the most basic of deployments.
Warning
This is not meant to be anywhere near a complete list of settings for Django. You should always consult the main Django documentation, especially with regards to deployment considerations and security best-practices.
Configuration option = Default value | Description |
---|---|
ALLOWED_HOSTS = ['localhost'] |
List of names or IP addresses of the hosts running the dashboard. |
DEBUG and TEMPLATE_DEBUG = True |
Controls whether unhandled exceptions should generate a generic
500 response or present the user with a pretty-formatted debug
information page. |
SECRET_KEY |
A unique and secret value for your deployment. Unless you are running a load-balancer with multiple Dashboard installations behind it, each Dashboard instance should have a unique secret key. |
SECURE_PROXY_SSL_HEADER , CSRF_COOKIE_SECURE
and SESSION_COOKIE_SECURE |
These three should be configured if you are deploying the Dashboard
with SSL. The values indicated in the default
openstack_dashboard/local/local_settings.py.example file
are generally safe to use. When CSRF_COOKIE_SECURE or
SESSION_COOKIE_SECURE are set to True , these attributes help
protect the session cookies from cross-site scripting. |
ADD_INSTALLED_APPS |
A list of Django applications to be prepended to the
INSTALLED_APPS setting. Allows extending the list of installed
applications without having to override it completely. |
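A sketch combining these Django settings for a basic SSL deployment follows; the hostname is a placeholder, and the SECRET_KEY handling is one possible approach, not the only one:

```python
# Illustrative Django settings for a Dashboard deployment.
import os

# Hosts allowed to serve the dashboard; 'dashboard.example.com' is a placeholder.
ALLOWED_HOSTS = ['dashboard.example.com', 'localhost']

# Never leave debug pages enabled in production.
DEBUG = False
TEMPLATE_DEBUG = DEBUG

# Unique per Dashboard instance unless load-balanced installations share sessions.
SECRET_KEY = os.environ.get('HORIZON_SECRET_KEY', 'change-me')

# Only set these when deploying the Dashboard behind SSL.
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```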
The following settings inform the Dashboard of information about the other OpenStack projects which are part of the same cloud and control the behavior of specific dashboards, panels, API calls, and so on.
Configuration option = Default | Description |
---|---|
AUTHENTICATION_URLS = ['openstack_auth.urls'] |
A list of modules from which to collate authentication URLs. |
API_RESULT_LIMIT = 1000 |
The maximum number of objects, for example Glance images to display on a single page before providing a paging element to paginate the results. |
API_RESULT_PAGE_SIZE = 20 |
Similar to API_RESULT_LIMIT . This setting controls the number
of items to be shown per page when the API supports
pagination. |
AVAILABLE_REGIONS = None |
A list of tuples which defines multiple regions. |
AVAILABLE_THEMES = [ ('default', 'Default',
'themes/default'), ('material', 'Material',
'themes/material') ] |
Configure this setting to tell horizon which theme to use. Horizon
contains two pre-configured themes. These themes are 'default' and
'material' . Horizon uses a list of three-element tuples to define
themes. The tuple format is
('{{ theme_name }}', '{{ theme_label }}', '{{ theme_path }}') .
Configure theme_name to define the directory
that customized themes are collected into. The theme_label
is a user-facing label shown in the theme picker. Horizon uses
theme_path as the static root of the theme. If you
want to include content other than static files in a theme
directory, but do not wish the content served up at
/{{ THEME_COLLECTION_DIR }}/{{ theme_name }} , create a subdirectory
named static . If your theme folder contains a subdirectory named
static , then horizon uses static/custom/static as the root
for content served at /static/custom . The static root of the theme
folder must always contain a _variables.scss file and
a _styles.scss file. These two files must contain or import
all the styles, bootstrap, and horizon-specific variables used in
the GUI. |
CONSOLE_TYPE = AUTO |
The type of in-browser console used to access the virtual machines.
Valid values are AUTO , VNC , SPICE , RDP , SERIAL ,
and None . None deactivates the in-browser console
and is available in Juno. SERIAL is available since Kilo. |
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024 |
The size of the chunk, in bytes, for downloading objects from the Object Storage service. |
INSTANCE_LOG_LENGTH = 35 |
The number of lines displayed for the log of an instance. Valid value must be a positive integer. |
CREATE_INSTANCE_FLAVOR_SORT = {'key':'ram'} |
When launching a new instance, the flavor list is sorted by
RAM usage in ascending order by default. You can customize the sort
order by id , name , ram , disk and vcpus . You can also
insert any custom callback function and provide a flag for
reverse sort. |
DEFAULT_THEME = default |
This setting configures which theme horizon uses if a theme
has not yet been selected in the theme picker. This also sets
the cookie value. This value represents the theme_name key used
when there are multiple themes available. Configure this setting
inside AVAILABLE_THEMES to make use of this theme. |
DROPDOWN_MAX_ITEMS = 30 |
The maximum number of items displayed in a dropdown. |
ENFORCE_PASSWORD_CHECK = False |
Displays an Admin Password field on the ‘Change Password’ form
to verify that it is indeed the admin logged-in who wants to change
the password. |
IMAGES_LIST_FILTER_TENANTS = None |
A list of dictionaries to add optional categories to the image fixed filters in the Images panel, based on project ownership. |
IMAGE_RESERVED_CUSTOM_PROPERTIES = [] |
A list of image custom property keys that should not be displayed in the Update Metadata tree. |
LAUNCH_INSTANCE_DEFAULTS = {"config_drive": False} |
A dictionary of settings which can be used to provide the default values for properties found in the Launch Instance modal. |
MESSAGES_PATH = None |
The absolute path to the directory where message files are collected. |
OPENSTACK_API_VERSIONS = {"data-processing": 1.1,
"identity": 2.0, "volume": 2, "compute": 2} |
Use this setting to force the dashboard to use a specific API version for a given service API. |
OPENSTACK_ENABLE_PASSWORD_RETRIEVE = False |
Enables or disables the instance action ‘Retrieve password’ allowing password retrieval from metadata service. |
OPENSTACK_ENDPOINT_TYPE = "publicURL" |
A string specifying the endpoint type to use for the endpoints in the Identity service catalog. |
OPENSTACK_HOST = "127.0.0.1" |
The hostname of the Identity service server used for authentication if you only have one region. This is often the only setting that needs to be set for a basic deployment. |
OPENSTACK_HYPERVISOR_FEATURES = {'can_set_mount_point': False,
'can_set_password': False, 'requires_keypair': False,} |
A dictionary of settings identifying the capabilities of the hypervisor of Compute service. |
OPENSTACK_IMAGE_BACKEND = {'image_formats': [
('', _('Select format')),
('aki', _('AKI - Amazon Kernel Image')),
('ami', _('AMI - Amazon Machine Image')),
('ari', _('ARI - Amazon Ramdisk Image')),
('docker', _('Docker')),
('iso', _('ISO - Optical Disk Image')),
('qcow2', _('QCOW2 - QEMU Emulator')),
('raw', _('Raw')),
('vdi', _('VDI')),
('vhd', _('VHD')),
('vmdk', _('VMDK'))]} |
Customizes features related to the Image service, such as the list of supported image formats. |
IMAGE_CUSTOM_PROPERTY_TITLES = {
"architecture": _("Architecture"),
"kernel_id": _("Kernel ID"),
"ramdisk_id": _("Ramdisk ID"),
"image_state": _("Euca2ools state"),
"project_id": _("Project ID"),
"image_type": _("Image Type")} |
Customizes the titles for image custom property attributes that appear on image detail pages. |
HORIZON_IMAGES_ALLOW_UPLOAD = True |
Enables/Disables local uploads to prevent filling up the disk on the dashboard server. |
OPENSTACK_KEYSTONE_BACKEND = {'name': 'native',
'can_edit_user': True, 'can_edit_project': True} |
A dictionary of settings identifying the capabilities of the auth backend for the Identity service. |
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" |
Overrides the default domain used when running on a single-domain model with version 3 of the Identity service. All entities will be created in the default domain. |
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_" |
The role to be assigned to a user when they are added to a project.
The value must correspond to an existing role name in the
Identity service. In general, the value should match the
member_role_name defined in keystone.conf . |
OPENSTACK_KEYSTONE_ADMIN_ROLES = ["admin"] |
The list of roles that have administrator privileges in the OpenStack installation. This check is very basic and essentially only works with versions 2 and 3 of the Identity service with the default policy file. |
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False |
When enabled, a user will be required to enter the Domain name in addition to username for login. Enabled if running on a multi-domain model. |
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST |
The full URL for the Identity service endpoint used for authentication. |
OPENSTACK_KEYSTONE_FEDERATION_MANAGEMENT = False |
Enables/Disables panels that provide the ability for users to manage Identity Providers (IdPs) and establish a set of rules to map federation protocol attributes to Identity API attributes. Requires Identity API version 3 or later. |
WEBSSO_ENABLED = False |
Enables/Disables Identity service web single-sign-on. Requires Identity service version 3 and Django OpenStack Auth version 1.2.0 or later. |
WEBSSO_INITIAL_CHOICE = "credentials" |
Determines the default authentication mechanism. When a user lands on the login page, this is the first choice they will see. |
WEBSSO_CHOICES = (
("credentials", _("Keystone Credentials")),
("oidc", _("OpenID Connect")),
("saml2", _("Security Assertion Markup Language"))) |
List of authentication mechanisms available to the user. |
WEBSSO_IDP_MAPPING = {} |
A dictionary of specific identity provider and federation protocol combinations. |
OPENSTACK_CINDER_FEATURES = {'enable_backup': False} |
A dictionary of settings which can be used to enable optional services provided by the Block storage service. Currently, only the backup service is available. |
OPENSTACK_HEAT_STACK = {'enable_user_pass': True} |
A dictionary of settings to use with heat stacks. Currently,
the only setting available is enable_user_pass , which can be
used to disable the password field while launching the stack. |
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': True,
'enable_quotas': False,
'enable_firewall': True,
'enable_vpn': True,
'profile_support': None,
'supported_provider_types': ["*"],
'supported_vnic_types': ["*"],
'segmentation_id_range': {},
'enable_fip_topology_check': True,
'default_ipv4_subnet_pool_label': None,
'default_ipv6_subnet_pool_label': None,} |
A dictionary of settings which can be used to enable optional services provided by the Networking service and configure specific features. |
OPENSTACK_SSL_CACERT = None |
The CA certificate to be used for SSL verification. When set to
None , the default certificate on the system is used. |
OPENSTACK_SSL_NO_VERIFY = False |
Enable/Disable SSL certificate checks in the OpenStack clients. Useful for self-signed certificates. |
OPENSTACK_TOKEN_HASH_ALGORITHM = "md5" |
The hash algorithm to use for authentication tokens. |
OPENSTACK_TOKEN_HASH_ENABLED = True |
Hashing tokens from the Identity service keeps the Dashboard session
data smaller, but it does not work in some cases when using PKI tokens.
Uncomment this value and set it to False if using PKI tokens and
there are 401 errors due to token hashing. |
POLICY_FILES = {'identity': 'keystone_policy.json',
'compute': 'nova_policy.json'} |
The mapping of the contents of POLICY_FILES_PATH to service
types. When policy.json files are added to POLICY_FILES_PATH ,
they should be included here too. |
POLICY_FILES_PATH = os.path.join(ROOT_PATH, "conf") |
Where service based policy files are located. |
SESSION_TIMEOUT = 3600 |
A method to supersede the token timeout with a shorter dashboard session timeout in seconds. For example, if your token expires in 60 minutes, a value of 1800 will log users out after 30 minutes. |
SAHARA_AUTO_IP_ALLOCATION_ENABLED = False |
Notifies the Data processing system whether or not automatic IP
allocation is enabled. Set to True if you are running Compute
Networking with auto_assign_floating_ip = True . |
TROVE_ADD_USER_PERMS and TROVE_ADD_DATABASE_PERMS = [] |
Database service user and database extension support. |
WEBROOT = / |
The location where the access to the dashboard is configured in the web server. |
STATIC_ROOT = /static/ |
URL pointing to files in STATIC_ROOT . The value must end in "/" . |
THEME_COLLECTION_DIR = themes |
Horizon collects the available themes into a static directory
based on this variable setting. For example, the default theme
is accessible from /{{ STATIC_URL }}/themes/default . |
THEME_COOKIE_NAME = themes |
This setting determines which cookie key horizon sets to store the current theme. Cookie keys expire after one year elapses. |
DISALLOW_IFRAME_EMBED = True |
This setting can be used to defend against Clickjacking and prevent the Dashboard from being embedded within an iframe. |
OPENSTACK_NOVA_EXTENSIONS_BLACKLIST = [] |
Ignore all listed Compute service extensions, and behave as if they were unsupported. Can be used to selectively disable certain costly extensions for performance reasons. |
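The most common of the service-related settings above can be sketched as a local_settings.py fragment; the controller hostname is a placeholder, and the API versions are examples rather than requirements:

```python
# Illustrative service-related settings for local_settings.py.
OPENSTACK_HOST = "controller.example.org"  # hypothetical Identity service host
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"

# Force specific API versions instead of relying on catalog defaults.
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "volume": 2,
    "compute": 2,
}

# Log users out after 30 minutes even if their token lives longer.
SESSION_TIMEOUT = 1800
```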
The following keys can be used in any pluggable settings file.
Configuration option | Description |
---|---|
ADD_EXCEPTIONS |
A dictionary of exception classes to be added to
HORIZON['exceptions'] . |
ADD_INSTALLED_APPS |
A list of applications to be prepended to INSTALLED_APPS . This
is needed to expose static files from a plugin. |
ADD_ANGULAR_MODULES |
A list of AngularJS modules to be loaded when Angular bootstraps. |
ADD_JS_FILES |
A list of javascript source files to be included in the compressed set of files that are loaded on every page. |
ADD_JS_SPEC_FILES |
A list of javascript spec files to include for integration with the Jasmine spec runner. |
ADD_SCSS_FILES |
A list of SCSS files to be included in the compressed set of files that are loaded on every page. |
AUTO_DISCOVER_STATIC_FILES |
If set to True , JavaScript files and static angular HTML
template files will be automatically discovered from the static
folder in each app listed in ADD_INSTALLED_APPS . |
DISABLED |
If set to True , this settings file will not be added to the
settings. |
UPDATE_HORIZON_CONFIG |
A dictionary of values that will replace the values in
HORIZON_CONFIG . |
The following keys are specific to register a dashboard.
Configuration option | Description |
---|---|
DASHBOARD |
Required. The slug of the dashboard to be added to
HORIZON['dashboards'] . |
DEFAULT |
If set to True , this dashboard will be set as the default
dashboard. |
The following keys are specific to register or remove a panel.
Configuration option | Description |
---|---|
PANEL |
Required. The slug of the panel to be added to HORIZON_CONFIG . |
PANEL_DASHBOARD |
Required. The slug of the dashboard the PANEL is associated with. |
PANEL_GROUP |
The slug of the panel group the PANEL is associated with. If
you want the panel to show up without a panel group, use the panel
group default . |
DEFAULT_PANEL |
If set, it will update the default panel of the PANEL_DASHBOARD . |
ADD_PANEL |
Python panel class of the PANEL to be added. |
REMOVE_PANEL |
If set to True , the PANEL will be removed from
PANEL_DASHBOARD /PANEL_GROUP . |
The following keys are specific to register a panel group.
Configuration option | Description |
---|---|
PANEL_GROUP |
Required. The slug of the panel group to be added to
HORIZON_CONFIG . |
PANEL_GROUP_NAME |
Required. The display name of the PANEL_GROUP . |
PANEL_GROUP_DASHBOARD |
Required. The slug of the dashboard the PANEL_GROUP is associated
with. |
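As a sketch, a pluggable settings file that registers a hypothetical panel (for example, a file named _50_mypanel.py placed in openstack_dashboard/local/enabled) combines these keys as plain module-level variables. The names mypanel and myplugin are placeholders:

```python
# Hypothetical pluggable settings file:
# openstack_dashboard/local/enabled/_50_mypanel.py

# The slug of the panel to be added to HORIZON_CONFIG.
PANEL = 'mypanel'
# The slug of the dashboard the panel is associated with.
PANEL_DASHBOARD = 'project'
# Show the panel without a panel group.
PANEL_GROUP = 'default'

# Python panel class of the PANEL to be added (dotted path into the plugin).
ADD_PANEL = 'myplugin.content.mypanel.panel.MyPanel'

# Expose the plugin's static files by prepending it to INSTALLED_APPS.
ADD_INSTALLED_APPS = ['myplugin']
AUTO_DISCOVER_STATIC_FILES = True
```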
Find the following files in /etc/openstack-dashboard
.
The keystone_policy.json
file defines additional access controls for
the dashboard that apply to the Identity service.
Note
The keystone_policy.json
file must match the Identity service
/etc/keystone/policy.json
policy file.
{
"admin_required": [
[
"role:admin"
],
[
"is_admin:1"
]
],
"service_role": [
[
"role:service"
]
],
"service_or_admin": [
[
"rule:admin_required"
],
[
"rule:service_role"
]
],
"owner": [
[
"user_id:%(user_id)s"
]
],
"admin_or_owner": [
[
"rule:admin_required"
],
[
"rule:owner"
]
],
"default": [
[
"rule:admin_required"
]
],
"identity:get_service": [
[
"rule:admin_required"
]
],
"identity:list_services": [
[
"rule:admin_required"
]
],
"identity:create_service": [
[
"rule:admin_required"
]
],
"identity:update_service": [
[
"rule:admin_required"
]
],
"identity:delete_service": [
[
"rule:admin_required"
]
],
"identity:get_endpoint": [
[
"rule:admin_required"
]
],
"identity:list_endpoints": [
[
"rule:admin_required"
]
],
"identity:create_endpoint": [
[
"rule:admin_required"
]
],
"identity:update_endpoint": [
[
"rule:admin_required"
]
],
"identity:delete_endpoint": [
[
"rule:admin_required"
]
],
"identity:get_domain": [
[
"rule:admin_required"
]
],
"identity:list_domains": [
[
"rule:admin_required"
]
],
"identity:create_domain": [
[
"rule:admin_required"
]
],
"identity:update_domain": [
[
"rule:admin_required"
]
],
"identity:delete_domain": [
[
"rule:admin_required"
]
],
"identity:get_project": [
[
"rule:admin_required"
]
],
"identity:list_projects": [
[
"rule:admin_required"
]
],
"identity:list_user_projects": [
[
"rule:admin_or_owner"
]
],
"identity:create_project": [
[
"rule:admin_required"
]
],
"identity:update_project": [
[
"rule:admin_required"
]
],
"identity:delete_project": [
[
"rule:admin_required"
]
],
"identity:get_user": [
[
"rule:admin_required"
]
],
"identity:list_users": [
[
"rule:admin_required"
]
],
"identity:create_user": [
[
"rule:admin_required"
]
],
"identity:update_user": [
[
"rule:admin_or_owner"
]
],
"identity:delete_user": [
[
"rule:admin_required"
]
],
"identity:get_group": [
[
"rule:admin_required"
]
],
"identity:list_groups": [
[
"rule:admin_required"
]
],
"identity:list_groups_for_user": [
[
"rule:admin_or_owner"
]
],
"identity:create_group": [
[
"rule:admin_required"
]
],
"identity:update_group": [
[
"rule:admin_required"
]
],
"identity:delete_group": [
[
"rule:admin_required"
]
],
"identity:list_users_in_group": [
[
"rule:admin_required"
]
],
"identity:remove_user_from_group": [
[
"rule:admin_required"
]
],
"identity:check_user_in_group": [
[
"rule:admin_required"
]
],
"identity:add_user_to_group": [
[
"rule:admin_required"
]
],
"identity:get_credential": [
[
"rule:admin_required"
]
],
"identity:list_credentials": [
[
"rule:admin_required"
]
],
"identity:create_credential": [
[
"rule:admin_required"
]
],
"identity:update_credential": [
[
"rule:admin_required"
]
],
"identity:delete_credential": [
[
"rule:admin_required"
]
],
"identity:get_role": [
[
"rule:admin_required"
]
],
"identity:list_roles": [
[
"rule:admin_required"
]
],
"identity:create_role": [
[
"rule:admin_required"
]
],
"identity:update_role": [
[
"rule:admin_required"
]
],
"identity:delete_role": [
[
"rule:admin_required"
]
],
"identity:check_grant": [
[
"rule:admin_required"
]
],
"identity:list_grants": [
[
"rule:admin_required"
]
],
"identity:create_grant": [
[
"rule:admin_required"
]
],
"identity:revoke_grant": [
[
"rule:admin_required"
]
],
"identity:list_role_assignments": [
[
"rule:admin_required"
]
],
"identity:get_policy": [
[
"rule:admin_required"
]
],
"identity:list_policies": [
[
"rule:admin_required"
]
],
"identity:create_policy": [
[
"rule:admin_required"
]
],
"identity:update_policy": [
[
"rule:admin_required"
]
],
"identity:delete_policy": [
[
"rule:admin_required"
]
],
"identity:check_token": [
[
"rule:admin_required"
]
],
"identity:validate_token": [
[
"rule:service_or_admin"
]
],
"identity:validate_token_head": [
[
"rule:service_or_admin"
]
],
"identity:revocation_list": [
[
"rule:service_or_admin"
]
],
"identity:revoke_token": [
[
"rule:admin_or_owner"
]
],
"identity:create_trust": [
[
"user_id:%(trust.trustor_user_id)s"
]
],
"identity:get_trust": [
[
"rule:admin_or_owner"
]
],
"identity:list_trusts": [
[
"@"
]
],
"identity:list_roles_for_trust": [
[
"@"
]
],
"identity:check_role_for_trust": [
[
"@"
]
],
"identity:get_role_for_trust": [
[
"@"
]
],
"identity:delete_trust": [
[
"@"
]
]
}
The nova_policy.json
file defines additional access controls for
the dashboard that apply to the Compute service.
Note
The nova_policy.json
file must match the Compute
/etc/nova/policy.json
policy file.
{
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"cells_scheduler_filter:TargetCellFilter": "is_admin:True",
"compute:create": "",
"compute:create:attach_network": "",
"compute:create:attach_volume": "",
"compute:create:forced_host": "is_admin:True",
"compute:get": "",
"compute:get_all": "",
"compute:get_all_tenants": "is_admin:True",
"compute:update": "",
"compute:get_instance_metadata": "",
"compute:get_all_instance_metadata": "",
"compute:get_all_instance_system_metadata": "",
"compute:update_instance_metadata": "",
"compute:delete_instance_metadata": "",
"compute:get_instance_faults": "",
"compute:get_diagnostics": "",
"compute:get_instance_diagnostics": "",
"compute:start": "rule:admin_or_owner",
"compute:stop": "rule:admin_or_owner",
"compute:get_lock": "",
"compute:lock": "rule:admin_or_owner",
"compute:unlock": "rule:admin_or_owner",
"compute:unlock_override": "rule:admin_api",
"compute:get_vnc_console": "",
"compute:get_spice_console": "",
"compute:get_rdp_console": "",
"compute:get_serial_console": "",
"compute:get_mks_console": "",
"compute:get_console_output": "",
"compute:reset_network": "",
"compute:inject_network_info": "",
"compute:add_fixed_ip": "",
"compute:remove_fixed_ip": "",
"compute:attach_volume": "",
"compute:detach_volume": "",
"compute:swap_volume": "",
"compute:attach_interface": "",
"compute:detach_interface": "",
"compute:set_admin_password": "",
"compute:rescue": "",
"compute:unrescue": "",
"compute:suspend": "",
"compute:resume": "",
"compute:pause": "",
"compute:unpause": "",
"compute:shelve": "",
"compute:shelve_offload": "",
"compute:unshelve": "",
"compute:snapshot": "",
"compute:snapshot_volume_backed": "",
"compute:backup": "",
"compute:resize": "",
"compute:confirm_resize": "",
"compute:revert_resize": "",
"compute:rebuild": "",
"compute:reboot": "",
"compute:delete": "rule:admin_or_owner",
"compute:soft_delete": "rule:admin_or_owner",
"compute:force_delete": "rule:admin_or_owner",
"compute:security_groups:add_to_instance": "",
"compute:security_groups:remove_from_instance": "",
"compute:delete": "",
"compute:soft_delete": "",
"compute:force_delete": "",
"compute:restore": "",
"compute:volume_snapshot_create": "",
"compute:volume_snapshot_delete": "",
"admin_api": "is_admin:True",
"compute_extension:accounts": "rule:admin_api",
"compute_extension:admin_actions": "rule:admin_api",
"compute_extension:admin_actions:pause": "rule:admin_or_owner",
"compute_extension:admin_actions:unpause": "rule:admin_or_owner",
"compute_extension:admin_actions:suspend": "rule:admin_or_owner",
"compute_extension:admin_actions:resume": "rule:admin_or_owner",
"compute_extension:admin_actions:lock": "rule:admin_or_owner",
"compute_extension:admin_actions:unlock": "rule:admin_or_owner",
"compute_extension:admin_actions:resetNetwork": "rule:admin_api",
"compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
"compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
"compute_extension:admin_actions:migrateLive": "rule:admin_api",
"compute_extension:admin_actions:resetState": "rule:admin_api",
"compute_extension:admin_actions:migrate": "rule:admin_api",
"compute_extension:aggregates": "rule:admin_api",
"compute_extension:agents": "rule:admin_api",
"compute_extension:attach_interfaces": "",
"compute_extension:baremetal_nodes": "rule:admin_api",
"compute_extension:cells": "rule:admin_api",
"compute_extension:cells:create": "rule:admin_api",
"compute_extension:cells:delete": "rule:admin_api",
"compute_extension:cells:update": "rule:admin_api",
"compute_extension:cells:sync_instances": "rule:admin_api",
"compute_extension:certificates": "",
"compute_extension:cloudpipe": "rule:admin_api",
"compute_extension:cloudpipe_update": "rule:admin_api",
"compute_extension:config_drive": "",
"compute_extension:console_output": "",
"compute_extension:consoles": "",
"compute_extension:createserverext": "",
"compute_extension:deferred_delete": "",
"compute_extension:disk_config": "",
"compute_extension:evacuate": "rule:admin_api",
"compute_extension:extended_server_attributes": "rule:admin_api",
"compute_extension:extended_status": "",
"compute_extension:extended_availability_zone": "",
"compute_extension:extended_ips": "",
"compute_extension:extended_ips_mac": "",
"compute_extension:extended_vif_net": "",
"compute_extension:extended_volumes": "",
"compute_extension:fixed_ips": "rule:admin_api",
"compute_extension:flavor_access": "",
"compute_extension:flavor_access:addTenantAccess": "rule:admin_api",
"compute_extension:flavor_access:removeTenantAccess": "rule:admin_api",
"compute_extension:flavor_disabled": "",
"compute_extension:flavor_rxtx": "",
"compute_extension:flavor_swap": "",
"compute_extension:flavorextradata": "",
"compute_extension:flavorextraspecs:index": "",
"compute_extension:flavorextraspecs:show": "",
"compute_extension:flavorextraspecs:create": "rule:admin_api",
"compute_extension:flavorextraspecs:update": "rule:admin_api",
"compute_extension:flavorextraspecs:delete": "rule:admin_api",
"compute_extension:flavormanage": "rule:admin_api",
"compute_extension:floating_ip_dns": "",
"compute_extension:floating_ip_pools": "",
"compute_extension:floating_ips": "",
"compute_extension:floating_ips_bulk": "rule:admin_api",
"compute_extension:fping": "",
"compute_extension:fping:all_tenants": "rule:admin_api",
"compute_extension:hide_server_addresses": "is_admin:False",
"compute_extension:hosts": "rule:admin_api",
"compute_extension:hypervisors": "rule:admin_api",
"compute_extension:image_size": "",
"compute_extension:instance_actions": "",
"compute_extension:instance_actions:events": "rule:admin_api",
"compute_extension:instance_usage_audit_log": "rule:admin_api",
"compute_extension:keypairs": "",
"compute_extension:keypairs:index": "",
"compute_extension:keypairs:show": "",
"compute_extension:keypairs:create": "",
"compute_extension:keypairs:delete": "",
"compute_extension:multinic": "",
"compute_extension:networks": "rule:admin_api",
"compute_extension:networks:view": "",
"compute_extension:networks_associate": "rule:admin_api",
"compute_extension:os-tenant-networks": "",
"compute_extension:quotas:show": "",
"compute_extension:quotas:update": "rule:admin_api",
"compute_extension:quotas:delete": "rule:admin_api",
"compute_extension:quota_classes": "",
"compute_extension:rescue": "",
"compute_extension:security_group_default_rules": "rule:admin_api",
"compute_extension:security_groups": "",
"compute_extension:server_diagnostics": "rule:admin_api",
"compute_extension:server_groups": "",
"compute_extension:server_password": "",
"compute_extension:server_usage": "",
"compute_extension:services": "rule:admin_api",
"compute_extension:shelve": "",
"compute_extension:shelveOffload": "rule:admin_api",
"compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
"compute_extension:simple_tenant_usage:list": "rule:admin_api",
"compute_extension:unshelve": "",
"compute_extension:users": "rule:admin_api",
"compute_extension:virtual_interfaces": "",
"compute_extension:virtual_storage_arrays": "",
"compute_extension:volumes": "",
"compute_extension:volume_attachments:index": "",
"compute_extension:volume_attachments:show": "",
"compute_extension:volume_attachments:create": "",
"compute_extension:volume_attachments:update": "",
"compute_extension:volume_attachments:delete": "",
"compute_extension:volumetypes": "",
"compute_extension:availability_zone:list": "",
"compute_extension:availability_zone:detail": "rule:admin_api",
"compute_extension:used_limits_for_admin": "rule:admin_api",
"compute_extension:migrations:index": "rule:admin_api",
"compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",
"compute_extension:os-assisted-volume-snapshots:delete": "rule:admin_api",
"compute_extension:console_auth_tokens": "rule:admin_api",
"compute_extension:os-server-external-events:create": "rule:admin_api",
"network:get_all": "",
"network:get": "",
"network:create": "",
"network:delete": "",
"network:associate": "",
"network:disassociate": "",
"network:get_vifs_by_instance": "",
"network:allocate_for_instance": "",
"network:deallocate_for_instance": "",
"network:validate_networks": "",
"network:get_instance_uuids_by_ip_filter": "",
"network:get_instance_id_by_floating_address": "",
"network:setup_networks_on_host": "",
"network:get_backdoor_port": "",
"network:get_floating_ip": "",
"network:get_floating_ip_pools": "",
"network:get_floating_ip_by_address": "",
"network:get_floating_ips_by_project": "",
"network:get_floating_ips_by_fixed_address": "",
"network:allocate_floating_ip": "",
"network:associate_floating_ip": "",
"network:disassociate_floating_ip": "",
"network:release_floating_ip": "",
"network:migrate_instance_start": "",
"network:migrate_instance_finish": "",
"network:get_fixed_ip": "",
"network:get_fixed_ip_by_address": "",
"network:add_fixed_ip_to_instance": "",
"network:remove_fixed_ip_from_instance": "",
"network:add_network_to_project": "",
"network:get_instance_nw_info": "",
"network:get_dns_domains": "",
"network:add_dns_entry": "",
"network:modify_dns_entry": "",
"network:delete_dns_entry": "",
"network:get_dns_entries_by_address": "",
"network:get_dns_entries_by_name": "",
"network:create_private_dns_domain": "",
"network:create_public_dns_domain": "",
"network:delete_dns_domain": "",
"network:attach_external_network": "rule:admin_api",
"network:get_vif_by_mac_address": "",
"os_compute_api:servers:detail:get_all_tenants": "is_admin:True",
"os_compute_api:servers:index:get_all_tenants": "is_admin:True",
"os_compute_api:servers:confirm_resize": "",
"os_compute_api:servers:create": "",
"os_compute_api:servers:create:attach_network": "",
"os_compute_api:servers:create:attach_volume": "",
"os_compute_api:servers:create:forced_host": "rule:admin_api",
"os_compute_api:servers:delete": "",
"os_compute_api:servers:update": "",
"os_compute_api:servers:detail": "",
"os_compute_api:servers:index": "",
"os_compute_api:servers:reboot": "",
"os_compute_api:servers:rebuild": "",
"os_compute_api:servers:resize": "",
"os_compute_api:servers:revert_resize": "",
"os_compute_api:servers:show": "",
"os_compute_api:servers:create_image": "",
"os_compute_api:servers:create_image:allow_volume_backed": "",
"os_compute_api:servers:start": "rule:admin_or_owner",
"os_compute_api:servers:stop": "rule:admin_or_owner",
"os_compute_api:os-access-ips:discoverable": "",
"os_compute_api:os-access-ips": "",
"os_compute_api:os-admin-actions": "rule:admin_api",
"os_compute_api:os-admin-actions:discoverable": "",
"os_compute_api:os-admin-actions:reset_network": "rule:admin_api",
"os_compute_api:os-admin-actions:inject_network_info": "rule:admin_api",
"os_compute_api:os-admin-actions:reset_state": "rule:admin_api",
"os_compute_api:os-admin-password": "",
"os_compute_api:os-admin-password:discoverable": "",
"os_compute_api:os-aggregates:discoverable": "",
"os_compute_api:os-aggregates:index": "rule:admin_api",
"os_compute_api:os-aggregates:create": "rule:admin_api",
"os_compute_api:os-aggregates:show": "rule:admin_api",
"os_compute_api:os-aggregates:update": "rule:admin_api",
"os_compute_api:os-aggregates:delete": "rule:admin_api",
"os_compute_api:os-aggregates:add_host": "rule:admin_api",
"os_compute_api:os-aggregates:remove_host": "rule:admin_api",
"os_compute_api:os-aggregates:set_metadata": "rule:admin_api",
"os_compute_api:os-agents": "rule:admin_api",
"os_compute_api:os-agents:discoverable": "",
"os_compute_api:os-attach-interfaces": "",
"os_compute_api:os-attach-interfaces:discoverable": "",
"os_compute_api:os-baremetal-nodes": "rule:admin_api",
"os_compute_api:os-baremetal-nodes:discoverable": "",
"os_compute_api:os-block-device-mapping-v1:discoverable": "",
"os_compute_api:os-cells": "rule:admin_api",
"os_compute_api:os-cells:create": "rule:admin_api",
"os_compute_api:os-cells:delete": "rule:admin_api",
"os_compute_api:os-cells:update": "rule:admin_api",
"os_compute_api:os-cells:sync_instances": "rule:admin_api",
"os_compute_api:os-cells:discoverable": "",
"os_compute_api:os-certificates:create": "",
"os_compute_api:os-certificates:show": "",
"os_compute_api:os-certificates:discoverable": "",
"os_compute_api:os-cloudpipe": "rule:admin_api",
"os_compute_api:os-cloudpipe:discoverable": "",
"os_compute_api:os-config-drive": "",
"os_compute_api:os-consoles:discoverable": "",
"os_compute_api:os-consoles:create": "",
"os_compute_api:os-consoles:delete": "",
"os_compute_api:os-consoles:index": "",
"os_compute_api:os-consoles:show": "",
"os_compute_api:os-console-output:discoverable": "",
"os_compute_api:os-console-output": "",
"os_compute_api:os-remote-consoles": "",
"os_compute_api:os-remote-consoles:discoverable": "",
"os_compute_api:os-create-backup:discoverable": "",
"os_compute_api:os-create-backup": "rule:admin_or_owner",
"os_compute_api:os-deferred-delete": "",
"os_compute_api:os-deferred-delete:discoverable": "",
"os_compute_api:os-disk-config": "",
"os_compute_api:os-disk-config:discoverable": "",
"os_compute_api:os-evacuate": "rule:admin_api",
"os_compute_api:os-evacuate:discoverable": "",
"os_compute_api:os-extended-server-attributes": "rule:admin_api",
"os_compute_api:os-extended-server-attributes:discoverable": "",
"os_compute_api:os-extended-status": "",
"os_compute_api:os-extended-status:discoverable": "",
"os_compute_api:os-extended-availability-zone": "",
"os_compute_api:os-extended-availability-zone:discoverable": "",
"os_compute_api:extensions": "",
"os_compute_api:extension_info:discoverable": "",
"os_compute_api:os-extended-volumes": "",
"os_compute_api:os-extended-volumes:discoverable": "",
"os_compute_api:os-fixed-ips": "rule:admin_api",
"os_compute_api:os-fixed-ips:discoverable": "",
"os_compute_api:os-flavor-access": "",
"os_compute_api:os-flavor-access:discoverable": "",
"os_compute_api:os-flavor-access:remove_tenant_access": "rule:admin_api",
"os_compute_api:os-flavor-access:add_tenant_access": "rule:admin_api",
"os_compute_api:os-flavor-rxtx": "",
"os_compute_api:os-flavor-rxtx:discoverable": "",
"os_compute_api:flavors:discoverable": "",
"os_compute_api:os-flavor-extra-specs:discoverable": "",
"os_compute_api:os-flavor-extra-specs:index": "",
"os_compute_api:os-flavor-extra-specs:show": "",
"os_compute_api:os-flavor-extra-specs:create": "rule:admin_api",
"os_compute_api:os-flavor-extra-specs:update": "rule:admin_api",
"os_compute_api:os-flavor-extra-specs:delete": "rule:admin_api",
"os_compute_api:os-flavor-manage:discoverable": "",
"os_compute_api:os-flavor-manage": "rule:admin_api",
"os_compute_api:os-floating-ip-dns": "",
"os_compute_api:os-floating-ip-dns:discoverable": "",
"os_compute_api:os-floating-ip-dns:domain:update": "rule:admin_api",
"os_compute_api:os-floating-ip-dns:domain:delete": "rule:admin_api",
"os_compute_api:os-floating-ip-pools": "",
"os_compute_api:os-floating-ip-pools:discoverable": "",
"os_compute_api:os-floating-ips": "",
"os_compute_api:os-floating-ips:discoverable": "",
"os_compute_api:os-floating-ips-bulk": "rule:admin_api",
"os_compute_api:os-floating-ips-bulk:discoverable": "",
"os_compute_api:os-fping": "",
"os_compute_api:os-fping:discoverable": "",
"os_compute_api:os-fping:all_tenants": "rule:admin_api",
"os_compute_api:os-hide-server-addresses": "is_admin:False",
"os_compute_api:os-hide-server-addresses:discoverable": "",
"os_compute_api:os-hosts": "rule:admin_api",
"os_compute_api:os-hosts:discoverable": "",
"os_compute_api:os-hypervisors": "rule:admin_api",
"os_compute_api:os-hypervisors:discoverable": "",
"os_compute_api:images:discoverable": "",
"os_compute_api:image-size": "",
"os_compute_api:image-size:discoverable": "",
"os_compute_api:os-instance-actions": "",
"os_compute_api:os-instance-actions:discoverable": "",
"os_compute_api:os-instance-actions:events": "rule:admin_api",
"os_compute_api:os-instance-usage-audit-log": "rule:admin_api",
"os_compute_api:os-instance-usage-audit-log:discoverable": "",
"os_compute_api:ips:discoverable": "",
"os_compute_api:ips:index": "rule:admin_or_owner",
"os_compute_api:ips:show": "rule:admin_or_owner",
"os_compute_api:os-keypairs:discoverable": "",
"os_compute_api:os-keypairs": "",
"os_compute_api:os-keypairs:index": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:os-keypairs:show": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:os-keypairs:create": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:os-keypairs:delete": "rule:admin_api or user_id:%(user_id)s",
"os_compute_api:limits:discoverable": "",
"os_compute_api:limits": "",
"os_compute_api:os-lock-server:discoverable": "",
"os_compute_api:os-lock-server:lock": "rule:admin_or_owner",
"os_compute_api:os-lock-server:unlock": "rule:admin_or_owner",
"os_compute_api:os-lock-server:unlock:unlock_override": "rule:admin_api",
"os_compute_api:os-migrate-server:discoverable": "",
"os_compute_api:os-migrate-server:migrate": "rule:admin_api",
"os_compute_api:os-migrate-server:migrate_live": "rule:admin_api",
"os_compute_api:os-multinic": "",
"os_compute_api:os-multinic:discoverable": "",
"os_compute_api:os-networks": "rule:admin_api",
"os_compute_api:os-networks:view": "",
"os_compute_api:os-networks:discoverable": "",
"os_compute_api:os-networks-associate": "rule:admin_api",
"os_compute_api:os-networks-associate:discoverable": "",
"os_compute_api:os-pause-server:discoverable": "",
"os_compute_api:os-pause-server:pause": "rule:admin_or_owner",
"os_compute_api:os-pause-server:unpause": "rule:admin_or_owner",
"os_compute_api:os-pci:pci_servers": "",
"os_compute_api:os-pci:discoverable": "",
"os_compute_api:os-pci:index": "rule:admin_api",
"os_compute_api:os-pci:detail": "rule:admin_api",
"os_compute_api:os-pci:show": "rule:admin_api",
"os_compute_api:os-personality:discoverable": "",
"os_compute_api:os-preserve-ephemeral-rebuild:discoverable": "",
"os_compute_api:os-quota-sets:discoverable": "",
"os_compute_api:os-quota-sets:show": "rule:admin_or_owner",
"os_compute_api:os-quota-sets:defaults": "",
"os_compute_api:os-quota-sets:update": "rule:admin_api",
"os_compute_api:os-quota-sets:delete": "rule:admin_api",
"os_compute_api:os-quota-sets:detail": "rule:admin_api",
"os_compute_api:os-quota-class-sets:update": "rule:admin_api",
"os_compute_api:os-quota-class-sets:show": "is_admin:True or quota_class:%(quota_class)s",
"os_compute_api:os-quota-class-sets:discoverable": "",
"os_compute_api:os-rescue": "",
"os_compute_api:os-rescue:discoverable": "",
"os_compute_api:os-scheduler-hints:discoverable": "",
"os_compute_api:os-security-group-default-rules:discoverable": "",
"os_compute_api:os-security-group-default-rules": "rule:admin_api",
"os_compute_api:os-security-groups": "",
"os_compute_api:os-security-groups:discoverable": "",
"os_compute_api:os-server-diagnostics": "rule:admin_api",
"os_compute_api:os-server-diagnostics:discoverable": "",
"os_compute_api:os-server-password": "",
"os_compute_api:os-server-password:discoverable": "",
"os_compute_api:os-server-usage": "",
"os_compute_api:os-server-usage:discoverable": "",
"os_compute_api:os-server-groups": "",
"os_compute_api:os-server-groups:discoverable": "",
"os_compute_api:os-services": "rule:admin_api",
"os_compute_api:os-services:discoverable": "",
"os_compute_api:server-metadata:discoverable": "",
"os_compute_api:server-metadata:index": "rule:admin_or_owner",
"os_compute_api:server-metadata:show": "rule:admin_or_owner",
"os_compute_api:server-metadata:delete": "rule:admin_or_owner",
"os_compute_api:server-metadata:create": "rule:admin_or_owner",
"os_compute_api:server-metadata:update": "rule:admin_or_owner",
"os_compute_api:server-metadata:update_all": "rule:admin_or_owner",
"os_compute_api:servers:discoverable": "",
"os_compute_api:os-shelve:shelve": "",
"os_compute_api:os-shelve:shelve:discoverable": "",
"os_compute_api:os-shelve:shelve_offload": "rule:admin_api",
"os_compute_api:os-simple-tenant-usage:discoverable": "",
"os_compute_api:os-simple-tenant-usage:show": "rule:admin_or_owner",
"os_compute_api:os-simple-tenant-usage:list": "rule:admin_api",
"os_compute_api:os-suspend-server:discoverable": "",
"os_compute_api:os-suspend-server:suspend": "rule:admin_or_owner",
"os_compute_api:os-suspend-server:resume": "rule:admin_or_owner",
"os_compute_api:os-tenant-networks": "rule:admin_or_owner",
"os_compute_api:os-tenant-networks:discoverable": "",
"os_compute_api:os-shelve:unshelve": "",
"os_compute_api:os-user-data:discoverable": "",
"os_compute_api:os-virtual-interfaces": "",
"os_compute_api:os-virtual-interfaces:discoverable": "",
"os_compute_api:os-volumes": "",
"os_compute_api:os-volumes:discoverable": "",
"os_compute_api:os-volumes-attachments:index": "",
"os_compute_api:os-volumes-attachments:show": "",
"os_compute_api:os-volumes-attachments:create": "",
"os_compute_api:os-volumes-attachments:update": "",
"os_compute_api:os-volumes-attachments:delete": "",
"os_compute_api:os-volumes-attachments:discoverable": "",
"os_compute_api:os-availability-zone:list": "",
"os_compute_api:os-availability-zone:discoverable": "",
"os_compute_api:os-availability-zone:detail": "rule:admin_api",
"os_compute_api:os-used-limits": "rule:admin_api",
"os_compute_api:os-used-limits:discoverable": "",
"os_compute_api:os-migrations:index": "rule:admin_api",
"os_compute_api:os-migrations:discoverable": "",
"os_compute_api:os-assisted-volume-snapshots:create": "rule:admin_api",
"os_compute_api:os-assisted-volume-snapshots:delete": "rule:admin_api",
"os_compute_api:os-assisted-volume-snapshots:discoverable": "",
"os_compute_api:os-console-auth-tokens": "rule:admin_api",
"os_compute_api:os-server-external-events:create": "rule:admin_api"
}
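Policy files like the one above are plain JSON, so they can be inspected with ordinary tools. The sketch below (standard library only; it does not evaluate rules the way the real oslo.policy engine does) loads a small excerpt and separates admin-only targets from those open to any authenticated user:

```python
import json

# A minimal sketch, not the oslo.policy engine. The sample embeds a
# small excerpt; in a real deployment you would read the contents of
# /etc/nova/policy.json instead.
sample = '''
{
    "os_compute_api:os-hypervisors": "rule:admin_api",
    "os_compute_api:servers:index": "",
    "os_compute_api:os-keypairs:delete": "rule:admin_api or user_id:%(user_id)s"
}
'''

policy = json.loads(sample)

# An empty string means the API is open to any authenticated user;
# "rule:admin_api" restricts the target to administrators.
admin_only = sorted(t for t, rule in policy.items() if rule == "rule:admin_api")
open_to_all = sorted(t for t, rule in policy.items() if rule == "")

print(admin_only)   # ['os_compute_api:os-hypervisors']
print(open_to_all)  # ['os_compute_api:servers:index']
```

Compound rules such as `rule:admin_api or user_id:%(user_id)s` fall into neither bucket here; evaluating them requires the request context, which is exactly what the policy engine provides.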
The dashboard is served to users through the Apache HTTP Server (httpd).
As a result, dashboard-related logs appear in files in the
/var/log/httpd
or /var/log/apache2
directory on the system
where the dashboard is hosted. The following table describes these files:
Log file | Description |
---|---|
access_log |
Logs all attempts to access the web server. |
error_log |
Logs all unsuccessful attempts to access the web server, along with the reason that each attempt failed. |
/var/log/horizon/horizon.log |
Log of certain user interactions |
This chapter describes how to configure the Dashboard with the Apache web server.
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
The following options allow configuration of the APIs that the Data Processing service supports.
Configuration option = Default value | Description |
---|---|
[oslo_messaging_rabbit] | |
connection_factory = single |
(String) Connection factory implementation |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP header that will be used to determine the original request protocol scheme, even if it was hidden by an SSL termination proxy. |
[retries] | |
retries_number = 5 |
(Integer) Number of times to retry the request to the client before failing |
retry_after = 10 |
(Integer) Time between retries to the client (in seconds). |
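For example, to make the service retry failed client calls more patiently, these defaults can be overridden in the service configuration file (the values below are illustrative, not recommendations):

```ini
[retries]
# Retry each failed client request up to 10 times before giving up
retries_number = 10
# Wait 5 seconds between successive retries
retry_after = 5
```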
The following tables provide a comprehensive list of the Data Processing service configuration options:
Configuration option = Default value | Description |
---|---|
[cinder] | |
api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to cinder. |
api_version = 2 |
(Integer) Version of the Cinder API to use. |
ca_file = None |
(String) Location of ca certificates file to use for cinder client requests. |
endpoint_type = internalURL |
(String) Endpoint type for cinder client requests |
[glance] | |
api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to glance. |
ca_file = None |
(String) Location of ca certificates file to use for glance client requests. |
endpoint_type = internalURL |
(String) Endpoint type for glance client requests |
[heat] | |
api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to heat. |
ca_file = None |
(String) Location of ca certificates file to use for heat client requests. |
endpoint_type = internalURL |
(String) Endpoint type for heat client requests |
[keystone] | |
api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to keystone. |
ca_file = None |
(String) Location of ca certificates file to use for keystone client requests. |
endpoint_type = internalURL |
(String) Endpoint type for keystone client requests |
[manila] | |
api_insecure = True |
(Boolean) Allow to perform insecure SSL requests to manila. |
api_version = 1 |
(Integer) Version of the manila API to use. |
ca_file = None |
(String) Location of ca certificates file to use for manila client requests. |
[neutron] | |
api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to neutron. |
ca_file = None |
(String) Location of ca certificates file to use for neutron client requests. |
endpoint_type = internalURL |
(String) Endpoint type for neutron client requests |
[nova] | |
api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to nova. |
ca_file = None |
(String) Location of ca certificates file to use for nova client requests. |
endpoint_type = internalURL |
(String) Endpoint type for nova client requests |
[swift] | |
api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to swift. |
ca_file = None |
(String) Location of ca certificates file to use for swift client requests. |
endpoint_type = internalURL |
(String) Endpoint type for swift client requests |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
admin_project_domain_name = default |
(String) The name of the domain for the service project (ex. tenant). |
admin_user_domain_name = default |
(String) The name of the domain to which the admin user belongs. |
api_workers = 1 |
(Integer) Number of workers for Sahara API service (0 means all-in-one-thread configuration). |
cleanup_time_for_incomplete_clusters = 0 |
(Integer) Maximum time (in hours) that clusters are allowed to remain in states other than “Active”, “Deleting”, or “Error”. If a cluster is not in one of those states and its last update was more than “cleanup_time_for_incomplete_clusters” hours ago, it is deleted automatically. (A value of 0 disables automatic cleanup.) |
cluster_remote_threshold = 70 |
(Integer) The same as global_remote_threshold, but for a single cluster. |
compute_topology_file = etc/sahara/compute.topology |
(String) File with nova compute topology. It should contain mapping between nova computes and racks. |
coordinator_heartbeat_interval = 1 |
(Integer) Interval size between heartbeat execution in seconds. Heartbeats are executed to make sure that connection to the coordination server is active. |
default_ntp_server = pool.ntp.org |
(String) Default ntp server for time sync |
disable_event_log = False |
(Boolean) Disables event log feature. |
edp_internal_db_enabled = True |
(Boolean) Use Sahara internal db to store job binaries. |
enable_data_locality = False |
(Boolean) Enables data locality for hadoop cluster. Also enables data locality for Swift used by hadoop. If enabled, ‘compute_topology’ and ‘swift_topology’ configuration parameters should point to OpenStack and Swift topology correspondingly. |
enable_hypervisor_awareness = True |
(Boolean) Enables four-level topology for data locality. Works only if corresponding plugin supports such mode. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
global_remote_threshold = 100 |
(Integer) Maximum number of remote operations that will be running at the same time. Note that each remote operation requires its own process to run. |
hash_ring_replicas_count = 40 |
(Integer) Number of points that belong to each member on a hash ring. A larger number leads to a better distribution. |
heat_enable_wait_condition = True |
(Boolean) Enable wait condition feature to reduce polling during cluster creation |
heat_stack_tags = data-processing-cluster |
(List) List of tags to be used during operating with stack. |
image = None |
(String) The path to an image to modify. This image will be modified in-place: be sure to target a copy if you wish to maintain a clean master image. |
job_binary_max_KB = 5120 |
(Integer) Maximum length of job binary data in kilobytes that may be stored or retrieved in a single operation. |
job_canceling_timeout = 300 |
(Integer) Timeout for canceling job execution (in seconds). Sahara will try to cancel job execution during this time. |
job_workflow_postfix = |
(String) Postfix for storing jobs in hdfs. Will be added to ‘/user/<hdfs user>/’ path. |
min_transient_cluster_active_time = 30 |
(Integer) Minimal “lifetime” in seconds for a transient cluster. Cluster is guaranteed to be “alive” within this time period. |
nameservers = |
(List) IP addresses of Designate nameservers. This is required if ‘use_designate’ is True |
node_domain = novalocal |
(String) The suffix of the node’s FQDN. In nova-network that is the dhcp_domain config parameter. |
os_region_name = None |
(String) Region name used to get services endpoints. |
periodic_coordinator_backend_url = None |
(String) The backend URL to use for distributed periodic tasks coordination. |
periodic_enable = True |
(Boolean) Enable periodic tasks. |
periodic_fuzzy_delay = 60 |
(Integer) Range in seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0). |
periodic_interval_max = 60 |
(Integer) Max interval size between periodic tasks execution in seconds. |
periodic_workers_number = 1 |
(Integer) Number of threads to run periodic tasks. |
plugins = vanilla, spark, cdh, ambari, storm, mapr |
(List) List of plugins to be loaded. Sahara preserves the order of the list when returning it. |
proxy_command = |
(String) Proxy command used to connect to instances. If set, this command should open a netcat socket, that Sahara will use for SSH and HTTP connections. Use {host} and {port} to describe the destination. Other available keywords: {tenant_id}, {network_id}, {router_id}. |
remote = ssh |
(String) A method for Sahara to execute commands on VMs. |
root_fs = None |
(String) The filesystem to mount as the root volume on the image. No value is required if only one filesystem is detected. |
rootwrap_command = sudo sahara-rootwrap /etc/sahara/rootwrap.conf |
(String) Rootwrap command to leverage. Use in conjunction with use_rootwrap=True |
swift_topology_file = etc/sahara/swift.topology |
(String) File with Swift topology. It should contain mapping between Swift nodes and racks. |
test_only = False |
(Boolean) If this flag is set, no changes will be made to the image; instead, the script will fail if discrepancies are found between the image and the intended state. |
use_barbican_key_manager = False |
(Boolean) Enable the usage of the OpenStack Key Management service provided by barbican. |
use_designate = False |
(Boolean) Use Designate for internal and external hostnames resolution |
use_floating_ips = True |
(Boolean) If set to True, Sahara will use floating IPs to communicate with instances. To make sure that all instances have floating IPs assigned in Nova Network set “auto_assign_floating_ip=True” in nova.conf. If Neutron is used for networking, make sure that all Node Groups have “floating_ip_pool” parameter defined. |
use_identity_api_v3 = True |
(Boolean) Enables Sahara to use Keystone API v3. If that flag is disabled, per-job clusters will not be terminated automatically. |
use_namespaces = False |
(Boolean) Use network namespaces for communication (only valid to use in conjunction with use_neutron=True). |
use_neutron = False |
(Boolean) Use Neutron Networking (False indicates the use of Nova networking). |
use_rootwrap = False |
(Boolean) Use rootwrap facility to allow non-root users to run the sahara services and access private network IPs (only valid to use in conjunction with use_namespaces=True) |
use_router_proxy = True |
(Boolean) Use ROUTER remote proxy. |
[castellan] | |
barbican_api_endpoint = None |
(String) The endpoint to use for connecting to the barbican api controller. By default, castellan will use the URL from the service catalog. |
barbican_api_version = v1 |
(String) Version of the barbican API, for example: “v1” |
[cluster_verifications] | |
verification_enable = True |
(Boolean) Option to enable verifications for all clusters |
verification_periodic_interval = 600 |
(Integer) Interval between two consecutive periodic tasks for verifications, in seconds. |
[conductor] | |
use_local = True |
(Boolean) Perform sahara-conductor operations locally. |
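Pulling a few of these options together, a minimal [DEFAULT] fragment for a Neutron-based deployment might look like the following (the plugin list and flag values are illustrative):

```ini
[DEFAULT]
# Load only the provisioning plugins actually in use
plugins = vanilla, spark
# Use Neutron networking rather than Nova network
use_neutron = True
# Reach instances over floating IPs; each node group must then
# define a floating_ip_pool
use_floating_ips = True
# Use Keystone API v3 so per-job transient clusters are terminated
use_identity_api_v3 = True
```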
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
proxy_user_domain_name = None |
(String) The domain Sahara will use to create new proxy users for Swift object access. |
proxy_user_role_names = Member |
(List) A list of the role names that the proxy user should assume through trust for Swift object access. |
use_domain_for_proxy_users = False |
(Boolean) Enables Sahara to use a domain for creating temporary proxy users to access Swift. If this is enabled a domain must be created for Sahara to use. |
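For example, to have Sahara create its temporary Swift proxy users in a dedicated Keystone domain, a fragment like the following could be used (the domain name sahara_proxy is illustrative and must already exist in Keystone):

```ini
[DEFAULT]
# Create temporary Swift proxy users in a dedicated domain
use_domain_for_proxy_users = True
proxy_user_domain_name = sahara_proxy
# Role(s) the proxy user assumes through trust for Swift access
proxy_user_role_names = Member
```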
Configuration option = Default value | Description |
---|---|
[object_store_access] | |
public_identity_ca_file = None |
(String) Location of ca certificate file to use for identity client requests via public endpoint |
public_object_store_ca_file = None |
(String) Location of ca certificate file to use for object-store client requests via public endpoint |
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
Configuration option = Default value | Description |
---|---|
[timeouts] | |
delete_instances_timeout = 10800 |
(Integer) Wait for instances to be deleted, in seconds |
detach_volume_timeout = 300 |
(Integer) Timeout for detaching volumes from instance, in seconds |
ips_assign_timeout = 10800 |
(Integer) Assign IPs timeout, in seconds |
wait_until_accessible = 10800 |
(Integer) Wait for instance accessibility, in seconds |
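As an illustration, a deployment that prefers to fail fast could shorten these waits in the [timeouts] section (the values shown are examples only):

```ini
[timeouts]
# Wait at most one hour (instead of three) for instance deletion
delete_instances_timeout = 3600
# Likewise for instance accessibility and IP assignment
wait_until_accessible = 3600
ips_assign_timeout = 3600
```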
Option = default value | (Type) Help string |
---|---|
[DEFAULT] edp_internal_db_enabled = True |
(BoolOpt) Use Sahara internal db to store job binaries. |
[DEFAULT] image = None |
(StrOpt) The path to an image to modify. This image will be modified in-place: be sure to target a copy if you wish to maintain a clean master image. |
[DEFAULT] nameservers = |
(ListOpt) IP addresses of Designate nameservers. This is required if ‘use_designate’ is True |
[DEFAULT] root_fs = None |
(StrOpt) The filesystem to mount as the root volume on the image. No value is required if only one filesystem is detected. |
[DEFAULT] test_only = False |
(BoolOpt) If this flag is set, no changes will be made to the image; instead, the script will fail if discrepancies are found between the image and the intended state. |
[DEFAULT] use_designate = False |
(BoolOpt) Use Designate for internal and external hostnames resolution |
[glance] api_insecure = False |
(BoolOpt) Allow to perform insecure SSL requests to glance. |
[glance] ca_file = None |
(StrOpt) Location of ca certificates file to use for glance client requests. |
[glance] endpoint_type = internalURL |
(StrOpt) Endpoint type for glance client requests |
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] plugins |
vanilla, spark, cdh, ambari |
vanilla, spark, cdh, ambari, storm, mapr |
Deprecated option | New Option |
---|---|
[DEFAULT] use_syslog |
None |
The Data Processing service (sahara) provides a scalable data-processing stack and associated management interfaces.
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
Use the following options to configure the supported databases:
Configuration option = Default value | Description |
---|---|
[cassandra] | |
api_strategy = trove.common.strategies.cluster.experimental.cassandra.api.CassandraAPIStrategy |
(String) Class that implements datastore-specific API logic. |
backup_incremental_strategy = {} |
(Dict) Incremental strategy based on the default backup strategy. For strategies that do not implement incremental backups, the runner performs full backup instead. |
backup_namespace = trove.guestagent.strategies.backup.experimental.cassandra_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = NodetoolSnapshot |
(String) Default strategy to perform backups. |
cluster_support = True |
(Boolean) Enable clusters to be created and managed. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = system |
(String) List of Guest Logs to expose for publishing. |
guestagent_strategy = trove.common.strategies.cluster.experimental.cassandra.guestagent.CassandraGuestAgentStrategy |
(String) Class that implements datastore-specific Guest Agent API logic. |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = system, system_auth, system_traces |
(List) Databases to exclude when listing databases. |
ignore_users = os_admin |
(List) Users to exclude when listing users. |
mount_point = /var/lib/cassandra |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_strategy = None |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.cassandra_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.cassandra.service.CassandraRootController |
(String) Root controller implementation for Cassandra. |
system_log_level = INFO |
(String) Cassandra log verbosity. |
taskmanager_strategy = trove.common.strategies.cluster.experimental.cassandra.taskmanager.CassandraTaskManagerStrategy |
(String) Class that implements datastore-specific task manager logic. |
tcp_ports = 7000, 7001, 7199, 9042, 9160 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
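The options above are set in the `[cassandra]` section of the Trove configuration file. A minimal sketch with illustrative values, not recommendations:

```ini
[cassandra]
# Expose the Cassandra system log for publishing (the default)
guest_log_exposed_logs = system
# Raise Cassandra log verbosity from the default INFO
system_log_level = DEBUG
# Open only the inter-node and CQL ports; applies only when
# trove_security_groups_support = True
tcp_ports = 7000, 9042
```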
Configuration option = Default value | Description |
---|---|
[couchbase] | |
backup_incremental_strategy = {} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.experimental.couchbase_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = CbBackup |
(String) Default strategy to perform backups. |
default_password_length = 24 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = |
(String) List of Guest Logs to expose for publishing. |
icmp = False |
(Boolean) Whether to permit ICMP. |
mount_point = /var/lib/couchbase |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_strategy = None |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.couchbase_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.common.service.DefaultRootController |
(String) Root controller implementation for couchbase. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
tcp_ports = 8091, 8092, 4369, 11209-11211, 21100-21199 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
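For example, to have Trove create the root user automatically at instance-create time and lengthen generated passwords, an operator might set the following in the `[couchbase]` section (illustrative values):

```ini
[couchbase]
# Return a generated root password in the instance-create response
root_on_create = True
# Generated passwords default to 24 characters for Couchbase
default_password_length = 32
```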
Configuration option = Default value | Description |
---|---|
[couchdb] | |
backup_incremental_strategy = {} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.experimental.couchdb_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = CouchDBBackup |
(String) Default strategy to perform backups. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = |
(String) List of Guest Logs to expose for publishing. |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = _users, _replicator |
(List) Databases to exclude when listing databases. |
ignore_users = os_admin, root |
(List) Users to exclude when listing users. |
mount_point = /var/lib/couchdb |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_strategy = None |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.couchdb_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.common.service.DefaultRootController |
(String) Root controller implementation for couchdb. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
tcp_ports = 5984 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
Configuration option = Default value | Description |
---|---|
[db2] | |
backup_incremental_strategy = {} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.experimental.db2_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = DB2OfflineBackup |
(String) Default strategy to perform backups. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = |
(String) List of Guest Logs to expose for publishing. |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_users = PUBLIC, DB2INST1 |
(List) Users to exclude when listing users. |
mount_point = /home/db2inst1/db2inst1 |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_strategy = None |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.db2_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.common.service.DefaultRootController |
(String) Root controller implementation for db2. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
tcp_ports = 50000 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
Configuration option = Default value | Description |
---|---|
[mariadb] | |
api_strategy = trove.common.strategies.cluster.experimental.galera_common.api.GaleraCommonAPIStrategy |
(String) Class that implements datastore-specific API logic. |
backup_incremental_strategy = {'MariaDBInnoBackupEx': 'MariaDBInnoBackupExIncremental'} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.experimental.mariadb_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = MariaDBInnoBackupEx |
(String) Default strategy to perform backups. |
cluster_support = True |
(Boolean) Enable clusters to be created and managed. |
default_password_length = ${mysql.default_password_length} |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = general,slow_query |
(String) List of Guest Logs to expose for publishing. |
guest_log_long_query_time = 1000 |
(Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time |
guestagent_strategy = trove.common.strategies.cluster.experimental.galera_common.guestagent.GaleraCommonGuestAgentStrategy |
(String) Class that implements datastore-specific Guest Agent API logic. |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = mysql, information_schema, performance_schema |
(List) Databases to exclude when listing databases. |
ignore_users = os_admin, root |
(List) Users to exclude when listing users. |
min_cluster_member_count = 3 |
(Integer) Minimum number of members in MariaDB cluster. |
mount_point = /var/lib/mysql |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_namespace = trove.guestagent.strategies.replication.experimental.mariadb_gtid |
(String) Namespace to load replication strategies from. |
replication_strategy = MariaDBGTIDReplication |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.mariadb_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.common.service.DefaultRootController |
(String) Root controller implementation for mysql. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
taskmanager_strategy = trove.common.strategies.cluster.experimental.galera_common.taskmanager.GaleraCommonTaskManagerStrategy |
(String) Class that implements datastore-specific task manager logic. |
tcp_ports = 3306, 4444, 4567, 4568 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
usage_timeout = 400 |
(Integer) Maximum time (in seconds) to wait for a Guest to become active. |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
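Because MariaDB clustering is driven by the Galera common strategies listed above, cluster sizing and guest timeouts are tuned in the same section. A sketch with illustrative values:

```ini
[mariadb]
cluster_support = True
# Galera requires at least three members for quorum
min_cluster_member_count = 3
# Allow slower guests more time (in seconds) to report active
usage_timeout = 600
```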
Configuration option = Default value | Description |
---|---|
[mongodb] | |
add_members_timeout = 300 |
(Integer) Maximum time to wait (in seconds) for a replica set initialization process to complete. |
api_strategy = trove.common.strategies.cluster.experimental.mongodb.api.MongoDbAPIStrategy |
(String) Class that implements datastore-specific API logic. |
backup_incremental_strategy = {} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.experimental.mongo_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = MongoDump |
(String) Default strategy to perform backups. |
cluster_secure = True |
(Boolean) Create secure clusters. If False, Role-Based Access Control is disabled. |
cluster_support = True |
(Boolean) Enable clusters to be created and managed. |
configsvr_port = 27019 |
(Port number) Port for instances running as config servers. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = |
(String) List of Guest Logs to expose for publishing. |
guestagent_strategy = trove.common.strategies.cluster.experimental.mongodb.guestagent.MongoDbGuestAgentStrategy |
(String) Class that implements datastore-specific Guest Agent API logic. |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = admin, local, config |
(List) Databases to exclude when listing databases. |
ignore_users = admin.os_admin, admin.root |
(List) Users to exclude when listing users. |
mongodb_port = 27017 |
(Port number) Port for mongod and mongos instances. |
mount_point = /var/lib/mongodb |
(String) Filesystem path for mounting volumes if volume support is enabled. |
num_config_servers_per_cluster = 3 |
(Integer) The number of config servers to create per cluster. |
num_query_routers_per_cluster = 1 |
(Integer) The number of query routers (mongos) to create per cluster. |
replication_strategy = None |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.mongo_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.mongodb.service.MongoDBRootController |
(String) Root controller implementation for mongodb. |
taskmanager_strategy = trove.common.strategies.cluster.experimental.mongodb.taskmanager.MongoDbTaskManagerStrategy |
(String) Class that implements datastore-specific task manager logic. |
tcp_ports = 2500, 27017, 27019 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
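The MongoDB cluster topology options combine as follows; this sketch restates the defaults explicitly, so the values are redundant but safe:

```ini
[mongodb]
cluster_support = True
# Three config servers and one mongos query router per cluster
num_config_servers_per_cluster = 3
num_query_routers_per_cluster = 1
# Listen port for mongod/mongos, and the config-server port
mongodb_port = 27017
configsvr_port = 27019
```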
Configuration option = Default value | Description |
---|---|
[mysql] | |
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.mysql_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = InnoBackupEx |
(String) Default strategy to perform backups. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = general,slow_query |
(String) List of Guest Logs to expose for publishing. |
guest_log_long_query_time = 1000 |
(Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = mysql, information_schema, performance_schema |
(List) Databases to exclude when listing databases. |
ignore_users = os_admin, root |
(List) Users to exclude when listing users. |
mount_point = /var/lib/mysql |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_namespace = trove.guestagent.strategies.replication.mysql_gtid |
(String) Namespace to load replication strategies from. |
replication_strategy = MysqlGTIDReplication |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.mysql_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.mysql.service.MySQLRootController |
(String) Root controller implementation for mysql. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
tcp_ports = 3306 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
usage_timeout = 400 |
(Integer) Maximum time (in seconds) to wait for a Guest to become active. |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
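Note that guest_log_long_query_time is deprecated in favor of a future long_query_time configuration-group option; until then, slow-query logging is still controlled here. Illustrative values:

```ini
[mysql]
guest_log_exposed_logs = general,slow_query
# DEPRECATED: statements slower than this (ms) go to the slow_query log
guest_log_long_query_time = 500
```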
Configuration option = Default value | Description |
---|---|
[pxc] | |
api_strategy = trove.common.strategies.cluster.experimental.galera_common.api.GaleraCommonAPIStrategy |
(String) Class that implements datastore-specific API logic. |
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.mysql_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = InnoBackupEx |
(String) Default strategy to perform backups. |
cluster_support = True |
(Boolean) Enable clusters to be created and managed. |
default_password_length = ${mysql.default_password_length} |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = general,slow_query |
(String) List of Guest Logs to expose for publishing. |
guest_log_long_query_time = 1000 |
(Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time |
guestagent_strategy = trove.common.strategies.cluster.experimental.galera_common.guestagent.GaleraCommonGuestAgentStrategy |
(String) Class that implements datastore-specific Guest Agent API logic. |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = mysql, information_schema, performance_schema |
(List) Databases to exclude when listing databases. |
ignore_users = os_admin, root, clusterrepuser |
(List) Users to exclude when listing users. |
min_cluster_member_count = 3 |
(Integer) Minimum number of members in PXC cluster. |
mount_point = /var/lib/mysql |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_namespace = trove.guestagent.strategies.replication.mysql_gtid |
(String) Namespace to load replication strategies from. |
replication_strategy = MysqlGTIDReplication |
(String) Default strategy for replication. |
replication_user = slave_user |
(String) User ID for the replication slave. |
restore_namespace = trove.guestagent.strategies.restore.mysql_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.pxc.service.PxcRootController |
(String) Root controller implementation for pxc. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
taskmanager_strategy = trove.common.strategies.cluster.experimental.galera_common.taskmanager.GaleraCommonTaskManagerStrategy |
(String) Class that implements datastore-specific task manager logic. |
tcp_ports = 3306, 4444, 4567, 4568 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
usage_timeout = 450 |
(Integer) Maximum time (in seconds) to wait for a Guest to become active. |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
Configuration option = Default value | Description |
---|---|
[percona] | |
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.mysql_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = InnoBackupEx |
(String) Default strategy to perform backups. |
default_password_length = ${mysql.default_password_length} |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = general,slow_query |
(String) List of Guest Logs to expose for publishing. |
guest_log_long_query_time = 1000 |
(Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the slow_query log. Will be replaced by a configuration group option: long_query_time |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = mysql, information_schema, performance_schema |
(List) Databases to exclude when listing databases. |
ignore_users = os_admin, root |
(List) Users to exclude when listing users. |
mount_point = /var/lib/mysql |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_namespace = trove.guestagent.strategies.replication.mysql_gtid |
(String) Namespace to load replication strategies from. |
replication_password = NETOU7897NNLOU |
(String) Password for replication slave user. |
replication_strategy = MysqlGTIDReplication |
(String) Default strategy for replication. |
replication_user = slave_user |
(String) User ID for the replication slave. |
restore_namespace = trove.guestagent.strategies.restore.mysql_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.common.service.DefaultRootController |
(String) Root controller implementation for percona. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
tcp_ports = 3306 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
usage_timeout = 450 |
(Integer) Maximum time (in seconds) to wait for a Guest to become active. |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
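The Percona section ships with a fixed default replication password, so deployments should override both replication credentials. A sketch with placeholder values:

```ini
[percona]
replication_user = slave_user
# Replace the shipped default with a site-specific secret
replication_password = REPLICATION_PASS
```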
Configuration option = Default value | Description |
---|---|
[postgresql] | |
backup_incremental_strategy = {'PgBaseBackup': 'PgBaseBackupIncremental'} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.experimental.postgresql_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = PgBaseBackup |
(String) Default strategy to perform backups. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = general |
(String) List of Guest Logs to expose for publishing. |
guest_log_long_query_time = 0 |
(Integer) DEPRECATED: The time in milliseconds that a statement must take in order to be logged in the ‘general’ log. A value of ‘0’ logs all statements, while ‘-1’ turns off statement logging. Will be replaced by configuration group option: log_min_duration_statement |
icmp = False |
(Boolean) Whether to permit ICMP. |
ignore_dbs = os_admin, postgres |
(List) Databases to exclude when listing databases. |
ignore_users = os_admin, postgres, root |
(List) Users to exclude when listing users. |
mount_point = /var/lib/postgresql |
(String) Filesystem path for mounting volumes if volume support is enabled. |
postgresql_port = 5432 |
(Port number) The TCP port the server listens on. |
replication_namespace = trove.guestagent.strategies.replication.experimental.postgresql_impl |
(String) Namespace to load replication strategies from. |
replication_strategy = PostgresqlReplicationStreaming |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.postgresql_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.postgresql.service.PostgreSQLRootController |
(String) Root controller implementation for postgresql. |
root_on_create = False |
(Boolean) Enable the automatic creation of the root user for the service during instance-create. The generated password for the root user is immediately returned in the response of instance-create as the ‘password’ field. |
tcp_ports = 5432 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
wal_archive_location = /mnt/wal_archive |
(String) Filesystem path storing WAL archive files when WAL-shipping based backups or replication is enabled. |
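PgBaseBackup and its incremental runner rely on WAL shipping, so the archive location must be writable on the guest. A minimal sketch restating the defaults above; note that oslo.config Dict options use `key:value` syntax in INI files:

```ini
[postgresql]
backup_strategy = PgBaseBackup
# Dict option: maps a full-backup strategy to its incremental runner
backup_incremental_strategy = PgBaseBackup:PgBaseBackupIncremental
# WAL segments are archived here for incremental backup and replication
wal_archive_location = /mnt/wal_archive
```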
Configuration option = Default value | Description |
---|---|
[redis] | |
api_strategy = trove.common.strategies.cluster.experimental.redis.api.RedisAPIStrategy |
(String) Class that implements datastore-specific API logic. |
backup_incremental_strategy = {} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = trove.guestagent.strategies.backup.experimental.redis_impl |
(String) Namespace to load backup strategies from. |
backup_strategy = RedisBackup |
(String) Default strategy to perform backups. |
cluster_support = True |
(Boolean) Enable clusters to be created and managed. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = |
(String) List of Guest Logs to expose for publishing. |
guestagent_strategy = trove.common.strategies.cluster.experimental.redis.guestagent.RedisGuestAgentStrategy |
(String) Class that implements datastore-specific Guest Agent API logic. |
icmp = False |
(Boolean) Whether to permit ICMP. |
mount_point = /var/lib/redis |
(String) Filesystem path for mounting volumes if volume support is enabled. |
replication_namespace = trove.guestagent.strategies.replication.experimental.redis_sync |
(String) Namespace to load replication strategies from. |
replication_strategy = RedisSyncReplication |
(String) Default strategy for replication. |
restore_namespace = trove.guestagent.strategies.restore.experimental.redis_impl |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.common.service.DefaultRootController |
(String) Root controller implementation for redis. |
taskmanager_strategy = trove.common.strategies.cluster.experimental.redis.taskmanager.RedisTaskManagerStrategy |
(String) Class that implements datastore-specific task manager logic. |
tcp_ports = 6379, 16379 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
Configuration option = Default value | Description |
---|---|
[vertica] | |
api_strategy = trove.common.strategies.cluster.experimental.vertica.api.VerticaAPIStrategy |
(String) Class that implements datastore-specific API logic. |
backup_incremental_strategy = {} |
(Dict) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup. |
backup_namespace = None |
(String) Namespace to load backup strategies from. |
backup_strategy = None |
(String) Default strategy to perform backups. |
cluster_member_count = 3 |
(Integer) Number of members in Vertica cluster. |
cluster_support = True |
(Boolean) Enable clusters to be created and managed. |
default_password_length = 36 |
(Integer) Character length of generated passwords. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
guest_log_exposed_logs = |
(String) List of Guest Logs to expose for publishing. |
guestagent_strategy = trove.common.strategies.cluster.experimental.vertica.guestagent.VerticaGuestAgentStrategy |
(String) Class that implements datastore-specific Guest Agent API logic. |
icmp = False |
(Boolean) Whether to permit ICMP. |
min_ksafety = 0 |
(Integer) Minimum k-safety setting permitted for Vertica clusters. |
mount_point = /var/lib/vertica |
(String) Filesystem path for mounting volumes if volume support is enabled. |
readahead_size = 2048 |
(Integer) Size (MB) to be set as readahead_size for the data volume. |
replication_strategy = None |
(String) Default strategy for replication. |
restore_namespace = None |
(String) Namespace to load restore strategies from. |
root_controller = trove.extensions.vertica.service.VerticaRootController |
(String) Root controller implementation for Vertica. |
taskmanager_strategy = trove.common.strategies.cluster.experimental.vertica.taskmanager.VerticaTaskManagerStrategy |
(String) Class that implements datastore-specific task manager logic. |
tcp_ports = 5433, 5434, 22, 5444, 5450, 4803 |
(List) List of TCP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
udp_ports = 5433, 4803, 4804, 6453 |
(List) List of UDP ports and/or port ranges to open in the security group (only applicable if trove_security_groups_support is True). |
volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
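Vertica backups are disabled by default (backup_namespace and backup_strategy are None), while clustering is sized by the options above. The values below restate the defaults:

```ini
[vertica]
cluster_support = True
# A three-node cluster; k-safety of 0 or greater is permitted
cluster_member_count = 3
min_ksafety = 0
# readahead_size (MB) applied to the data volume
readahead_size = 2048
```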
The corresponding log file of each Database service is stored in the
/var/log/trove/
directory of the host on which each service runs.
Log filename | Service that logs to the file |
---|---|
trove-api.log |
Database service API service |
trove-conductor.log |
Database service conductor service |
trove-guestagent.log |
Database service guest agent service |
trove-taskmanager.log |
Database service task manager service |
Option = default value | (Type) Help string |
---|---|
[cassandra] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[cassandra] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[cassandra] system_log_level = INFO |
(StrOpt) Cassandra log verbosity. |
[couchbase] default_password_length = 24 |
(IntOpt) Character length of generated passwords. |
[couchbase] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[couchdb] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[couchdb] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[db2] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[db2] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[mariadb] default_password_length = ${mysql.default_password_length} |
(IntOpt) Character length of generated passwords. |
[mariadb] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[mongodb] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[mongodb] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[mysql] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[mysql] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[percona] default_password_length = ${mysql.default_password_length} |
(IntOpt) Character length of generated passwords. |
[percona] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[postgresql] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[postgresql] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[postgresql] replication_namespace = trove.guestagent.strategies.replication.experimental.postgresql_impl |
(StrOpt) Namespace to load replication strategies from. |
[postgresql] replication_strategy = PostgresqlReplicationStreaming |
(StrOpt) Default strategy for replication. |
[postgresql] wal_archive_location = /mnt/wal_archive |
(StrOpt) Filesystem path storing WAL archive files when WAL-shipping based backups or replication is enabled. |
[pxc] default_password_length = ${mysql.default_password_length} |
(IntOpt) Character length of generated passwords. |
[pxc] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[redis] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[redis] icmp = False |
(BoolOpt) Whether to permit ICMP. |
[vertica] default_password_length = 36 |
(IntOpt) Character length of generated passwords. |
[vertica] icmp = False |
(BoolOpt) Whether to permit ICMP. |
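These per-datastore options live in configuration sections named after the datastore. For example, a minimal `[postgresql]` section using the defaults listed above might look like this:

```ini
[postgresql]
# Character length of generated passwords (IntOpt)
default_password_length = 36
# Whether to permit ICMP (BoolOpt)
icmp = False
# Filesystem path storing WAL archive files when WAL-shipping
# based backups or replication is enabled (StrOpt)
wal_archive_location = /mnt/wal_archive
```

Note that some datastores, such as `[mariadb]` and `[percona]`, default to substituting another section's value (for example, `${mysql.default_password_length}`).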
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] agent_call_high_timeout |
60 |
600 |
[DEFAULT] agent_call_low_timeout |
5 |
15 |
[DEFAULT] dns_auth_url |
http://0.0.0.0 |
|
[DEFAULT] dns_endpoint_url |
0.0.0.0 |
http://0.0.0.0 |
[DEFAULT] dns_hostname |
localhost |
|
[DEFAULT] dns_management_base_url |
http://0.0.0.0 |
|
[DEFAULT] max_accepted_volume_size |
5 |
10 |
[DEFAULT] max_instances_per_tenant |
5 |
10 |
[DEFAULT] max_volumes_per_tenant |
20 |
40 |
[DEFAULT] module_types |
ping |
ping, new_relic_license |
[DEFAULT] resize_time_out |
600 |
900 |
[DEFAULT] state_change_wait_time |
180 |
600 |
[DEFAULT] usage_timeout |
900 |
1800 |
[cassandra] guest_log_exposed_logs |
system |
|
[db2] backup_strategy |
DB2Backup |
DB2OfflineBackup |
[mariadb] backup_incremental_strategy |
{'InnoBackupEx': 'InnoBackupExIncremental'} |
{'MariaDBInnoBackupEx': 'MariaDBInnoBackupExIncremental'} |
[mariadb] backup_namespace |
trove.guestagent.strategies.backup.mysql_impl |
trove.guestagent.strategies.backup.experimental.mariadb_impl |
[mariadb] backup_strategy |
InnoBackupEx |
MariaDBInnoBackupEx |
[mariadb] restore_namespace |
trove.guestagent.strategies.restore.mysql_impl |
trove.guestagent.strategies.restore.experimental.mariadb_impl |
[mongodb] root_controller |
trove.extensions.common.service.DefaultRootController |
trove.extensions.mongodb.service.MongoDBRootController |
[mongodb] tcp_ports |
2500, 27017 |
2500, 27017, 27019 |
[postgresql] backup_incremental_strategy |
{} |
{'PgBaseBackup': 'PgBaseBackupIncremental'} |
[postgresql] backup_strategy |
PgDump |
PgBaseBackup |
[postgresql] ignore_dbs |
postgres |
os_admin, postgres |
[postgresql] root_controller |
trove.extensions.common.service.DefaultRootController |
trove.extensions.postgresql.service.PostgreSQLRootController |
Deprecated option | New Option |
---|---|
[DEFAULT] default_password_length |
[couchbase] default_password_length |
[DEFAULT] default_password_length |
[redis] default_password_length |
[DEFAULT] default_password_length |
[cassandra] default_password_length |
[DEFAULT] default_password_length |
[mysql] default_password_length |
[DEFAULT] default_password_length |
[mariadb] default_password_length |
[DEFAULT] default_password_length |
[postgresql] default_password_length |
[DEFAULT] default_password_length |
[vertica] default_password_length |
[DEFAULT] default_password_length |
[pxc] default_password_length |
[DEFAULT] default_password_length |
[percona] default_password_length |
[DEFAULT] default_password_length |
[mongodb] default_password_length |
[DEFAULT] default_password_length |
[db2] default_password_length |
[DEFAULT] default_password_length |
[couchdb] default_password_length |
[DEFAULT] use_syslog |
None |
The Database service provides a scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines.
The following tables provide a comprehensive list of the Database service configuration options.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
admin_roles = admin |
(List) Roles to add to an admin user. |
api_paste_config = api-paste.ini |
(String) File name for the paste.deploy config for trove-api. |
bind_host = 0.0.0.0 |
(IP) IP address the API server will listen on. |
bind_port = 8779 |
(Port number) Port the API server will listen on. |
black_list_regex = None |
(String) Exclude IP addresses that match this regular expression. |
db_api_implementation = trove.db.sqlalchemy.api |
(String) API Implementation for Trove database access. |
hostname_require_valid_ip = True |
(Boolean) Require user hostnames to be valid IP addresses. |
http_delete_rate = 200 |
(Integer) Maximum number of HTTP ‘DELETE’ requests (per minute). |
http_get_rate = 200 |
(Integer) Maximum number of HTTP ‘GET’ requests (per minute). |
http_mgmt_post_rate = 200 |
(Integer) Maximum number of management HTTP ‘POST’ requests (per minute). |
http_post_rate = 200 |
(Integer) Maximum number of HTTP ‘POST’ requests (per minute). |
http_put_rate = 200 |
(Integer) Maximum number of HTTP ‘PUT’ requests (per minute). |
injected_config_location = /etc/trove/conf.d |
(String) Path to folder on the Guest where config files will be injected during instance creation. |
instances_page_size = 20 |
(Integer) Page size for listing instances. |
max_header_line = 16384 |
(Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). |
os_region_name = RegionOne |
(String) Region name of this node. Used when searching catalog. |
region = LOCAL_DEV |
(String) The region this service is located. |
tcp_keepidle = 600 |
(Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. |
trove_api_workers = None |
(Integer) Number of workers for the API service. The default will be the number of CPUs available. |
trove_auth_url = http://0.0.0.0:5000/v2.0 |
(URI) Trove authentication URL. |
trove_conductor_workers = None |
(Integer) Number of workers for the Conductor service. The default will be the number of CPUs available. |
trove_security_group_name_prefix = SecGroup |
(String) Prefix to use when creating Security Groups. |
trove_security_group_rule_cidr = 0.0.0.0/0 |
(String) CIDR to use when creating Security Group Rules. |
trove_security_groups_support = True |
(Boolean) Whether Trove should add Security Groups on create. |
users_page_size = 20 |
(Integer) Page size for listing users. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
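Put together, the API options above might appear in `trove.conf` as in the following sketch. The values shown are the listed defaults, except the worker count, which is an illustrative override (the option defaults to the number of CPUs):

```ini
[DEFAULT]
# IP address and port the API server will listen on
bind_host = 0.0.0.0
bind_port = 8779
# Number of API workers; defaults to the number of CPUs when unset
trove_api_workers = 4

[oslo_middleware]
# Parse proxy headers only when running behind a proxy
enable_proxy_headers_parsing = False
```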
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
backup_aes_cbc_key = default_aes_cbc_key |
(String) Default OpenSSL aes_cbc key. |
backup_chunk_size = 65536 |
(Integer) Chunk size (in bytes) to stream to the Swift container. This should be in multiples of 128 bytes, since this is the size of an md5 digest block allowing the process to update the file checksum during streaming. See: http://stackoverflow.com/questions/1131220/ |
backup_runner = trove.guestagent.backup.backup_types.InnoBackupEx |
(String) Runner to use for backups. |
backup_runner_options = {} |
(Dict) Additional options to be passed to the backup runner. |
backup_segment_max_size = 2147483648 |
(Integer) Maximum size (in bytes) of each segment of the backup file. |
backup_swift_container = database_backups |
(String) Swift container to put backups in. |
backup_use_gzip_compression = True |
(Boolean) Compress backups using gzip. |
backup_use_openssl_encryption = True |
(Boolean) Encrypt backups using OpenSSL. |
backup_use_snet = False |
(Boolean) Send backup files over snet. |
backups_page_size = 20 |
(Integer) Page size for listing backups. |
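A backup configuration combining these options might look like the following sketch, which keeps the listed defaults:

```ini
[DEFAULT]
# Swift container to put backups in
backup_swift_container = database_backups
# Compress and encrypt backup streams
backup_use_gzip_compression = True
backup_use_openssl_encryption = True
# Maximum size (in bytes) of each backup segment (2 GB)
backup_segment_max_size = 2147483648
```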
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
remote_cinder_client = trove.common.remote.cinder_client |
(String) Client to send Cinder calls to. |
remote_dns_client = trove.common.remote.dns_client |
(String) Client to send DNS calls to. |
remote_guest_client = trove.common.remote.guest_client |
(String) Client to send Guest Agent calls to. |
remote_heat_client = trove.common.remote.heat_client |
(String) Client to send Heat calls to. |
remote_neutron_client = trove.common.remote.neutron_client |
(String) Client to send Neutron calls to. |
remote_nova_client = trove.common.remote.nova_client |
(String) Client to send Nova calls to. |
remote_swift_client = trove.common.remote.swift_client |
(String) Client to send Swift calls to. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cluster_delete_time_out = 180 |
(Integer) Maximum time (in seconds) to wait for a cluster delete. |
cluster_usage_timeout = 36000 |
(Integer) Maximum time (in seconds) to wait for a cluster to become active. |
clusters_page_size = 20 |
(Integer) Page size for listing clusters. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
configurations_page_size = 20 |
(Integer) Page size for listing configurations. |
databases_page_size = 20 |
(Integer) Page size for listing databases. |
default_datastore = None |
(String) The default datastore id or name to use if one is not provided by the user. If the default value is None, the field becomes required in the instance create request. |
default_neutron_networks = |
(List) List of IDs for management networks which should be attached to the instance regardless of what NICs are specified in the create API call. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
expected_filetype_suffixes = json |
(List) Filetype endings not to be reattached to an ID by the utils method correct_id_with_req. |
format_options = -m 5 |
(String) Options to use when formatting a volume. |
host = 0.0.0.0 |
(IP) Host to listen for RPC messages. |
module_aes_cbc_key = module_aes_cbc_key |
(String) OpenSSL aes_cbc key for module encryption. |
module_types = ping, new_relic_license |
(List) A list of module types supported. A module type corresponds to the name of a ModuleDriver. |
modules_page_size = 20 |
(Integer) Page size for listing modules. |
network_label_regex = ^private$ |
(String) Regular expression to match Trove network labels. |
notification_service_id = {'mongodb': 'c8c907af-7375-456f-b929-b637ff9209ee', 'percona': 'fd1723f5-68d2-409c-994f-a4a197892a17', 'mysql': '2f3ff068-2bfb-4f70-9a9d-a6bb65bc084b', 'pxc': '75a628c3-f81b-4ffb-b10a-4087c26bc854', 'db2': 'e040cd37-263d-4869-aaa6-c62aa97523b5', 'cassandra': '459a230d-4e97-4344-9067-2a54a310b0ed', 'mariadb': '7a4f82cc-10d2-4bc6-aadc-d9aacc2a3cb5', 'postgresql': 'ac277e0d-4f21-40aa-b347-1ea31e571720', 'couchbase': 'fa62fe68-74d9-4779-a24e-36f19602c415', 'couchdb': 'f0a9ab7b-66f7-4352-93d7-071521d44c7c', 'redis': 'b216ffc5-1947-456c-a4cf-70f94c05f7d0', 'vertica': 'a8d805ae-a3b2-c4fd-gb23-b62cee5201ae'} |
(Dict) Unique ID to tag notification events. |
num_tries = 3 |
(Integer) Number of times to check if a volume exists. |
pybasedir = /usr/lib/python/site-packages/trove/trove |
(String) Directory where the Trove python module is installed. |
pydev_path = None |
(String) Set path to pydevd library, used if pydevd is not found in python sys.path. |
quota_notification_interval = 3600 |
(Integer) Seconds to wait between pushing events. |
report_interval = 30 |
(Integer) The interval (in seconds) which periodic tasks are run. |
sql_query_logging = False |
(Boolean) Allow insecure logging while executing queries through SQLAlchemy. |
taskmanager_queue = taskmanager |
(String) Message queue name the Taskmanager will listen to. |
template_path = /etc/trove/templates/ |
(String) Path which leads to datastore templates. |
timeout_wait_for_service = 120 |
(Integer) Maximum time (in seconds) to wait for a service to become alive. |
usage_timeout = 1800 |
(Integer) Maximum time (in seconds) to wait for a Guest to become active. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
ip_regex = None |
(String) List IP addresses that match this regular expression. |
nova_client_version = 2.12 |
(String) The version of the compute service client. |
nova_compute_endpoint_type = publicURL |
(String) Service endpoint type to use when searching catalog. |
nova_compute_service_type = compute |
(String) Service type to use when searching catalog. |
nova_compute_url = None |
(URI) URL without the tenant segment. |
root_grant = ALL |
(List) Permissions to grant to the ‘root’ user. |
root_grant_option = True |
(Boolean) Assign the ‘root’ user GRANT permissions. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
backlog = 4096 |
(Integer) Number of backlog requests to configure the socket with. |
pydev_debug = disabled |
(String) Enable or disable pydev remote debugging. If value is ‘auto’ tries to connect to remote debugger server, but in case of error continues running with debugging disabled. |
pydev_debug_host = None |
(String) Pydev debug server host (localhost by default). |
pydev_debug_port = 5678 |
(Port number) Pydev debug server port (5678 by default). |
[profiler] | |
connection_string = messaging:// |
(String) Connection string for a notifier backend. Default value is messaging://, which sets the notifier to oslo_messaging. |
enabled = False |
(Boolean) Enables profiling for all services on this node. Default value is False (the profiling feature is fully disabled). |
hmac_keys = SECRET_KEY |
(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
trace_sqlalchemy = False |
(Boolean) Enables SQL request profiling in services. Default value is False (SQL requests will not be traced). |
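A minimal `[profiler]` sketch that enables profiling. Both `enabled` and `hmac_keys` must be set, and SECRET_KEY is a placeholder you must replace with your own random key:

```ini
[profiler]
# Enable profiling; both enabled and hmac_keys must be set
enabled = True
# Shared secret(s); at least one key must be common across services
hmac_keys = SECRET_KEY
# Send traces to the oslo_messaging notifier
connection_string = messaging://
# Optionally trace SQL statements as well
trace_sqlalchemy = True
```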
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dns_account_id = |
(String) Tenant ID for DNSaaS. |
dns_auth_url = http://0.0.0.0 |
(URI) Authentication URL for DNSaaS. |
dns_domain_id = |
(String) Domain ID used for adding DNS entries. |
dns_domain_name = |
(String) Domain name used for adding DNS entries. |
dns_driver = trove.dns.driver.DnsDriver |
(String) Driver for DNSaaS. |
dns_endpoint_url = http://0.0.0.0 |
(URI) Endpoint URL for DNSaaS. |
dns_hostname = localhost |
(Hostname) Hostname used for adding DNS entries. |
dns_instance_entry_factory = trove.dns.driver.DnsInstanceEntryFactory |
(String) Factory for adding DNS entries. |
dns_management_base_url = http://0.0.0.0 |
(URI) Management URL for DNSaaS. |
dns_passkey = |
(String) Passkey for DNSaaS. |
dns_region = |
(String) Region name for DNSaaS. |
dns_service_type = |
(String) Service Type for DNSaaS. |
dns_time_out = 120 |
(Integer) Maximum time (in seconds) to wait for a DNS entry add. |
dns_ttl = 300 |
(Integer) Time (in seconds) before a refresh of DNS information occurs. |
dns_username = |
(String) Username for DNSaaS. |
trove_dns_support = False |
(Boolean) Whether Trove should add DNS entries on create (using Designate DNSaaS). |
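A sketch of a DNS-enabled configuration. The endpoint URL and domain name shown are placeholders for your own Designate deployment:

```ini
[DEFAULT]
# Add DNS entries on instance create via Designate
trove_dns_support = True
dns_driver = trove.dns.driver.DnsDriver
# Placeholder endpoint and domain values; replace with your own
dns_auth_url = http://192.0.2.10
dns_domain_name = db.example.com.
dns_ttl = 300
```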
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
agent_call_high_timeout = 600 |
(Integer) Maximum time (in seconds) to wait for Guest Agent ‘slow’ requests (such as restarting the database). |
agent_call_low_timeout = 15 |
(Integer) Maximum time (in seconds) to wait for Guest Agent ‘quick’ requests (such as retrieving a list of users or databases). |
agent_heartbeat_expiry = 60 |
(Integer) Time (in seconds) after which a guest is considered unreachable. |
agent_heartbeat_time = 10 |
(Integer) Maximum time (in seconds) for the Guest Agent to reply to a heartbeat request. |
agent_replication_snapshot_timeout = 36000 |
(Integer) Maximum time (in seconds) to wait for taking a Guest Agent replication snapshot. |
guest_config = /etc/trove/trove-guestagent.conf |
(String) Path to the Guest Agent config file to be injected during instance creation. |
guest_id = None |
(String) ID of the Guest Instance. |
guest_info = guest_info.conf |
(String) The guest info filename found in the injected config location. If a full path is specified, then it will be used as the path to the guest info file. |
guest_log_container_name = database_logs |
(String) Name of container that stores guest log components. |
guest_log_expiry = 2592000 |
(Integer) Expiry (in seconds) of objects in guest log container. |
guest_log_limit = 1000000 |
(Integer) Maximum size of a chunk saved in guest log container. |
mount_options = defaults,noatime |
(String) Options to use when mounting a volume. |
storage_namespace = trove.common.strategies.storage.swift |
(String) Namespace to load the default storage strategy from. |
storage_strategy = SwiftStorage |
(String) Default strategy to store backups. |
usage_sleep_time = 5 |
(Integer) Time to sleep during the check for an active Guest. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
heat_endpoint_type = publicURL |
(String) Service endpoint type to use when searching catalog. |
heat_service_type = orchestration |
(String) Service type to use when searching catalog. |
heat_time_out = 60 |
(Integer) Maximum time (in seconds) to wait for a Heat request to complete. |
heat_url = None |
(URI) URL without the tenant segment. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
network_driver = trove.network.nova.NovaNetwork |
(String) Describes the actual network manager used for the management of network attributes (security groups, floating IPs, etc.). |
neutron_endpoint_type = publicURL |
(String) Service endpoint type to use when searching catalog. |
neutron_service_type = network |
(String) Service type to use when searching catalog. |
neutron_url = None |
(URI) URL without the tenant segment. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nova_proxy_admin_pass = |
(String) Admin password used to connect to Nova. |
nova_proxy_admin_tenant_id = |
(String) Admin tenant ID used to connect to Nova. |
nova_proxy_admin_tenant_name = |
(String) Admin tenant name used to connect to Nova. |
nova_proxy_admin_user = |
(String) Admin username used to connect to Nova. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
max_accepted_volume_size = 10 |
(Integer) Default maximum volume size (in GB) for an instance. |
max_backups_per_tenant = 50 |
(Integer) Default maximum number of backups created by a tenant. |
max_instances_per_tenant = 10 |
(Integer) Default maximum number of instances per tenant. |
max_volumes_per_tenant = 40 |
(Integer) Default maximum volume capacity (in GB) spanning across all Trove volumes per tenant. |
quota_driver = trove.quota.quota.DbQuotaDriver |
(String) Default driver to use for quota checks. |
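The quota options above might be combined as follows (a sketch using the listed defaults):

```ini
[DEFAULT]
# Per-tenant defaults; these apply unless overridden per tenant
max_instances_per_tenant = 10
max_volumes_per_tenant = 40
max_backups_per_tenant = 50
# Default maximum volume size (in GB) for an instance
max_accepted_volume_size = 10
quota_driver = trove.quota.quota.DbQuotaDriver
```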
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations. |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
swift_endpoint_type = publicURL |
(String) Service endpoint type to use when searching catalog. |
swift_service_type = object-store |
(String) Service type to use when searching catalog. |
swift_url = None |
(URI) URL ending in AUTH_. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cloudinit_location = /etc/trove/cloudinit |
(String) Path to folder with cloudinit scripts. |
datastore_manager = None |
(String) Manager class in the Guest Agent, set up by the Taskmanager on instance provision. |
datastore_registry_ext = {} |
(Dict) Extension for default datastore managers. Allows the use of custom managers for each of the datastores supported by Trove. |
exists_notification_interval = 3600 |
(Integer) Seconds to wait between pushing events. |
exists_notification_transformer = None |
(String) Transformer for exists notifications. |
reboot_time_out = 120 |
(Integer) Maximum time (in seconds) to wait for a server reboot. |
resize_time_out = 900 |
(Integer) Maximum time (in seconds) to wait for a server resize. |
restore_usage_timeout = 36000 |
(Integer) Maximum time (in seconds) to wait for a Guest instance restored from a backup to become active. |
revert_time_out = 600 |
(Integer) Maximum time (in seconds) to wait for a server resize revert. |
server_delete_time_out = 60 |
(Integer) Maximum time (in seconds) to wait for a server delete. |
state_change_poll_time = 3 |
(Integer) Interval between state change poll requests (seconds). |
state_change_wait_time = 600 |
(Integer) Maximum time (in seconds) to wait for a state change. |
update_status_on_fail = True |
(Boolean) Set the service and instance task statuses to ERROR when an instance fails to become active within the configured usage_timeout. |
usage_sleep_time = 5 |
(Integer) Time to sleep during the check for an active Guest. |
use_heat = False |
(Boolean) Use Heat for provisioning. |
use_nova_server_config_drive = True |
(Boolean) Use config drive for file injection when booting an instance. |
use_nova_server_volume = False |
(Boolean) Whether to provision a Cinder volume for the Nova instance. |
verify_swift_checksum_on_restore = True |
(Boolean) Enable verification of Swift checksum before starting restore. Makes sure the checksum of original backup matches the checksum of the Swift backup file. |
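A sketch of the Taskmanager timeout options using the listed defaults:

```ini
[DEFAULT]
# Taskmanager timeouts (seconds); the values shown are the defaults
state_change_wait_time = 600
resize_time_out = 900
restore_usage_timeout = 36000
# Verify Swift checksums before restoring a backup
verify_swift_checksum_on_restore = True
```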
Configuration option = Default value | Description |
---|---|
[upgrade_levels] | |
conductor = icehouse |
(String) Set a version cap for messages sent to conductor services |
guestagent = icehouse |
(String) Set a version cap for messages sent to guestagent services |
taskmanager = icehouse |
(String) Set a version cap for messages sent to taskmanager services |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
block_device_mapping = vdb |
(String) Block device to map onto the created instance. |
cinder_endpoint_type = publicURL |
(String) Service endpoint type to use when searching catalog. |
cinder_service_type = volumev2 |
(String) Service type to use when searching catalog. |
cinder_url = None |
(URI) URL without the tenant segment. |
cinder_volume_type = None |
(String) Volume type to use when provisioning a Cinder volume. |
device_path = /dev/vdb |
(String) Device path for volume if volume support is enabled. |
trove_volume_support = True |
(Boolean) Whether to provision a Cinder volume for datadir. |
volume_format_timeout = 120 |
(Integer) Maximum time (in seconds) to wait for a volume format. |
volume_fstype = ext3 |
(String) File system type used to format a volume. |
volume_time_out = 60 |
(Integer) Maximum time (in seconds) to wait for a volume attach. |
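A sketch of a volume-backed configuration using the listed defaults:

```ini
[DEFAULT]
# Provision a Cinder volume for the datadir
trove_volume_support = True
# Block device mapping and the path it appears at in the guest
block_device_mapping = vdb
device_path = /dev/vdb
# File system type and attach timeout
volume_fstype = ext3
volume_time_out = 60
```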
The Identity API can be configured by changing the following options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
admin_endpoint = None |
(String) The base admin endpoint URL for Keystone that is advertised to clients (NOTE: this does NOT affect how Keystone listens for connections). Defaults to the base host URL of the request. For example, if keystone receives a request to http://server:35357/v3/users, then this option will be automatically treated as http://server:35357. You should only need to set this option if either the value of the base URL contains a path that keystone does not automatically infer (/prefix/v3), or if the endpoint should be found on a different host. |
admin_token = None |
(String) Using this feature is NOT recommended. Instead, use the keystone-manage bootstrap command. The value of this option is treated as a “shared secret” that can be used to bootstrap Keystone through the API. This “token” does not represent a user (it has no identity), and carries no explicit authorization (it effectively bypasses most authorization checks). If set to None, the value is ignored and the admin_token middleware is effectively disabled. However, to completely disable admin_token in production (highly recommended, as it presents a security risk), remove AdminTokenAuthMiddleware (the admin_token_auth filter) from your paste application pipelines (for example, in keystone-paste.ini). |
domain_id_immutable = True |
(Boolean) DEPRECATED: Set this to false if you want to enable the ability for user, group and project entities to be moved between domains by updating their domain_id attribute. Allowing such movement is not recommended if the scope of a domain admin is being restricted by use of an appropriate policy file (see etc/policy.v3cloudsample.json as an example). This feature is deprecated and will be removed in a future release, in favor of strictly immutable domain IDs. The option to set domain_id_immutable to false has been deprecated in the M release and will be removed in the O release. |
list_limit = None |
(Integer) The maximum number of entities that will be returned in a collection. This global limit may be then overridden for a specific driver, by specifying a list_limit in the appropriate section (for example, [assignment]). No limit is set by default. In larger deployments, it is recommended that you set this to a reasonable number to prevent operations like listing all users and projects from placing an unnecessary load on the system. |
max_param_size = 64 |
(Integer) Limit the sizes of user & project ID/names. |
max_project_tree_depth = 5 |
(Integer) Maximum depth of the project hierarchy, excluding the project acting as a domain at the top of the hierarchy. WARNING: Setting it to a large value may adversely impact performance. |
max_token_size = 8192 |
(Integer) Similar to [DEFAULT] max_param_size, but provides an exception for token values. With PKI / PKIZ tokens, this needs to be set close to 8192 (any higher, and other HTTP implementations may break), depending on the size of your service catalog and other factors. With Fernet tokens, this can be set as low as 255. With UUID tokens, this should be set to 32. |
member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab |
(String) Similar to the [DEFAULT] member_role_name option, this represents the default role ID used to associate users with their default projects in the v2 API. This will be used as the explicit role where one is not specified by the v2 API. You do not need to set this value unless you want keystone to use an existing role with a different ID, other than the arbitrarily defined _member_ role (in which case, you should set [DEFAULT] member_role_name as well). |
member_role_name = _member_ |
(String) This is the role name used in combination with the [DEFAULT] member_role_id option; see that option for more detail. You do not need to set this option unless you want keystone to use an existing role (in which case, you should set [DEFAULT] member_role_id as well). |
public_endpoint = None |
(String) The base public endpoint URL for Keystone that is advertised to clients (NOTE: this does NOT affect how Keystone listens for connections). Defaults to the base host URL of the request. For example, if keystone receives a request to http://server:5000/v3/users, then this option will be automatically treated as http://server:5000. You should only need to set this option if either the value of the base URL contains a path that keystone does not automatically infer (/prefix/v3), or if the endpoint should be found on a different host. |
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO |
(String) DEPRECATED: The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. This option has been deprecated in the N release and will be removed in the P release. Use oslo.middleware.http_proxy_to_wsgi configuration instead. |
strict_password_check = False |
(Boolean) If set to true, strict password length checking is performed for password manipulation. If a password exceeds the maximum length, the operation will fail with an HTTP 403 Forbidden error. If set to false, passwords are automatically truncated to the maximum length. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
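For example, these options might be set in keystone.conf as in the following sketch. The `controller` host name is a placeholder, and the endpoint options are needed only when keystone cannot infer them from the request:

```ini
[DEFAULT]
# Advertised endpoints (placeholders); set only when they cannot
# be inferred from the request
public_endpoint = http://controller:5000
admin_endpoint = http://controller:35357
# Fernet tokens allow a much smaller maximum token size
max_token_size = 255
```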
OpenStack Identity supports customizable token providers. This is specified
in the [token]
section of the configuration file. The token provider
controls the token construction, validation, and revocation operations.
You can register your own token provider by configuring the following property:
Note
More commonly, you can use this option to change the token provider to one of the ones built in. Alternatively, you can use it to configure your own token provider.
provider - token provider driver. Defaults to uuid. Implemented by keystone.token.providers.uuid.Provider. This is the entry point for the token provider in the keystone.token.provider namespace.

Each token format uses different technologies to achieve various performance, scaling, and architectural requirements. The Identity service includes fernet, pkiz, pki, and uuid token providers.
Below is the detailed list of the token formats:

uuid tokens must be persisted (using the back end specified in the [token] driver option), but do not require any extra configuration or setup.

pki and pkiz tokens can be validated offline, without making HTTP calls to keystone. However, these formats require that certificates be installed and distributed to facilitate signing tokens and later validating those signatures.

fernet tokens do not need to be persisted at all, but require that you run keystone-manage fernet_setup (also see the keystone-manage fernet_rotate command).

Warning
UUID, PKI, PKIZ, and Fernet tokens are all bearer tokens. They must be protected from unnecessary disclosure to prevent unauthorized access.
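For example, to select the fernet provider described above, set the following in keystone.conf (after generating a key repository with keystone-manage fernet_setup):

```ini
[token]
# Choose one of: uuid (default), fernet, pkiz, pki
provider = fernet
```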
You can use federation for the Identity service (keystone) in two ways:
Supporting keystone as a SP: consuming identity assertions issued by an external Identity Provider, such as SAML assertions or OpenID Connect claims.
Supporting keystone as an IdP: fulfilling authentication requests on behalf of Service Providers.
Note
It is also possible to have one keystone act as an SP that consumes Identity from another keystone acting as an IdP.
There is currently support for two major federation protocols: SAML and OpenID Connect.
To enable federation:
Run keystone under Apache. See Configure the Apache HTTP server for more information.
Note
Other application servers, such as nginx, have support for federation extensions that may work but are not tested by the community.
Configure Apache to use a federation capable module. We recommend Shibboleth, see the Shibboleth documentation for more information.
Note
Another option is mod_auth_mellon
; see the module's GitHub repo
for more information.
Configure federation in keystone.
Note
The external IdP is responsible for authenticating users and communicates the result of authentication to keystone using authentication assertions. Keystone maps these values to keystone user groups and assignments created in keystone.
To have keystone as an SP, you will need to configure keystone to accept assertions from external IdPs. Examples of external IdPs are:
Configure authentication drivers in keystone.conf
by adding the
authentication methods to the [auth]
section in keystone.conf
.
Ensure the names are the same as the protocol names added via the Identity
API v3.
For example:
[auth]
methods = external,password,token,mapped,openid
Note
mapped
and openid
are the federation specific drivers.
The other names in the example are not related to federation.
Create local keystone groups and assign roles.
Important
Keystone requires group-based role assignments to authorize federated users. The federation mapping engine maps federated users into local user groups, which are the actors in keystone’s role assignments.
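For example, a local group and a role assignment for federated users might be created with the openstack client. The group, project, and role names here are illustrative:

```
$ openstack group create federated-users
$ openstack role add --group federated-users --project demo Member
```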
Create an IdP object in keystone. The object must represent the IdP you will use to authenticate end users:
PUT /OS-FEDERATION/identity_providers/{idp_id}
More configuration information for IdPs can be found at http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#register-an-identity-provider.
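As a sketch, the IdP object might be registered with a curl call such as the following. The IdP ID myidp and the remote_ids value are illustrative:

```
$ curl -s -X PUT \
  -H "X-Auth-Token: $OS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"identity_provider": {"enabled": true, "remote_ids": ["https://myidp.example.com/saml2/idp"]}}' \
  http://localhost:5000/v3/OS-FEDERATION/identity_providers/myidp | python -mjson.tool
```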
Add mapping rules:
PUT /OS-FEDERATION/mappings/{mapping_id}
More configuration information for mapping rules can be found at http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#create-a-mapping.
Note
The only keystone API objects that support mapping are groups and users.
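A minimal mapping with a single rule might be registered as follows. The mapping ID myidp_mapping is illustrative; full rule examples appear later in this section:

```
$ curl -s -X PUT \
  -H "X-Auth-Token: $OS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"mapping": {"rules": [{"remote": [{"type": "Email"}], "local": [{"user": {"name": "{0}"}}]}]}}' \
  http://localhost:5000/v3/OS-FEDERATION/mappings/myidp_mapping | python -mjson.tool
```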
Add a protocol object and specify the mapping ID you want to use with the combination of the IdP and protocol:
PUT /OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}
More configuration information for protocols can be found at http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#add-a-protocol-and-attribute-mapping-to-an-identity-provider.
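As a sketch, a saml2 protocol object that references an existing mapping might be added as follows. The IDs myidp and myidp_mapping are illustrative:

```
$ curl -s -X PUT \
  -H "X-Auth-Token: $OS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"protocol": {"mapping_id": "myidp_mapping"}}' \
  http://localhost:5000/v3/OS-FEDERATION/identity_providers/myidp/protocols/saml2 | python -mjson.tool
```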
Authenticate externally and generate an unscoped token in keystone:
Note
Unlike other authentication methods in keystone, the user does
not issue an HTTP POST request with authentication data in the request body.
To start federated authentication, a user must access a dedicated URL
that contains the IdP’s and protocol’s identifiers.
The URL has the format:
/v3/OS-FEDERATION/identity_providers/{idp_id}/protocols/{protocol_id}/auth
.
GET/POST /OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth
Determine accessible resources. By using the previously returned token, the user can issue requests to list the projects and domains that are accessible.
GET /OS-FEDERATION/projects
Get a scoped token. A federated user can request a scoped token using the unscoped token. A project or domain can be specified by either ID or name. An ID is sufficient to uniquely identify a project or domain.
POST /auth/tokens
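A sketch of the scoping request; the token ID and project ID shown are placeholders:

```
$ curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["token"], "token": {"id": "<unscoped-token-id>"}}, "scope": {"project": {"id": "<project-id>"}}}}' \
  http://localhost:5000/v3/auth/tokens | python -mjson.tool
```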
When acting as an IdP, the primary role of keystone is to issue assertions about users owned by keystone. This is done using PySAML2.
There are certain settings in keystone.conf
that must be set up prior to attempting
to federate multiple keystone deployments.
Within keystone.conf
, assign values to the [saml]
related fields, for example:
[saml]
certfile=/etc/keystone/ssl/certs/ca.pem
keyfile=/etc/keystone/ssl/private/cakey.pem
idp_entity_id=https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
idp_sso_endpoint=https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
idp_metadata_path=/etc/keystone/saml2_idp_metadata.xml
We recommend setting the following Organization configuration options. Ensure these values do not contain special characters that may cause problems as part of a URL:
idp_organization_name=example_company
idp_organization_display_name=Example Corp.
idp_organization_url=example.com
As with the Organization options, the Contact options are not necessary, but it is advisable to set these values:
idp_contact_company=example_company
idp_contact_name=John
idp_contact_surname=Smith
idp_contact_email=jsmith@example.com
idp_contact_telephone=555-55-5555
idp_contact_type=technical
Metadata must be exchanged to create a trust between the IdP and the SP.
To create metadata for your keystone IdP, run the keystone-manage
command
and pipe the output to a file. For example:
$ keystone-manage saml_idp_metadata > /etc/keystone/saml2_idp_metadata.xml
Note
The file location must match the value of the idp_metadata_path
configuration option assigned previously.
To set up keystone as a Service Provider properly, you will need to understand which protocols are supported by external IdPs. For example, keystone as an SP can allow identities to federate in from an ADFS IdP, but it must be configured to understand the SAML v2.0 protocol, since ADFS issues assertions using SAML v2.0. Some examples of federated protocols include:
The following instructions are an example of how you can configure keystone as an SP.
Create a new SP with an ID of BETA.
Create an sp_url
of http://beta.example.com/Shibboleth.sso/SAML2/ECP.
Create an auth_url
of http://beta.example.com:5000/v3/OS-FEDERATION/identity_providers/beta/protocols/saml2/auth.
Note
Use the sp_url
when creating a SAML assertion for BETA signed by
the current keystone IdP. Use the auth_url
when retrieving the token
for BETA once the SAML assertion is sent.
Set the enabled
field to true
. It is set to
false
by default.
Your output should reflect the following example:
$ curl -s -X PUT \
-H "X-Auth-Token: $OS_TOKEN" \
-H "Content-Type: application/json" \
-d '{"service_provider": {"auth_url": "http://beta.example.com:5000/v3/OS-FEDERATION/identity_providers/beta/protocols/saml2/auth", "sp_url": "http://beta.example.com/Shibboleth.sso/SAML2/ECP", "enabled": true}}' \
http://localhost:5000/v3/OS-FEDERATION/service_providers/BETA | python -mjson.tool
Keystone acting as an IdP is known as keystone-to-keystone (k2k) federation: one keystone acts as the SP and another keystone acts as the IdP. All IdPs issue assertions about the identities they own using a protocol.
Mapping adds a set of rules to map federation attributes to keystone users or groups. An IdP has exactly one mapping specified per protocol.
A mapping is a translation between assertions provided from an IdP and the permission and roles applied by an SP. Given an assertion from an IdP, an SP applies a mapping to translate attributes from the IdP to known roles. A mapping is typically owned by an SP.
Mapping objects can be used multiple times by different combinations of IdP and protocol.
A rule hierarchy is as follows:
{
"rules": [
{
"local": [
{
"<user> or <group>"
}
],
"remote": [
{
"<condition>"
}
]
}
]
}
rules
: the top-level list of rules.

local
: a rule containing information on what local attributes will be mapped.

remote
: a rule containing information on what remote attributes will be mapped.

condition
: contains information on conditions that allow a rule; can only be set in a remote rule.

For more information on mapping rules, see http://docs.openstack.org/developer/keystone/federation/federated_identity.html#mapping-rules.
Mapping creation starts with the communication between the IdP and SP. The IdP usually provides a set of assertions that their users have in their assertion document. The SP will have to map those assertions to known groups and roles. For example:
Identity Provider 1:
name: jsmith
groups: hacker
other: <assertion information>
The Service Provider may have 3 groups:
Admin Group
Developer Group
User Group
The mapping created by the Service Provider might look like:
Local:
Group: Developer Group
Remote:
Groups: hacker
The Developer Group
may have a role assignment on the
Developer Project
. When jsmith authenticates against IdP 1, the IdP
presents that assertion to the SP. The SP maps the jsmith user to the
Developer Group
because the assertion says jsmith is a member of
the hacker
group.
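Expressed as an actual mapping document, the example above might look like the following. The remote attribute name groups, the domain ID, and the group name are assumptions; match them to what your IdP actually asserts and to the groups you created in keystone:

```
{
    "rules": [
        {
            "remote": [
                {
                    "type": "groups",
                    "any_one_of": ["hacker"]
                }
            ],
            "local": [
                {
                    "group": {
                        "domain": {"id": "default"},
                        "name": "Developer Group"
                    }
                }
            ]
        }
    ]
}
```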
A bare bones mapping is sufficient if you would like all federated users to have the same authorization in the SP cloud. However, mapping is quite powerful and flexible. You can map different remote users into different user groups in keystone, limited only by the number of assertions your IdP makes about each user.
A mapping is composed of a list of rules, and each rule is further composed of a list of remote attributes and a list of local attributes. If a rule is matched, all of the local attributes are applied in the SP. For a rule to match, all of the remote attributes it defines must match.
In the base case, a federated user simply needs an assertion containing an email address to be identified in the SP cloud. To achieve that, only one rule is needed that requires the presence of one remote attribute:
{
"rules": [
{
"remote": [
{
"type": "Email"
}
],
"local": [
{
"user": {
"name": "{0}"
}
}
]
}
]
}
However, that is not particularly useful, as the federated user would receive no
authorization. To rectify this, you can map all federated users with email
addresses into a federated-users
group in the default
domain. All
federated users will then be able to consume whatever role assignments that
user group has already received in keystone:
Note
In this example, there is only one rule requiring one remote attribute.
{
"rules": [
{
"remote": [
{
"type": "Email"
}
],
"local": [
{
"user": {
"name": "{0}"
}
},
{
"group": {
"domain": {
"id": "0cd5e9"
},
"name": "federated-users"
}
}
]
}
]
}
This example can be expanded by adding a second rule that conveys
additional authorization to only a subset of federated users. Federated users
with a title attribute that matches either Manager
or
Supervisor
are granted the hypothetical observer
role, which would
allow them to perform any read-only API call in the cloud:
{
"rules": [
{
"remote": [
{
"type": "Email"
}
],
"local": [
{
"user": {
"name": "{0}"
}
},
{
"group": {
"domain": {
"id": "default"
},
"name": "federated-users"
}
}
]
},
{
"remote": [
{
"type": "Title",
"any_one_of": [".*Manager$", "Supervisor"],
"regex": "true"
}
],
"local": [
{
"group": {
"domain": {
"id": "default"
},
"name": "observers"
}
}
]
}
]
}
Note
any_one_of
and regex
in the rule above map federated users into
the observers
group when a user’s Title
assertion matches any of
the regular expressions specified in the any_one_of
attribute.
Keystone also supports the following:

not_any_of
: matches any assertion that does not include one of the specified values.

blacklist
: matches all assertions of the specified type except those included in the specified value.

whitelist
: does not match any assertion except those listed in the specified value.

The Identity service is configured in the /etc/keystone/keystone.conf
file.
The following tables provide a comprehensive list of the Identity service options.
Configuration option = Default value | Description |
---|---|
[assignment] | |
driver = None |
(String) Entrypoint for the assignment backend driver in the keystone.assignment namespace. Only an SQL driver is supplied. If an assignment driver is not specified, the identity driver will choose the assignment driver (driver selection based on [identity]/driver option is deprecated and will be removed in the “O” release). |
prohibited_implied_role = admin |
(List) A list of role names which are prohibited from being an implied role. |
Configuration option = Default value | Description |
---|---|
[auth] | |
external = None |
(String) Entrypoint for the external (REMOTE_USER) auth plugin module in the keystone.auth.external namespace. Supplied drivers are DefaultDomain and Domain. The default driver is DefaultDomain. |
methods = external, password, token, oauth1 |
(List) Allowed authentication methods. |
oauth1 = None |
(String) Entrypoint for the oAuth1.0 auth plugin module in the keystone.auth.oauth1 namespace. |
password = None |
(String) Entrypoint for the password auth plugin module in the keystone.auth.password namespace. |
token = None |
(String) Entrypoint for the token auth plugin module in the keystone.auth.token namespace. |
Configuration option = Default value | Description |
---|---|
[eventlet_server_ssl] | |
ca_certs = /etc/keystone/ssl/certs/ca.pem |
(String) DEPRECATED: Path of the CA cert file for SSL. |
cert_required = False |
(Boolean) DEPRECATED: Require client certificate. |
certfile = /etc/keystone/ssl/certs/keystone.pem |
(String) DEPRECATED: Path of the certfile for SSL. For non-production environments, you may be interested in using keystone-manage ssl_setup to generate self-signed certificates. |
enable = False |
(Boolean) DEPRECATED: Toggle for SSL support on the Keystone eventlet servers. |
keyfile = /etc/keystone/ssl/private/keystonekey.pem |
(String) DEPRECATED: Path of the keyfile for SSL. |
[signing] | |
ca_certs = /etc/keystone/ssl/certs/ca.pem |
(String) DEPRECATED: Path of the CA for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
ca_key = /etc/keystone/ssl/private/cakey.pem |
(String) DEPRECATED: Path of the CA key for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com |
(String) DEPRECATED: Certificate subject (auto generated certificate) for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
certfile = /etc/keystone/ssl/certs/signing_cert.pem |
(String) DEPRECATED: Path of the certfile for token signing. For non-production environments, you may be interested in using keystone-manage pki_setup to generate self-signed certificates. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
key_size = 2048 |
(Integer) DEPRECATED: Key size (in bits) for token signing cert (auto generated certificate). PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
keyfile = /etc/keystone/ssl/private/signing_key.pem |
(String) DEPRECATED: Path of the keyfile for token signing. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
valid_days = 3650 |
(Integer) DEPRECATED: Days the token signing cert is valid for (auto generated certificate). PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
[ssl] | |
ca_key = /etc/keystone/ssl/private/cakey.pem |
(String) Path of the CA key file for SSL. |
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=localhost |
(String) SSL certificate subject (auto generated certificate). |
key_size = 1024 |
(Integer) SSL key length (in bits) (auto generated certificate). |
valid_days = 3650 |
(Integer) Days the certificate is valid for once signed (auto generated certificate). |
Configuration option = Default value | Description |
---|---|
[catalog] | |
cache_time = None |
(Integer) Time to cache catalog data (in seconds). This has no effect unless global and catalog caching are enabled. |
caching = True |
(Boolean) Toggle for catalog caching. This has no effect unless global caching is enabled. |
driver = sql |
(String) Entrypoint for the catalog backend driver in the keystone.catalog namespace. Supplied drivers are kvs, sql, templated, and endpoint_filter.sql |
list_limit = None |
(Integer) Maximum number of entities that will be returned in a catalog collection. |
template_file = default_catalog.templates |
(String) Catalog template file name for use with the template catalog backend. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
insecure_debug = False |
(Boolean) If set to true, then the server will return information in HTTP responses that may allow an unauthenticated or authenticated user to get more information than normal, such as additional details about why authentication failed. This may be useful for debugging but is insecure. |
Configuration option = Default value | Description |
---|---|
[credential] | |
driver = sql |
(String) Entrypoint for the credential backend driver in the keystone.credential namespace. |
Configuration option = Default value | Description |
---|---|
[audit] | |
namespace = openstack |
(String) namespace prefix for generated id |
Configuration option = Default value | Description |
---|---|
[domain_config] | |
cache_time = 300 |
(Integer) TTL (in seconds) to cache domain config data. This has no effect unless domain config caching is enabled. |
caching = True |
(Boolean) Toggle for domain config caching. This has no effect unless global caching is enabled. |
driver = sql |
(String) Entrypoint for the domain config backend driver in the keystone.resource.domain_config namespace. |
Configuration option = Default value | Description |
---|---|
[federation] | |
assertion_prefix = |
(String) Value to be used when filtering assertion parameters from the environment. |
driver = sql |
(String) Entrypoint for the federation backend driver in the keystone.federation namespace. |
federated_domain_name = Federated |
(String) A domain name that is reserved to allow federated ephemeral users to have a domain concept. Note that an admin will not be able to create a domain with this name or update an existing domain to this name. You are not advised to change this value unless you really have to. |
remote_id_attribute = None |
(String) Value to be used to obtain the entity ID of the Identity Provider from the environment (e.g. if using the mod_shib plugin this value is Shib-Identity-Provider). |
sso_callback_template = /etc/keystone/sso_callback_template.html |
(String) Location of Single Sign-On callback handler, will return a token to a trusted dashboard host. |
trusted_dashboard = [] |
(Multi-valued) A list of trusted dashboard hosts. Before accepting a Single Sign-On request to return a token, the origin host must be a member of the trusted_dashboard list. This configuration option may be repeated for multiple values. For example: trusted_dashboard=http://acme.com/auth/websso trusted_dashboard=http://beta.com/auth/websso |
Configuration option = Default value | Description |
---|---|
[fernet_tokens] | |
key_repository = /etc/keystone/fernet-keys/ |
(String) Directory containing Fernet token keys. |
max_active_keys = 3 |
(Integer) This controls how many keys are held in rotation by keystone-manage fernet_rotate before they are discarded. The default value of 3 means that keystone will maintain one staged key, one primary key, and one secondary key. Increasing this value means that additional secondary keys will be kept in the rotation. |
Configuration option = Default value | Description |
---|---|
[identity] | |
cache_time = 600 |
(Integer) Time to cache identity data (in seconds). This has no effect unless global and identity caching are enabled. |
caching = True |
(Boolean) Toggle for identity caching. This has no effect unless global caching is enabled. |
default_domain_id = default |
(String) This references the domain to use for all Identity API v2 requests (which are not aware of domains). A domain with this ID will be created for you by keystone-manage db_sync in migration 008. The domain referenced by this ID cannot be deleted on the v3 API, to prevent accidentally breaking the v2 API. There is nothing special about this domain, other than the fact that it must exist in order to maintain support for your v2 clients. |
domain_config_dir = /etc/keystone/domains |
(String) Path for Keystone to locate the domain specific identity configuration files if domain_specific_drivers_enabled is set to true. |
domain_configurations_from_database = False |
(Boolean) Extract the domain specific configuration options from the resource backend where they have been stored with the domain data. This feature is disabled by default (in which case the domain specific options will be loaded from files in the domain configuration directory); set to true to enable. |
domain_specific_drivers_enabled = False |
(Boolean) A subset (or all) of domains can have their own identity driver, each with their own partial configuration options, stored in either the resource backend or in a file in a domain configuration directory (depending on the setting of domain_configurations_from_database). Only values specific to the domain need to be specified in this manner. This feature is disabled by default; set to true to enable. |
driver = sql |
(String) Entrypoint for the identity backend driver in the keystone.identity namespace. Supplied drivers are ldap and sql. |
list_limit = None |
(Integer) Maximum number of entities that will be returned in an identity collection. |
max_password_length = 4096 |
(Integer) Maximum supported length for user passwords; decrease to improve performance. |
Configuration option = Default value | Description |
---|---|
[kvs] | |
backends = |
(List) Extra dogpile.cache backend modules to register with the dogpile.cache library. |
config_prefix = keystone.kvs |
(String) Prefix for building the configuration dictionary for the KVS region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. |
default_lock_timeout = 5 |
(Integer) Default lock timeout (in seconds) for distributed locking. |
enable_key_mangler = True |
(Boolean) Toggle to disable using a key-mangling function to ensure fixed length keys. This is toggle-able for debugging purposes, it is highly recommended to always leave this set to true. |
Configuration option = Default value | Description |
---|---|
[ldap] | |
alias_dereferencing = default |
(String) The LDAP dereferencing option for queries. The “default” option falls back to using default dereferencing configured by your ldap.conf. |
allow_subtree_delete = False |
(Boolean) Delete subtrees using the subtree delete control. Only enable this option if your LDAP server supports subtree deletion. |
auth_pool_connection_lifetime = 60 |
(Integer) End user auth connection lifetime in seconds. |
auth_pool_size = 100 |
(Integer) End user auth connection pool size. |
chase_referrals = None |
(Boolean) Override the system’s default referral chasing behavior for queries. |
debug_level = None |
(Integer) Sets the LDAP debugging level for LDAP calls. A value of 0 means that debugging is not enabled. This value is a bitmask, consult your LDAP documentation for possible values. |
dumb_member = cn=dumb,dc=nonexistent |
(String) DN of the “dummy member” to use when “use_dumb_member” is enabled. |
group_additional_attribute_mapping = |
(List) Additional attribute mappings for groups. Attribute mapping format is <ldap_attr>:<user_attr>, where ldap_attr is the attribute in the LDAP entry and user_attr is the Identity API attribute. |
group_allow_create = True |
(Boolean) DEPRECATED: Allow group creation in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release. |
group_allow_delete = True |
(Boolean) DEPRECATED: Allow group deletion in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release. |
group_allow_update = True |
(Boolean) DEPRECATED: Allow group update in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release. |
group_attribute_ignore = |
(List) List of attributes stripped off the group on update. |
group_desc_attribute = description |
(String) LDAP attribute mapped to group description. |
group_filter = None |
(String) LDAP search filter for groups. |
group_id_attribute = cn |
(String) LDAP attribute mapped to group id. |
group_member_attribute = member |
(String) LDAP attribute mapped to show group membership. |
group_members_are_ids = False |
(Boolean) If the members of the group objectclass are user IDs rather than DNs, set this to true. This is the case when using posixGroup as the group objectclass and OpenDirectory. |
group_name_attribute = ou |
(String) LDAP attribute mapped to group name. |
group_objectclass = groupOfNames |
(String) LDAP objectclass for groups. |
group_tree_dn = None |
(String) Search base for groups. Defaults to the suffix value. |
page_size = 0 |
(Integer) Maximum results per page; a value of zero (“0”) disables paging. |
password = None |
(String) Password for the BindDN to query the LDAP server. |
pool_connection_lifetime = 600 |
(Integer) Connection lifetime in seconds. |
pool_connection_timeout = -1 |
(Integer) Connector timeout in seconds. Value -1 indicates indefinite wait for response. |
pool_retry_delay = 0.1 |
(Floating point) Time span in seconds to wait between two reconnect trials. |
pool_retry_max = 3 |
(Integer) Maximum count of reconnect trials. |
pool_size = 10 |
(Integer) Connection pool size. |
query_scope = one |
(String) The LDAP scope for queries, “one” represents oneLevel/singleLevel and “sub” represents subtree/wholeSubtree options. |
suffix = cn=example,cn=com |
(String) LDAP server suffix |
tls_cacertdir = None |
(String) CA certificate directory path for communicating with LDAP servers. |
tls_cacertfile = None |
(String) CA certificate file path for communicating with LDAP servers. |
tls_req_cert = demand |
(String) Specifies what checks to perform on client certificates in an incoming TLS session. |
url = ldap://localhost |
(String) URL(s) for connecting to the LDAP server. Multiple LDAP URLs may be specified as a comma separated string. The first URL to successfully bind is used for the connection. |
use_auth_pool = True |
(Boolean) Enable LDAP connection pooling for end user authentication. If use_pool is disabled, then this setting is meaningless and is not used at all. |
use_dumb_member = False |
(Boolean) If true, will add a dummy member to groups. This is required if the objectclass for groups requires the “member” attribute. |
use_pool = True |
(Boolean) Enable LDAP connection pooling. |
use_tls = False |
(Boolean) Enable TLS for communicating with LDAP servers. |
user = None |
(String) User BindDN to query the LDAP server. |
user_additional_attribute_mapping = |
(List) List of additional LDAP attributes used for mapping additional attribute mappings for users. Attribute mapping format is <ldap_attr>:<user_attr>, where ldap_attr is the attribute in the LDAP entry and user_attr is the Identity API attribute. |
user_allow_create = True |
(Boolean) DEPRECATED: Allow user creation in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release. |
user_allow_delete = True |
(Boolean) DEPRECATED: Allow user deletion in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release. |
user_allow_update = True |
(Boolean) DEPRECATED: Allow user updates in LDAP backend. Write support for Identity LDAP backends has been deprecated in the M release and will be removed in the O release. |
user_attribute_ignore = default_project_id |
(List) List of attributes stripped off the user on update. |
user_default_project_id_attribute = None |
(String) LDAP attribute mapped to default_project_id for users. |
user_description_attribute = description |
(String) LDAP attribute mapped to user description. |
user_enabled_attribute = enabled |
(String) LDAP attribute mapped to user enabled flag. |
user_enabled_default = True |
(String) Default value to enable users. This should match an appropriate int value if the LDAP server uses non-boolean (bitmask) values to indicate if a user is enabled or disabled. If this is not set to “True” the typical value is “512”. This is typically used when “user_enabled_attribute = userAccountControl”. |
user_enabled_emulation = False |
(Boolean) If true, Keystone uses an alternative method to determine if a user is enabled or not by checking if they are a member of the “user_enabled_emulation_dn” group. |
user_enabled_emulation_dn = None |
(String) DN of the group entry to hold enabled users when using enabled emulation. |
user_enabled_emulation_use_group_config = False |
(Boolean) Use the “group_member_attribute” and “group_objectclass” settings to determine membership in the emulated enabled group. |
user_enabled_invert = False |
(Boolean) Invert the meaning of the boolean enabled values. Some LDAP servers use a boolean lock attribute where “true” means an account is disabled. Setting “user_enabled_invert = true” will allow these lock attributes to be used. This setting will have no effect if “user_enabled_mask” or “user_enabled_emulation” settings are in use. |
user_enabled_mask = 0 |
(Integer) Bitmask integer to indicate the bit that the enabled value is stored in if the LDAP server represents “enabled” as a bit on an integer rather than a boolean. A value of “0” indicates the mask is not used. If this is not set to “0” the typical value is “2”. This is typically used when “user_enabled_attribute = userAccountControl”. |
user_filter = None |
(String) LDAP search filter for users. |
user_id_attribute = cn |
(String) LDAP attribute mapped to user id. WARNING: must not be a multivalued attribute. |
user_mail_attribute = mail |
(String) LDAP attribute mapped to user email. |
user_name_attribute = sn |
(String) LDAP attribute mapped to user name. |
user_objectclass = inetOrgPerson |
(String) LDAP objectclass for users. |
user_pass_attribute = userPassword |
(String) LDAP attribute mapped to password. |
user_tree_dn = None |
(String) Search base for users. Defaults to the suffix value. |
Configuration option = Default value | Description |
---|---|
[identity_mapping] | |
backward_compatible_ids = True |
(Boolean) The format of user and group IDs changed in Juno for backends that do not generate UUIDs (e.g. LDAP), with keystone providing a hash mapping to the underlying attribute in LDAP. By default this mapping is disabled, which ensures that existing IDs will not change. Even when the mapping is enabled by using domain specific drivers, any users and groups from the default domain being handled by LDAP will still not be mapped to ensure their IDs remain backward compatible. Setting this value to False will enable the mapping for even the default LDAP driver. It is only safe to do this if you do not already have assignments for users and groups from the default LDAP domain, and it is acceptable for Keystone to provide the different IDs to clients than it did previously. Typically this means that the only time you can set this value to False is when configuring a fresh installation. |
driver = sql |
(String) Entrypoint for the identity mapping backend driver in the keystone.identity.id_mapping namespace. |
generator = sha256 |
(String) Entrypoint for the public ID generator for user and group entities in the keystone.identity.id_generator namespace. The Keystone identity mapper only supports generators that produce no more than 64 characters. |
Configuration option = Default value | Description |
---|---|
[memcache] | |
servers = localhost:11211 |
(List) Memcache servers in the format of “host:port”. |
socket_timeout = 3 |
(Integer) Timeout in seconds for every call to a server. This is used by the key value store system (e.g. token pooled memcached persistence backend). |
Configuration option = Default value | Description |
---|---|
[oauth1] | |
access_token_duration = 86400 |
(Integer) Duration (in seconds) for the OAuth Access Token. |
driver = sql |
(String) Entrypoint for the OAuth backend driver in the keystone.oauth1 namespace. |
request_token_duration = 28800 |
(Integer) Duration (in seconds) for the OAuth Request Token. |
Configuration option = Default value | Description |
---|---|
[os_inherit] | |
enabled = True |
(Boolean) DEPRECATED: role-assignment inheritance to projects from owning domain or from projects higher in the hierarchy can be optionally disabled. In the future, this option will be removed and the hierarchy will be always enabled. The option to enable the OS-INHERIT extension has been deprecated in the M release and will be removed in the O release. The OS-INHERIT extension will be enabled by default. |
| Configuration option = Default value | Description |
|---|---|
| **[policy]** | |
| `driver = sql` | (String) Entrypoint for the policy backend driver in the keystone.policy namespace. Supplied drivers are rules and sql. |
| `list_limit = None` | (Integer) Maximum number of entities that will be returned in a policy collection. |
| Configuration option = Default value | Description |
|---|---|
| **[revoke]** | |
| `cache_time = 3600` | (Integer) Time to cache the revocation list and the revocation events (in seconds). This has no effect unless global and token caching are enabled. |
| `caching = True` | (Boolean) Toggle for revocation event caching. This has no effect unless global caching is enabled. |
| `driver = sql` | (String) Entrypoint for an implementation of the backend for persisting revocation events in the keystone.revoke namespace. Supplied drivers are kvs and sql. |
| `expiration_buffer = 1800` | (Integer) This value (in seconds) is added to token expiration before a revocation event may be removed from the backend. |
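As a sketch, a `[revoke]` section that shortens the revocation cache TTL might read as follows (the values are illustrative; remember that the cache settings only take effect when global and token caching are enabled):

```ini
[revoke]
driver = sql
caching = True
# Cache revocation events for 30 minutes instead of the default hour.
cache_time = 1800
expiration_buffer = 1800
```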
| Configuration option = Default value | Description |
|---|---|
| **[role]** | |
| `cache_time = None` | (Integer) TTL (in seconds) to cache role data. This has no effect unless global caching is enabled. |
| `caching = True` | (Boolean) Toggle for role caching. This has no effect unless global caching is enabled. |
| `driver = None` | (String) Entrypoint for the role backend driver in the keystone.role namespace. Supplied drivers are ldap and sql. |
| `list_limit = None` | (Integer) Maximum number of entities that will be returned in a role collection. |
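As an illustration, a `[role]` section that sets a role-data cache TTL and caps collection sizes might read (the numbers are examples, not recommendations):

```ini
[role]
caching = True
# Cache role data for 10 minutes; no TTL is set by default.
cache_time = 600
# Cap role collections; unset means no limit.
list_limit = 100
```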
| Configuration option = Default value | Description |
|---|---|
| **[saml]** | |
| `assertion_expiration_time = 3600` | (Integer) Default TTL, in seconds, for any SAML assertion generated by Keystone. |
| `certfile = /etc/keystone/ssl/certs/signing_cert.pem` | (String) Path of the certfile for SAML signing. For non-production environments, you may be interested in using keystone-manage pki_setup to generate self-signed certificates. Note, the path cannot contain a comma. |
| `idp_contact_company = None` | (String) Company of contact person. |
| `idp_contact_email = None` | (String) Email address of contact person. |
| `idp_contact_name = None` | (String) Given name of contact person. |
| `idp_contact_surname = None` | (String) Surname of contact person. |
| `idp_contact_telephone = None` | (String) Telephone number of contact person. |
| `idp_contact_type = other` | (String) The contact type describing the main point of contact for the identity provider. |
| `idp_entity_id = None` | (String) Entity ID value for unique Identity Provider identification. Usually the FQDN is set with a suffix. A value is required to generate IdP metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/idp |
| `idp_lang = en` | (String) Language used by the organization. |
| `idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml` | (String) Path to the Identity Provider metadata file. This file should be generated with the keystone-manage saml_idp_metadata command. |
| `idp_organization_display_name = None` | (String) Organization name to be displayed. |
| `idp_organization_name = None` | (String) Organization name the installation belongs to. |
| `idp_organization_url = None` | (String) URL of the organization. |
| `idp_sso_endpoint = None` | (String) Identity Provider Single-Sign-On service value, required in the Identity Provider's metadata. A value is required to generate IdP metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/sso |
| `keyfile = /etc/keystone/ssl/private/signing_key.pem` | (String) Path of the keyfile for SAML signing. Note, the path cannot contain a comma. |
| `relay_state_prefix = ss:mem:` | (String) The prefix to use for the RelayState SAML attribute, used when generating ECP wrapped assertions. |
| `xmlsec1_binary = xmlsec1` | (String) Binary to be called for XML signing. Install the appropriate package, specify an absolute path, or adjust your PATH environment variable if the binary cannot be found. |
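Putting the required pieces together, a hypothetical `[saml]` section for publishing IdP metadata might look like this (all host names, organization details, and contact details below are made up; the endpoint URLs follow the pattern shown in the option descriptions above):

```ini
[saml]
# Required to generate IdP metadata.
idp_entity_id = https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
idp_sso_endpoint = https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
# Signing material; neither path may contain a comma.
certfile = /etc/keystone/ssl/certs/signing_cert.pem
keyfile = /etc/keystone/ssl/private/signing_key.pem
# Organization and contact details published in the metadata.
idp_organization_name = example-corp
idp_organization_display_name = Example Corp.
idp_organization_url = https://www.example.com/
idp_contact_name = Jane
idp_contact_surname = Doe
idp_contact_email = jane.doe@example.com
```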
| Configuration option = Default value | Description |
|---|---|
| **[DEFAULT]** | |
| `crypt_strength = 10000` | (Integer) The value passed as the keyword "rounds" to passlib's encrypt method. This option represents a trade off between security and performance. Higher values lead to slower performance, but higher security. Changing this option will only affect newly created passwords as existing password hashes already have a fixed number of rounds applied, so it is safe to tune this option in a running cluster. For more information, see https://pythonhosted.org/passlib/password_hash_api.html#choosing-the-right-rounds-value |
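Keystone hands this value to passlib as the hashing `rounds`. The trade-off it controls can be illustrated with the standard library's PBKDF2, used here only as a stand-in for passlib (the password and salt are made up for the demonstration):

```python
import hashlib

password = b"correct horse battery staple"  # example password
salt = b"example-salt"                      # illustrative fixed salt

# More rounds means more CPU work per hash: brute-force attempts slow
# down proportionally, but so does every legitimate login.
fast = hashlib.pbkdf2_hmac("sha512", password, salt, 1000)
slow = hashlib.pbkdf2_hmac("sha512", password, salt, 100000)

# The round count is baked into the result: the same password and salt
# produce different hashes, which is why changing crypt_strength only
# affects newly created passwords.
assert fast != slow
assert len(fast) == len(slow) == 64  # SHA-512 digest size in bytes
```

The same reasoning applies to passlib: existing hashes record their own round count, so they keep verifying unchanged after the option is tuned.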
| Configuration option = Default value | Description |
|---|---|
| **[token]** | |
| `allow_rescope_scoped_token = True` | (Boolean) Allow rescoping of scoped tokens. Setting allow_rescope_scoped_token to false prevents a user from exchanging a scoped token for any other token. |
| `bind = ` | (List) External auth mechanisms that should add bind information to the token, e.g. kerberos, x509. |
| `cache_time = None` | (Integer) Time to cache tokens (in seconds). This has no effect unless global and token caching are enabled. |
| `caching = True` | (Boolean) Toggle for token system caching. This has no effect unless global caching is enabled. |
| `driver = sql` | (String) Entrypoint for the token persistence backend driver in the keystone.token.persistence namespace. Supplied drivers are kvs, memcache, memcache_pool, and sql. |
| `enforce_token_bind = permissive` | (String) Enforcement policy on tokens presented to Keystone with bind information. One of disabled, permissive, strict, required, or a specifically required bind mode, e.g. kerberos or x509, to require binding to that authentication. |
| `expiration = 3600` | (Integer) Amount of time a token should remain valid (in seconds). |
| `hash_algorithm = md5` | (String) DEPRECATED: The hash algorithm to use for PKI tokens. This can be set to any algorithm that hashlib supports. WARNING: Before changing this value, the auth_token middleware must be configured with the hash_algorithms option, otherwise token revocation will not be processed correctly. PKI token support has been deprecated in the M release and will be removed in the O release. Fernet or UUID tokens are recommended. |
| `infer_roles = True` | (Boolean) Add roles to the token that are not explicitly added, but that are linked implicitly to other roles. |
| `provider = uuid` | (String) Controls the token construction, validation, and revocation operations. Entrypoint in the keystone.token.provider namespace. Core providers are [fernet\|pkiz\|pki\|uuid]. |
| `revoke_by_id = True` | (Boolean) Revoke token by token identifier. Setting revoke_by_id to true enables various forms of enumerating tokens, e.g. list tokens for user. These enumerations are processed to determine the list of tokens to revoke. Only disable if you are switching to using the Revoke extension with a backend other than KVS, which stores events in memory. |
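For instance, a `[token]` section that follows the recommendation to avoid the deprecated PKI providers might read (the lifetime chosen here is arbitrary):

```ini
[token]
# Fernet and UUID are the recommended providers; pki and pkiz are deprecated.
provider = fernet
# Two-hour token lifetime instead of the default one hour.
expiration = 7200
caching = True
```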
| Configuration option = Default value | Description |
|---|---|
| **[tokenless_auth]** | |
| `issuer_attribute = SSL_CLIENT_I_DN` | (String) The issuer attribute that serves as the IdP ID for X.509 tokenless authorization, and that is combined with the protocol to look up the corresponding mapping. It is the variable in the WSGI environment that references the issuer of the client certificate. |
| `protocol = x509` | (String) The protocol name for X.509 tokenless authorization; together with the issuer_attribute option above, it is used to look up the corresponding mapping. |
| `trusted_issuer = []` | (Multi-valued) The list of trusted issuers used to filter the certificates that are allowed to participate in X.509 tokenless authorization. If the option is absent, no certificates are allowed. The attributes of a Distinguished Name (DN) must be separated by commas and contain no spaces. This configuration option may be repeated for multiple values. For example: `trusted_issuer=CN=john,OU=keystone,O=openstack` and `trusted_issuer=CN=mary,OU=eng,O=abc`. |
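Combining the three options, and reusing the example issuer DNs from the description above, a `[tokenless_auth]` section could look like:

```ini
[tokenless_auth]
protocol = x509
issuer_attribute = SSL_CLIENT_I_DN
# Repeat trusted_issuer once per allowed issuer DN; DN attributes are
# comma-separated with no spaces.
trusted_issuer = CN=john,OU=keystone,O=openstack
trusted_issuer = CN=mary,OU=eng,O=abc
```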
| Configuration option = Default value | Description |
|---|---|
| **[trust]** | |
| `allow_redelegation = False` | (Boolean) Enable redelegation feature. |
| `driver = sql` | (String) Entrypoint for the trust backend driver in the keystone.trust namespace. |
| `enabled = True` | (Boolean) Delegation and impersonation features can be optionally disabled. |
| `max_redelegation_count = 3` | (Integer) Maximum depth of trust redelegation. |
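For example, a `[trust]` section enabling bounded redelegation might read (the depth chosen here is arbitrary):

```ini
[trust]
enabled = True
# Allow trusts to be re-delegated, but only two levels deep.
allow_redelegation = True
max_redelegation_count = 2
```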
| Configuration option = Default value | Description |
|---|---|
| **[matchmaker_redis]** | |
| `check_timeout = 20000` | (Integer) Time in ms to wait before the transaction is killed. |
| `host = 127.0.0.1` | (String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url. |
| `password = ` | (String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url. |
| `port = 6379` | (Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url. |
| `sentinel_group_name = oslo-messaging-zeromq` | (String) Redis replica set name. |
| `sentinel_hosts = ` | (List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g. [host:port, host1:port ... ]. Replaced by [DEFAULT]/transport_url. |
| `socket_timeout = 10000` | (Integer) Timeout in ms on blocking socket operations. |
| `wait_timeout = 2000` | (Integer) Time in ms to wait between connection attempts. |
The Identity service supports domain-specific Identity drivers on an SQL or LDAP back end, along with domain-specific Identity configuration options, which are stored in per-domain configuration files. See the Identity management chapter of the Administrator Guide for more information.
You can find the files described in this section in the /etc/keystone
directory.
Use the keystone.conf
file to configure most Identity service
options:
[DEFAULT]
#
# From keystone
#
# Using this feature is *NOT* recommended. Instead, use the `keystone-manage
# bootstrap` command. The value of this option is treated as a "shared secret"
# that can be used to bootstrap Keystone through the API. This "token" does not
# represent a user (it has no identity), and carries no explicit authorization
# (it effectively bypasses most authorization checks). If set to `None`, the
# value is ignored and the `admin_token` middleware is effectively disabled.
# However, to completely disable `admin_token` in production (highly
# recommended, as it presents a security risk), remove
# `AdminTokenAuthMiddleware` (the `admin_token_auth` filter) from your paste
# application pipelines (for example, in `keystone-paste.ini`). (string value)
#admin_token = <None>
# The base public endpoint URL for Keystone that is advertised to clients
# (NOTE: this does NOT affect how Keystone listens for connections). Defaults
# to the base host URL of the request. For example, if keystone receives a
# request to `http://server:5000/v3/users`, then this option will be
# automatically treated as `http://server:5000`. You should only need to set
# this option if either the value of the base URL contains a path that
# keystone does not automatically infer (`/prefix/v3`), or if the endpoint
# should be found on a different host. (string value)
#public_endpoint = <None>
# The base admin endpoint URL for Keystone that is advertised to clients (NOTE:
# this does NOT affect how Keystone listens for connections). Defaults to the
# base host URL of the request. For example, if keystone receives a request to
# `http://server:35357/v3/users`, then this option will be automatically
# treated as `http://server:35357`. You should only need to set this option if
# either the value of the base URL contains a path that keystone does not
# automatically infer (`/prefix/v3`), or if the endpoint should be found on a
# different host. (string value)
#admin_endpoint = <None>
# Maximum depth of the project hierarchy, excluding the project acting as a
# domain at the top of the hierarchy. WARNING: Setting it to a large value may
# adversely impact performance. (integer value)
#max_project_tree_depth = 5
# Limit the sizes of user & project ID/names. (integer value)
#max_param_size = 64
# Similar to `[DEFAULT] max_param_size`, but provides an exception for token
# values. With PKI / PKIZ tokens, this needs to be set close to 8192 (any
# higher, and other HTTP implementations may break), depending on the size of
# your service catalog and other factors. With Fernet tokens, this can be set
# as low as 255. With UUID tokens, this should be set to 32. (integer value)
#max_token_size = 8192
# Similar to the `[DEFAULT] member_role_name` option, this represents the
# default role ID used to associate users with their default projects in the v2
# API. This will be used as the explicit role where one is not specified by the
# v2 API. You do not need to set this value unless you want keystone to use an
# existing role with a different ID, other than the arbitrarily defined
# `_member_` role (in which case, you should set `[DEFAULT] member_role_name`
# as well). (string value)
#member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab
# This is the role name used in combination with the `[DEFAULT] member_role_id`
# option; see that option for more detail. You do not need to set this option
# unless you want keystone to use an existing role (in which case, you should
# set `[DEFAULT] member_role_id` as well). (string value)
#member_role_name = _member_
# The value passed as the keyword "rounds" to passlib's encrypt method. This
# option represents a trade off between security and performance. Higher values
# lead to slower performance, but higher security. Changing this option will
# only affect newly created passwords as existing password hashes already have
# a fixed number of rounds applied, so it is safe to tune this option in a
# running cluster. For more information, see
# https://pythonhosted.org/passlib/password_hash_api.html#choosing-the-right-
# rounds-value (integer value)
# Minimum value: 1000
# Maximum value: 100000
#crypt_strength = 10000
# The maximum number of entities that will be returned in a collection. This
# global limit may be then overridden for a specific driver, by specifying a
# list_limit in the appropriate section (for example, `[assignment]`). No limit
# is set by default. In larger deployments, it is recommended that you set this
# to a reasonable number to prevent operations like listing all users and
# projects from placing an unnecessary load on the system. (integer value)
#list_limit = <None>
# DEPRECATED: Set this to false if you want to enable the ability for user,
# group and project entities to be moved between domains by updating their
# `domain_id` attribute. Allowing such movement is not recommended if the scope
# of a domain admin is being restricted by use of an appropriate policy file
# (see `etc/policy.v3cloudsample.json` as an example). This feature is
# deprecated and will be removed in a future release, in favor of strictly
# immutable domain IDs. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: The option to set domain_id_immutable to false has been deprecated in
# the M release and will be removed in the O release.
#domain_id_immutable = true
# If set to true, strict password length checking is performed for password
# manipulation. If a password exceeds the maximum length, the operation will
# fail with an HTTP 403 Forbidden error. If set to false, passwords are
# automatically truncated to the maximum length. (boolean value)
#strict_password_check = false
# DEPRECATED: The HTTP header used to determine the scheme for the original
# request, even if it was removed by an SSL terminating proxy. (string value)
# This option is deprecated for removal since N.
# Its value may be silently ignored in the future.
# Reason: This option has been deprecated in the N release and will be removed
# in the P release. Use oslo.middleware.http_proxy_to_wsgi configuration
# instead.
#secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO
# If set to true, then the server will return information in HTTP responses
# that may allow an unauthenticated or authenticated user to get more
# information than normal, such as additional details about why authentication
# failed. This may be useful for debugging but is insecure. (boolean value)
#insecure_debug = false
# Default `publisher_id` for outgoing notifications. If left undefined,
# Keystone will default to using the server's host name. (string value)
#default_publisher_id = <None>
# Define the notification format for identity service events. A `basic`
# notification only has information about the resource being operated on. A
# `cadf` notification has the same information, as well as information about
# the initiator of the event. The `cadf` option is entirely backwards
# compatible with the `basic` option, but is fully CADF-compliant, and is
# recommended for auditing use cases. (string value)
# Allowed values: basic, cadf
#notification_format = basic
# If left undefined, keystone will emit notifications for all types of events.
# You can reduce the number of notifications keystone emits by using this
# option to enumerate notification topics that should be suppressed. Values are
# expected to be in the form `identity.<resource_type>.<operation>`. This field
# can be set multiple times in order to opt-out of multiple notification
# topics. For example: notification_opt_out=identity.user.create
# notification_opt_out=identity.authenticate.success (multi valued)
#notification_opt_out =
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
#
# From oslo.messaging
#
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2
# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about existing target
# (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in zmq socket. True means not keeping
# a queue when server side disconnects. False means to keep queue and messages
# even if server is disconnected, when the server appears we send all
# accumulated messages to it. (boolean value)
#zmq_immediate = false
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit
# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = keystone
[assignment]
#
# From keystone
#
# Entry point for the assignment backend driver (where role assignments are
# stored) in the `keystone.assignment` namespace. Only a SQL driver is supplied
# by keystone itself. If an assignment driver is not specified, the identity
# driver will choose the assignment driver based on the deprecated
# `[identity]/driver` option (the behavior will be removed in the "O" release).
# Unless you are writing proprietary drivers for keystone, you do not need to
# set this option. (string value)
#driver = <None>
# A list of role names which are prohibited from being an implied role. (list
# value)
#prohibited_implied_role = admin
[auth]
#
# From keystone
#
# Allowed authentication methods. (list value)
#methods = external,password,token,oauth1
# Entry point for the password auth plugin module in the
# `keystone.auth.password` namespace. You do not need to set this unless you
# are overriding keystone's own password authentication plugin. (string value)
#password = <None>
# Entry point for the token auth plugin module in the `keystone.auth.token`
# namespace. You do not need to set this unless you are overriding keystone's
# own token authentication plugin. (string value)
#token = <None>
# Entry point for the external (`REMOTE_USER`) auth plugin module in the
# `keystone.auth.external` namespace. Supplied drivers are `DefaultDomain` and
# `Domain`. The default driver is `DefaultDomain`, which assumes that all users
# identified by the username specified to keystone in the `REMOTE_USER`
# variable exist within the context of the default domain. The `Domain` option
# expects an additional environment variable to be presented to keystone,
# `REMOTE_DOMAIN`, containing the domain name of the `REMOTE_USER` (if
# `REMOTE_DOMAIN` is not set, then the default domain will be used instead).
# You do not need to set this unless you are taking advantage of "external
# authentication", where the application server (such as Apache) is handling
# authentication instead of keystone. (string value)
#external = <None>
# Entry point for the OAuth 1.0a auth plugin module in the
# `keystone.auth.oauth1` namespace. You do not need to set this unless you are
# overriding keystone's own `oauth1` authentication plugin. (string value)
#oauth1 = <None>
[cache]
#
# From oslo.cache
#
# Prefix for building the configuration dictionary for the cache region. This
# should not need to be changed unless there is another dogpile.cache region
# with the same configuration name. (string value)
#config_prefix = cache.oslo
# Default TTL, in seconds, for any cached item in the dogpile.cache region.
# This applies to any cached method that doesn't have an explicit cache
# expiration time defined for it. (integer value)
#expiration_time = 600
# Dogpile.cache backend module. It is recommended that Memcache or Redis
# (dogpile.cache.redis) be used in production deployments. For eventlet-based
# or highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool)
# is recommended. For low thread servers, dogpile.cache.memcached is
# recommended. Test environments with a single instance of the server can use
# the dogpile.cache.memory backend. (string value)
#backend = dogpile.cache.null
# Arguments supplied to the backend module. Specify this option once per
# argument to be passed to the dogpile.cache backend. Example format:
# "<argname>:<value>". (multi valued)
#backend_argument =
# Proxy classes to import that will affect the way the dogpile.cache backend
# functions. See the dogpile.cache documentation on changing-backend-behavior.
# (list value)
#proxies =
# Global toggle for caching. (boolean value)
#enabled = true
# Extra debugging from the cache backend (cache keys, get/set/delete/etc
# calls). This is only really useful if you need to see the specific cache-
# backend get/set/delete calls with the keys/values. Typically this should be
# left set to false. (boolean value)
#debug_cache_backend = false
# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (list value)
#memcache_servers = localhost:11211
# Number of seconds memcached server is considered dead before it is tried
# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
# (integer value)
#memcache_dead_retry = 300
# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (integer value)
#memcache_socket_timeout = 3
# Max total number of open connections to every memcached server.
# (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_maxsize = 10
# Number of seconds a connection to memcached is held unused in the pool before
# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_unused_timeout = 60
# Number of seconds that an operation will wait to get a memcache client
# connection. (integer value)
#memcache_pool_connection_get_timeout = 10
[catalog]
#
# From keystone
#
# Absolute path to the file used for the templated catalog backend. This option
# is only used if the `[catalog] driver` is set to `templated`. (string value)
#template_file = default_catalog.templates
# Entry point for the catalog driver in the `keystone.catalog` namespace.
# Keystone provides a `sql` option (which supports basic CRUD operations
# through SQL), a `templated` option (which loads the catalog from a templated
# catalog file on disk), and an `endpoint_filter.sql` option (which supports
# arbitrary service catalogs per project). (string value)
#driver = sql
# Toggle for catalog caching. This has no effect unless global caching is
# enabled. In a typical deployment, there is no reason to disable this.
# (boolean value)
#caching = true
# Time to cache catalog data (in seconds). This has no effect unless global and
# catalog caching are both enabled. Catalog data (services, endpoints, etc.)
# typically does not change frequently, and so a longer duration than the
# global default may be desirable. (integer value)
#cache_time = <None>
# Maximum number of entities that will be returned in a catalog collection.
# There is typically no reason to set this, as it would be unusual for a
# deployment to have enough services or endpoints to exceed a reasonable limit.
# (integer value)
#list_limit = <None>
[cors]
#
# From oslo.middleware
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials. (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-Domain-Id,X-Domain-Name
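# For example, the options above could be combined to allow a single trusted
# dashboard origin (hypothetical host, mirroring the example in the
# allowed_origin description):
#allowed_origin = https://horizon.example.com
#allow_credentials = true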
[cors.subdomain]
#
# From oslo.middleware
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials. (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-Domain-Id,X-Domain-Name
[credential]
#
# From keystone
#
# Entry point for the credential backend driver in the `keystone.credential`
# namespace. Keystone only provides a `sql` driver, so there's no reason to
# change this unless you are providing a custom entry point. (string value)
#driver = sql
# Entry point for credential encryption and decryption operations in the
# `keystone.credential.provider` namespace. Keystone only provides a `fernet`
# driver, so there's no reason to change this unless you are providing a custom
# entry point to encrypt and decrypt credentials. (string value)
#provider = fernet
# Directory containing Fernet keys used to encrypt and decrypt credentials
# stored in the credential backend. Fernet keys used to encrypt credentials
# have no relationship to Fernet keys used to encrypt Fernet tokens. Both sets
# of keys should be managed separately and require different rotation policies.
# Do not share this repository with the repository used to manage keys for
# Fernet tokens. (string value)
#key_repository = /etc/keystone/credential-keys/
[database]
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
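# For example, a MySQL connection string using the pymysql driver (hypothetical
# credentials and host):
#connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone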
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
[domain_config]
#
# From keystone
#
# Entry point for the domain-specific configuration driver in the
# `keystone.resource.domain_config` namespace. Only a `sql` option is provided
# by keystone, so there is no reason to set this unless you are providing a
# custom entry point. (string value)
#driver = sql
# Toggle for caching of the domain-specific configuration backend. This has no
# effect unless global caching is enabled. There is normally no reason to
# disable this. (boolean value)
#caching = true
# Time-to-live (TTL, in seconds) to cache domain-specific configuration data.
# This has no effect unless `[domain_config] caching` is enabled. (integer
# value)
#cache_time = 300
[endpoint_filter]
#
# From keystone
#
# Entry point for the endpoint filter driver in the `keystone.endpoint_filter`
# namespace. Only a `sql` option is provided by keystone, so there is no reason
# to set this unless you are providing a custom entry point. (string value)
#driver = sql
# This controls keystone's behavior if the configured endpoint filters do not
# result in any endpoints for a user + project pair (and therefore a
# potentially empty service catalog). If set to true, keystone will return the
# entire service catalog. If set to false, keystone will return an empty
# service catalog. (boolean value)
#return_all_endpoints_if_no_filter = true
[endpoint_policy]
#
# From keystone
#
# DEPRECATED: Enable endpoint-policy functionality, which allows policies to be
# associated with either specific endpoints, or endpoints of a given service
# type. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: The option to enable the OS-ENDPOINT-POLICY API extension has been
# deprecated in the M release and will be removed in the O release. The OS-
# ENDPOINT-POLICY API extension will be enabled by default.
#enabled = true
# Entry point for the endpoint policy driver in the `keystone.endpoint_policy`
# namespace. Only a `sql` driver is provided by keystone, so there is no reason
# to set this unless you are providing a custom entry point. (string value)
#driver = sql
[eventlet_server]
#
# From keystone
#
# DEPRECATED: The IP address of the network interface for the public service to
# listen on. (string value)
# Deprecated group/name - [DEFAULT]/bind_host
# Deprecated group/name - [DEFAULT]/public_bind_host
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#public_bind_host = 0.0.0.0
# DEPRECATED: The port number for the public service to listen on. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/public_port
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#public_port = 5000
# DEPRECATED: The IP address of the network interface for the admin service to
# listen on. (string value)
# Deprecated group/name - [DEFAULT]/bind_host
# Deprecated group/name - [DEFAULT]/admin_bind_host
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#admin_bind_host = 0.0.0.0
# DEPRECATED: The port number for the admin service to listen on. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/admin_port
# This option is deprecated for removal since K.
# Its value may be silently ignored in the future.
# Reason: Support for running keystone under eventlet has been removed in the
# Newton release. These options remain for backwards compatibility because they
# are used for URL substitutions.
#admin_port = 35357
[federation]
#
# From keystone
#
# Entry point for the federation backend driver in the `keystone.federation`
# namespace. Keystone only provides a `sql` driver, so there is no reason to
# set this option unless you are providing a custom entry point. (string value)
#driver = sql
# Prefix to use when filtering environment variable names for federated
# assertions. Matched variables are passed into the federated mapping engine.
# (string value)
#assertion_prefix =
# Value to be used to obtain the entity ID of the Identity Provider from the
# environment. For `mod_shib`, this would be `Shib-Identity-Provider`. For
# `mod_auth_openidc`, this could be `HTTP_OIDC_ISS`. For `mod_auth_mellon`,
# this could be `MELLON_IDP`. (string value)
#remote_id_attribute = <None>
# An arbitrary domain name that is reserved to allow federated ephemeral users
# to have a domain concept. Note that an admin will not be able to create a
# domain with this name or update an existing domain to this name. You are not
# advised to change this value unless you really have to. (string value)
#federated_domain_name = Federated
# A list of trusted dashboard hosts. Before accepting a Single Sign-On request
# to return a token, the origin host must be a member of this list. This
# configuration option may be repeated for multiple values. You must set this
# in order to use web-based SSO flows. For example:
# trusted_dashboard=https://acme.example.com/auth/websso
# trusted_dashboard=https://beta.example.com/auth/websso (multi valued)
#trusted_dashboard =
# Absolute path to an HTML file used as a Single Sign-On callback handler. This
# page is expected to redirect the user from keystone back to a trusted
# dashboard host, by form encoding a token in a POST request. Keystone's
# default value should be sufficient for most deployments. (string value)
#sso_callback_template = /etc/keystone/sso_callback_template.html
# Toggle for federation caching. This has no effect unless global caching is
# enabled. There is typically no reason to disable this. (boolean value)
#caching = true
[fernet_tokens]
#
# From keystone
#
# Directory containing Fernet token keys. This directory must exist before
# using `keystone-manage fernet_setup` for the first time, must be writable by
# the user running `keystone-manage fernet_setup` or `keystone-manage
# fernet_rotate`, and of course must be readable by keystone's server process.
# The repository may contain keys in one of three states: a single staged key
# (always index 0) used for token validation, a single primary key (always the
# highest index) used for token creation and validation, and any number of
# secondary keys (all other index values) used for token validation. With
# multiple keystone nodes, each node must share the same key repository
# contents, with the exception of the staged key (index 0). It is safe to run
# `keystone-manage fernet_rotate` once on any one node to promote a staged key
# (index 0) to be the new primary (incremented from the previous highest
# index), and produce a new staged key (a new key with index 0); the resulting
# repository can then be atomically replicated to other nodes without any risk
# of race conditions (for example, it is safe to run `keystone-manage
# fernet_rotate` on host A, wait any amount of time, create a tarball of the
# directory on host A, unpack it on host B to a temporary location, and
# atomically move (`mv`) the directory into place on host B). Running
# `keystone-manage fernet_rotate` *twice* on a key repository without syncing
# other nodes will result in tokens that can not be validated by all nodes.
# (string value)
#key_repository = /etc/keystone/fernet-keys/
# This controls how many keys are held in rotation by `keystone-manage
# fernet_rotate` before they are discarded. The default value of 3 means that
# keystone will maintain one staged key (always index 0), one primary key (the
# highest numerical index), and one secondary key (every other index).
# Increasing this value means that additional secondary keys will be kept in
# the rotation. (integer value)
# Minimum value: 1
#max_active_keys = 3
[identity]
#
# From keystone
#
# This references the domain to use for all Identity API v2 requests (which are
# not aware of domains). A domain with this ID can optionally be created for
# you by `keystone-manage bootstrap`. The domain referenced by this ID cannot
# be deleted on the v3 API, to prevent accidentally breaking the v2 API. There
# is nothing special about this domain, other than the fact that it must exist
# in order to maintain support for your v2 clients. There is typically no
# reason to change this value. (string value)
#default_domain_id = default
# A subset (or all) of domains can have their own identity driver, each with
# their own partial configuration options, stored in either the resource
# backend or in a file in a domain configuration directory (depending on the
# setting of `[identity] domain_configurations_from_database`). Only values
# specific to the domain need to be specified in this manner. This feature is
# disabled by default, but may be enabled by default in a future release; set
# to true to enable. (boolean value)
#domain_specific_drivers_enabled = false
# By default, domain-specific configuration data is read from files in the
# directory identified by `[identity] domain_config_dir`. Enabling this
# configuration option allows you to instead manage domain-specific
# configurations through the API, which are then persisted in the backend
# (typically, a SQL database), rather than using configuration files on disk.
# (boolean value)
#domain_configurations_from_database = false
# Absolute path where keystone should locate domain-specific `[identity]`
# configuration files. This option has no effect unless `[identity]
# domain_specific_drivers_enabled` is set to true. There is typically no reason
# to change this value. (string value)
#domain_config_dir = /etc/keystone/domains
# Entry point for the identity backend driver in the `keystone.identity`
# namespace. Keystone provides a `sql` and `ldap` driver. This option is also
# used as the default driver selection (along with the other configuration
# variables in this section) in the event that `[identity]
# domain_specific_drivers_enabled` is enabled, but no applicable domain-
# specific configuration is defined for the domain in question. Unless your
# deployment primarily relies on `ldap` AND is not using domain-specific
# configuration, you should typically leave this set to `sql`. (string value)
#driver = sql
# Toggle for identity caching. This has no effect unless global caching is
# enabled. There is typically no reason to disable this. (boolean value)
#caching = true
# Time to cache identity data (in seconds). This has no effect unless global
# and identity caching are enabled. (integer value)
#cache_time = 600
# Maximum allowed length for user passwords. Decrease this value to improve
# performance. Changing this value does not affect existing passwords. (integer
# value)
# Maximum value: 4096
#max_password_length = 4096
# Maximum number of entities that will be returned in an identity collection.
# (integer value)
#list_limit = <None>
[identity_mapping]
#
# From keystone
#
# Entry point for the identity mapping backend driver in the
# `keystone.identity.id_mapping` namespace. Keystone only provides a `sql`
# driver, so there is no reason to change this unless you are providing a
# custom entry point. (string value)
#driver = sql
# Entry point for the public ID generator for user and group entities in the
# `keystone.identity.id_generator` namespace. The Keystone identity mapper only
# supports generators that produce 64 bytes or less. Keystone only provides a
# `sha256` entry point, so there is no reason to change this value unless
# you're providing a custom entry point. (string value)
#generator = sha256
# The format of user and group IDs changed in Juno for backends that do not
# generate UUIDs (for example, LDAP), with keystone providing a hash mapping to
# the underlying attribute in LDAP. By default this mapping is disabled, which
# ensures that existing IDs will not change. Even when the mapping is enabled
# by using domain-specific drivers (`[identity]
# domain_specific_drivers_enabled`), any users and groups from the default
# domain being handled by LDAP will still not be mapped to ensure their IDs
# remain backward compatible. Setting this value to false will enable the new
# mapping for all backends, including the default LDAP driver. It is only
# guaranteed to be safe to enable this option if you do not already have
# assignments for users and groups from the default LDAP domain, and you
# consider it to be acceptable for Keystone to provide the different IDs to
# clients than it did previously (existing IDs in the API will suddenly
# change). Typically this means that the only time you can set this value to
# false is when configuring a fresh installation, in which case false is the
# recommended value. (boolean value)
#backward_compatible_ids = true
[kvs]
#
# From keystone
#
# Extra `dogpile.cache` backend modules to register with the `dogpile.cache`
# library. It is not necessary to set this value unless you are providing a
# custom KVS backend beyond what `dogpile.cache` already supports. (list value)
#backends =
# Prefix for building the configuration dictionary for the KVS region. This
# should not need to be changed unless there is another `dogpile.cache` region
# with the same configuration name. (string value)
#config_prefix = keystone.kvs
# Set to false to disable using a key-mangling function, which ensures fixed-
# length keys are used in the KVS store. This is configurable for debugging
# purposes, and it is therefore highly recommended to always leave this set to
# true. (boolean value)
#enable_key_mangler = true
# Number of seconds after acquiring a distributed lock that the backend should
# consider the lock to be expired. This option should be tuned relative to the
# longest amount of time that it takes to perform a successful operation. If
# this value is set too low, then a cluster will end up performing work
# redundantly. If this value is set too high, then a cluster will not be able
# to efficiently recover and retry after a failed operation. A non-zero value
# is recommended if the backend supports lock timeouts, as zero prevents locks
# from expiring altogether. (integer value)
# Minimum value: 0
#default_lock_timeout = 5
[ldap]
#
# From keystone
#
# URL(s) for connecting to the LDAP server. Multiple LDAP URLs may be specified
# as a comma-separated string. The first URL to successfully bind is used for
# the connection. (string value)
#url = ldap://localhost
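# For example, two LDAP servers may be listed so that the second is tried if
# the first fails to bind (hypothetical hosts):
#url = ldap://ldap1.example.org,ldap://ldap2.example.org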
# The user name of the administrator bind DN to use when querying the LDAP
# server, if your LDAP server requires it. (string value)
#user = <None>
# The password of the administrator bind DN to use when querying the LDAP
# server, if your LDAP server requires it. (string value)
#password = <None>
# The default LDAP server suffix to use, if a DN is not defined via either
# `[ldap] user_tree_dn` or `[ldap] group_tree_dn`. (string value)
#suffix = cn=example,cn=com
# DEPRECATED: If true, keystone will add a dummy member based on the `[ldap]
# dumb_member` option when creating new groups. This is required if the object
# class for groups requires the `member` attribute. This option is only used
# for write operations. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#use_dumb_member = false
# DEPRECATED: DN of the "dummy member" to use when `[ldap] use_dumb_member` is
# enabled. This option is only used for write operations. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#dumb_member = cn=dumb,dc=nonexistent
# DEPRECATED: Delete subtrees using the subtree delete control. Only enable
# this option if your LDAP server supports subtree deletion. This option is
# only used for write operations. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#allow_subtree_delete = false
# The search scope which defines how deep to search within the search base. A
# value of `one` (representing `oneLevel` or `singleLevel`) indicates a search
# of objects immediately below the base object, but does not include the
# base object itself. A value of `sub` (representing `subtree` or
# `wholeSubtree`) indicates a search of both the base object itself and the
# entire subtree below it. (string value)
# Allowed values: one, sub
#query_scope = one
# Defines the maximum number of results per page that keystone should request
# from the LDAP server when listing objects. A value of zero (`0`) disables
# paging. (integer value)
# Minimum value: 0
#page_size = 0
# The LDAP dereferencing option to use for queries involving aliases. A value
# of `default` falls back to using default dereferencing behavior configured by
# your `ldap.conf`. A value of `never` prevents aliases from being dereferenced
# at all. A value of `searching` dereferences aliases only after name
# resolution. A value of `finding` dereferences aliases only during name
# resolution. A value of `always` dereferences aliases in all cases. (string
# value)
# Allowed values: never, searching, always, finding, default
#alias_dereferencing = default
# Sets the LDAP debugging level for LDAP calls. A value of 0 means that
# debugging is not enabled. This value is a bitmask, consult your LDAP
# documentation for possible values. (integer value)
# Minimum value: -1
#debug_level = <None>
# Sets keystone's referral chasing behavior across directory partitions. If
# left unset, the system's default behavior will be used. (boolean value)
#chase_referrals = <None>
# The search base to use for users. Defaults to the `[ldap] suffix` value.
# (string value)
#user_tree_dn = <None>
# The LDAP search filter to use for users. (string value)
#user_filter = <None>
# The LDAP object class to use for users. (string value)
#user_objectclass = inetOrgPerson
# The LDAP attribute mapped to user IDs in keystone. This must NOT be a
# multivalued attribute. User IDs are expected to be globally unique across
# keystone domains and URL-safe. (string value)
#user_id_attribute = cn
# The LDAP attribute mapped to user names in keystone. User names are expected
# to be unique only within a keystone domain and are not expected to be URL-
# safe. (string value)
#user_name_attribute = sn
# The LDAP attribute mapped to user descriptions in keystone. (string value)
#user_description_attribute = description
# The LDAP attribute mapped to user emails in keystone. (string value)
#user_mail_attribute = mail
# The LDAP attribute mapped to user passwords in keystone. (string value)
#user_pass_attribute = userPassword
# The LDAP attribute mapped to the user enabled attribute in keystone. If
# setting this option to `userAccountControl`, then you may be interested in
# setting `[ldap] user_enabled_mask` and `[ldap] user_enabled_default` as well.
# (string value)
#user_enabled_attribute = enabled
# Logically negate the boolean value of the enabled attribute obtained from the
# LDAP server. Some LDAP servers use a boolean lock attribute where "true"
# means an account is disabled. Setting `[ldap] user_enabled_invert = true`
# will allow these lock attributes to be used. This option will have no effect
# if either the `[ldap] user_enabled_mask` or `[ldap] user_enabled_emulation`
# options are in use. (boolean value)
#user_enabled_invert = false
# Bitmask integer to select which bit indicates the enabled value if the LDAP
# server represents "enabled" as a bit on an integer rather than as a discrete
# boolean. A value of `0` indicates that the mask is not used. If this is not
# set to `0`, the typical value is `2`. This is typically used when `[ldap]
# user_enabled_attribute = userAccountControl`. Setting this option causes
# keystone to ignore the value of `[ldap] user_enabled_invert`. (integer value)
# Minimum value: 0
#user_enabled_mask = 0
# The default value to enable users. This should match an appropriate integer
# value if the LDAP server uses non-boolean (bitmask) values to indicate if a
# user is enabled or disabled. If this is not set to `True`, then the typical
# value is `512`. This is typically used when `[ldap] user_enabled_attribute =
# userAccountControl`. (string value)
#user_enabled_default = True
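# Taken together, a sketch of an Active Directory-style configuration using
# the typical values mentioned in the three options above would be:
#user_enabled_attribute = userAccountControl
#user_enabled_mask = 2
#user_enabled_default = 512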
# DEPRECATED: List of user attributes to ignore on create and update. This is
# only used for write operations. (list value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_attribute_ignore = default_project_id
# The LDAP attribute mapped to a user's default_project_id in keystone. This is
# most commonly used when keystone has write access to LDAP. (string value)
#user_default_project_id_attribute = <None>
# DEPRECATED: If enabled, keystone is allowed to create users in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_allow_create = true
# DEPRECATED: If enabled, keystone is allowed to update users in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_allow_update = true
# DEPRECATED: If enabled, keystone is allowed to delete users in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#user_allow_delete = true
# If enabled, keystone uses an alternative method to determine if a user is
# enabled or not by checking if they are a member of the group defined by the
# `[ldap] user_enabled_emulation_dn` option. Enabling this option causes
# keystone to ignore the value of `[ldap] user_enabled_invert`. (boolean value)
#user_enabled_emulation = false
# DN of the group entry to hold enabled users when using enabled emulation.
# Setting this option has no effect unless `[ldap] user_enabled_emulation` is
# also enabled. (string value)
#user_enabled_emulation_dn = <None>
# Use the `[ldap] group_member_attribute` and `[ldap] group_objectclass`
# settings to determine membership in the emulated enabled group. Enabling this
# option has no effect unless `[ldap] user_enabled_emulation` is also enabled.
# (boolean value)
#user_enabled_emulation_use_group_config = false
# A list of LDAP attribute to keystone user attribute pairs used for mapping
# additional attributes to users in keystone. The expected format is
# `<ldap_attr>:<user_attr>`, where `ldap_attr` is the attribute in the LDAP
# object and `user_attr` is the attribute which should appear in the identity
# API. (list value)
#user_additional_attribute_mapping =
# The search base to use for groups. Defaults to the `[ldap] suffix` value.
# (string value)
#group_tree_dn = <None>
# The LDAP search filter to use for groups. (string value)
#group_filter = <None>
# The LDAP object class to use for groups. If setting this option to
# `posixGroup`, you may also be interested in enabling the `[ldap]
# group_members_are_ids` option. (string value)
#group_objectclass = groupOfNames
# The LDAP attribute mapped to group IDs in keystone. This must NOT be a
# multivalued attribute. Group IDs are expected to be globally unique across
# keystone domains and URL-safe. (string value)
#group_id_attribute = cn
# The LDAP attribute mapped to group names in keystone. Group names are
# expected to be unique only within a keystone domain and are not expected to
# be URL-safe. (string value)
#group_name_attribute = ou
# The LDAP attribute used to indicate that a user is a member of the group.
# (string value)
#group_member_attribute = member
# Enable this option if the members of the group object class are keystone user
# IDs rather than LDAP DNs. This is the case when using `posixGroup` as the
# group object class in Open Directory. (boolean value)
#group_members_are_ids = false
# The LDAP attribute mapped to group descriptions in keystone. (string value)
#group_desc_attribute = description
# DEPRECATED: List of group attributes to ignore on create and update. This is
# only used for write operations. (list value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_attribute_ignore =
# DEPRECATED: If enabled, keystone is allowed to create groups in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_allow_create = true
# DEPRECATED: If enabled, keystone is allowed to update groups in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_allow_update = true
# DEPRECATED: If enabled, keystone is allowed to delete groups in the LDAP
# server. (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: Write support for the LDAP identity backend has been deprecated in
# the Mitaka release and will be removed in the Ocata release.
#group_allow_delete = true
# A list of LDAP attribute to keystone group attribute pairs used for mapping
# additional attributes to groups in keystone. The expected format is
# `<ldap_attr>:<group_attr>`, where `ldap_attr` is the attribute in the LDAP
# object and `group_attr` is the attribute which should appear in the identity
# API. (list value)
#group_additional_attribute_mapping =
# An absolute path to a CA certificate file to use when communicating with LDAP
# servers. This option will take precedence over `[ldap] tls_cacertdir`, so
# there is no reason to set both. (string value)
#tls_cacertfile = <None>
# An absolute path to a CA certificate directory to use when communicating with
# LDAP servers. There is no reason to set this option if you've also set
# `[ldap] tls_cacertfile`. (string value)
#tls_cacertdir = <None>
# Enable TLS when communicating with LDAP servers. You should also set the
# `[ldap] tls_cacertfile` and `[ldap] tls_cacertdir` options when using this
# option. Do not set this option if you are using LDAP over SSL (LDAPS) instead
# of TLS. (boolean value)
#use_tls = false
# Specifies which checks to perform against the certificate presented by the
# LDAP server when establishing a TLS session. If set to `demand`, a
# certificate will always be requested and required from the LDAP server. If
# set to `allow`, a certificate will always be requested but not required. If
# set to `never`, a certificate will never be requested. (string value)
# Allowed values: demand, never, allow
#tls_req_cert = demand
# Enable LDAP connection pooling for queries to the LDAP server. There is
# typically no reason to disable this. (boolean value)
#use_pool = true
# The size of the LDAP connection pool. This option has no effect unless
# `[ldap] use_pool` is also enabled. (integer value)
# Minimum value: 1
#pool_size = 10
# The maximum number of times to attempt reconnecting to the LDAP server before
# aborting. A value of zero prevents retries. This option has no effect unless
# `[ldap] use_pool` is also enabled. (integer value)
# Minimum value: 0
#pool_retry_max = 3
# The number of seconds to wait before attempting to reconnect to the LDAP
# server. This option has no effect unless `[ldap] use_pool` is also enabled.
# (floating point value)
#pool_retry_delay = 0.1
# The connection timeout to use with the LDAP server. A value of `-1` means
# that connections will never timeout. This option has no effect unless `[ldap]
# use_pool` is also enabled. (integer value)
# Minimum value: -1
#pool_connection_timeout = -1
# The maximum connection lifetime to the LDAP server in seconds. When this
# lifetime is exceeded, the connection will be unbound and removed from the
# connection pool. This option has no effect unless `[ldap] use_pool` is also
# enabled. (integer value)
# Minimum value: 1
#pool_connection_lifetime = 600
# Enable LDAP connection pooling for end user authentication. There is
# typically no reason to disable this. (boolean value)
#use_auth_pool = true
# The size of the connection pool to use for end user authentication. This
# option has no effect unless `[ldap] use_auth_pool` is also enabled. (integer
# value)
# Minimum value: 1
#auth_pool_size = 100
# The maximum end user authentication connection lifetime to the LDAP server in
# seconds. When this lifetime is exceeded, the connection will be unbound and
# removed from the connection pool. This option has no effect unless `[ldap]
# use_auth_pool` is also enabled. (integer value)
# Minimum value: 1
#auth_pool_connection_lifetime = 60
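As an illustration, a deployment that connects keystone to its LDAP servers over TLS with connection pooling might combine the options above as follows (the certificate path is hypothetical):

```ini
[ldap]
# Verify the LDAP server's certificate against a local CA bundle.
use_tls = true
tls_cacertfile = /etc/keystone/ssl/ldap-ca.pem
tls_req_cert = demand

# Pool both query and end-user-authentication connections.
use_pool = true
pool_size = 10
use_auth_pool = true
auth_pool_size = 100
```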
[matchmaker_redis]
#
# From oslo.messaging
#
# DEPRECATED: Host of the Redis server. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1
# DEPRECATED: Port to use when connecting to the Redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379
# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =
# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g.
# [host:port, host1:port, ...]. (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000
[memcache]
#
# From keystone
#
# Comma-separated list of memcached servers in the format of
# `host:port,host:port` that keystone should use for the `memcache` token
# persistence provider and other memcache-backed KVS drivers. This
# configuration value is NOT used for intermediary caching between keystone and
# other backends, such as SQL and LDAP (for that, see the `[cache]` section).
# Multiple keystone servers in the same deployment should use the same set of
# memcached servers to ensure that data (such as UUID tokens) created by one
# node is available to the others. (list value)
#servers = localhost:11211
# Number of seconds memcached server is considered dead before it is tried
# again. This is used by the key value store system (including the `memcache`
# and `memcache_pool` options for the `[token] driver` persistence backend).
# (integer value)
#dead_retry = 300
# Timeout in seconds for every call to a server. This is used by the key value
# store system (including the `memcache` and `memcache_pool` options for the
# `[token] driver` persistence backend). (integer value)
#socket_timeout = 3
# Max total number of open connections to every memcached server. This is used
# by the key value store system (including the `memcache` and `memcache_pool`
# options for the `[token] driver` persistence backend). (integer value)
#pool_maxsize = 10
# Number of seconds a connection to memcached is held unused in the pool before
# it is closed. This is used by the key value store system (including the
# `memcache` and `memcache_pool` options for the `[token] driver` persistence
# backend). (integer value)
#pool_unused_timeout = 60
# Number of seconds that an operation will wait to get a memcache client
# connection. This is used by the key value store system (including the
# `memcache` and `memcache_pool` options for the `[token] driver` persistence
# backend). (integer value)
#pool_connection_get_timeout = 10
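For example, two keystone nodes sharing the same pair of memcached servers for the `memcache` token persistence provider could both use (hostnames are illustrative):

```ini
[memcache]
# The same server list must be configured on every keystone node.
servers = controller1:11211,controller2:11211
dead_retry = 300
pool_maxsize = 10
```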
[oauth1]
#
# From keystone
#
# Entry point for the OAuth backend driver in the `keystone.oauth1` namespace.
# Typically, there is no reason to set this option unless you are providing a
# custom entry point. (string value)
#driver = sql
# Number of seconds for the OAuth Request Token to remain valid after being
# created. This is the amount of time the user has to authorize the token.
# Setting this option to zero means that request tokens will last forever.
# (integer value)
# Minimum value: 0
#request_token_duration = 28800
# Number of seconds for the OAuth Access Token to remain valid after being
# created. This is the amount of time the consumer has to interact with the
# service provider (which is typically keystone). Setting this option to zero
# means that access tokens will last forever. (integer value)
# Minimum value: 0
#access_token_duration = 86400
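A deployment that wants OAuth request tokens to expire after one hour while keeping the default access token lifetime might set, for instance:

```ini
[oauth1]
# One hour for the user to authorize the request token.
request_token_duration = 3600
# Keep the default one-day access token lifetime.
access_token_duration = 86400
```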
[os_inherit]
#
# From keystone
#
# DEPRECATED: This allows domain-based role assignments to be inherited to
# projects owned by that domain, or from parent projects to child projects.
# (boolean value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: The option to disable the OS-INHERIT functionality has been
# deprecated in the Mitaka release and will be removed in the Ocata release.
# Starting in the Ocata release, OS-INHERIT functionality will always be
# enabled.
#enabled = true
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Name for the AMQP container. Must be globally unique. Defaults to a generated
# UUID (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1
# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2
# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30
# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10
# The deadline for an RPC reply message delivery. Only used when the caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30
# The deadline for an RPC cast or call message delivery. Only used when the
# caller does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30
# The deadline for a sent notification message delivery. Only used when the
# caller does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30
# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy' - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic' - use legacy addresses if the message bus does not support
# routing, otherwise use routable addressing (string value)
#addressing_mode = dynamic
# Address prefix used when sending to a specific server. (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers. (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# Address prefix used when sending to any server in the group. (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc
# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify
# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast
# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast
# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast
# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>
# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200
# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100
# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
[oslo_messaging_notifications]
#
# From oslo.messaging
#
# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop. (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
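For example, to emit notifications with the `messagingv2` driver on an additional topic alongside the default one (the second topic name is illustrative):

```ini
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications,keystone_notifications
```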
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>
# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost
# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672
# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest
# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
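The deprecated rabbit_host, rabbit_port, rabbit_userid, rabbit_password, and rabbit_virtual_host options above are all replaced by a single `[DEFAULT] transport_url`. An equivalent URL for the default values shown would be (the credentials are the RabbitMQ defaults and should be changed in production):

```ini
[DEFAULT]
# rabbit://<user>:<password>@<host>:<port>/<virtual_host>
transport_url = rabbit://guest:guest@localhost:5672/
```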
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30
# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0
# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800
# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0
# Number of seconds after which the RabbitMQ broker is considered down if the
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60
# How many times during the heartbeat_timeout_threshold to check the
# heartbeat. (integer value)
#heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
# Maximum number of channels to allow (integer value)
#channel_max = <None>
# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>
# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3
# Enable SSL (boolean value)
#ssl = <None>
# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>
# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25
# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
# value)
#tcp_user_timeout = 0.25
# Set delay for reconnection to some host which has connection error (floating
# point value)
#host_connection_reconnect_delay = 0.25
# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single
# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30
# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0
# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30
# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600
# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60
# Persist notification messages. (boolean value)
#notification_persistence = false
# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification
# Max number of unacknowledged messages which RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25
# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60
# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc
# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply
# Max number of unacknowledged messages which RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100
# Max number of unacknowledged messages which RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending
# reply. -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25
# Reconnecting retry count in case of a connectivity problem while sending an
# RPC message; -1 means infinite retry. If the actual number of retry attempts
# is not 0, the RPC request could be processed more than once. (integer value)
#default_rpc_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25
[oslo_messaging_zmq]
#
# From oslo.messaging
#
# ZeroMQ bind address. Should be a wildcard (*), an Ethernet interface, or an
# IP address. The "host" option should point or resolve to this address.
# (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about existing target
# ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
[oslo_middleware]
#
# From oslo.middleware
#
# The maximum body size for each request, in bytes. (integer value)
# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
# Deprecated group/name - [DEFAULT]/max_request_body_size
#max_request_body_size = 114688
# DEPRECATED: The HTTP Header that will be used to determine what the original
# request protocol scheme was, even if it was hidden by an SSL termination
# proxy. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#secure_proxy_ssl_header = X-Forwarded-Proto
# Whether the application is behind a proxy or not. This determines if the
# middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false
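For a keystone deployment running behind a TLS-terminating load balancer, the middleware can be told to trust the proxy headers, for example:

```ini
[oslo_middleware]
# Parse X-Forwarded-* style headers set by the proxy.
enable_proxy_headers_parsing = true
```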
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
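For instance, to load the base policy file plus any overrides dropped into a directory (both paths are relative to the configuration search path):

```ini
[oslo_policy]
policy_file = policy.json
# Files in this directory are applied on top of policy_file, if it exists.
policy_dirs = policy.d
```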
[paste_deploy]
#
# From keystone
#
# Name of (or absolute path to) the Paste Deploy configuration file that
# composes middleware and the keystone application itself into actual WSGI
# entry points. See http://pythonpaste.org/deploy/ for additional documentation
# on the file's format. (string value)
#config_file = keystone-paste.ini
[policy]
#
# From keystone
#
# Entry point for the policy backend driver in the `keystone.policy` namespace.
# Supplied drivers are `rules` (which does not support any CRUD operations for
# the v3 policy API) and `sql`. Typically, there is no reason to set this
# option unless you are providing a custom entry point. (string value)
#driver = sql
# Maximum number of entities that will be returned in a policy collection.
# (integer value)
#list_limit = <None>
[profiler]
#
# From osprofiler
#
#
# Enables profiling for all services on this node. Default value is False
# (fully disables the profiling feature).
#
# Possible values:
#
# * True: Enables the feature.
# * False: Disables the feature. Profiling cannot be started via this
# project's operations. If profiling is triggered by another project, this
# project's part of the trace will be empty.
# (boolean value)
# Deprecated group/name - [profiler]/profiler_enabled
#enabled = false
#
# Enables SQL requests profiling in services. Default value is False (SQL
# requests won't be traced).
#
# Possible values:
#
# * True: Enables SQL requests profiling. Each SQL query will be part of the
#   trace and can then be analyzed by how much time was spent on it.
# * False: Disables SQL requests profiling. The spent time is only shown at a
#   higher level of operations. Single SQL queries cannot be analyzed this
#   way.
# (boolean value)
#trace_sqlalchemy = false
#
# Secret key(s) to use for encrypting context data for performance profiling.
# This string value should have the following format:
# <key1>[,<key2>,...<keyn>],
# where each key is some random string. A user who triggers the profiling via
# the REST API has to set one of these keys in the headers of the REST API call
# to include profiling results of this node for this particular project.
#
# Both the "enabled" flag and the "hmac_keys" option must be set to enable
# profiling. Also, to generate correct profiling information across all
# services, at least one key must be consistent between OpenStack projects.
# This ensures it can be used from the client side to generate the trace,
# containing information from all possible resources. (string value)
#hmac_keys = SECRET_KEY
#
# Connection string for a notifier backend. Default value is messaging:// which
# sets the notifier to oslo_messaging.
#
# Examples of possible values:
#
# * messaging://: use oslo_messaging driver for sending notifications.
# (string value)
#connection_string = messaging://
[resource]
#
# From keystone
#
# Entry point for the resource driver in the `keystone.resource` namespace.
# Only a `sql` driver is supplied by keystone. If a resource driver is not
# specified, the assignment driver will choose the resource driver to maintain
# backwards compatibility with older configuration files. (string value)
#driver = <None>
# Toggle for resource caching. This has no effect unless global caching is
# enabled. (boolean value)
# Deprecated group/name - [assignment]/caching
#caching = true
# Time to cache resource data in seconds. This has no effect unless global
# caching is enabled. (integer value)
# Deprecated group/name - [assignment]/cache_time
#cache_time = <None>
# Maximum number of entities that will be returned in a resource collection.
# (integer value)
# Deprecated group/name - [assignment]/list_limit
#list_limit = <None>
# Name of the domain that owns the `admin_project_name`. If left unset, then
# there is no admin project. `[resource] admin_project_name` must also be set
# to use this option. (string value)
#admin_project_domain_name = <None>
# This is a special project which represents cloud-level administrator
# privileges across services. Tokens scoped to this project will contain a true
# `is_admin_project` attribute to indicate to policy systems that the role
# assignments on that specific project should apply equally across every
# project. If left unset, then there is no admin project, and thus no explicit
# means of cross-project role assignments. `[resource]
# admin_project_domain_name` must also be set to use this option. (string
# value)
#admin_project_name = <None>
# This controls whether the names of projects are restricted from containing
# URL-reserved characters. If set to `new`, attempts to create or update a
# project with a URL-unsafe name will fail. If set to `strict`, attempts to
# scope a token with a URL-unsafe project name will fail, thereby forcing all
# project names to be updated to be URL-safe. (string value)
# Allowed values: off, new, strict
#project_name_url_safe = off
# This controls whether the names of domains are restricted from containing
# URL-reserved characters. If set to `new`, attempts to create or update a
# domain with a URL-unsafe name will fail. If set to `strict`, attempts to
# scope a token with a URL-unsafe domain name will fail, thereby forcing all
# domain names to be updated to be URL-safe. (string value)
# Allowed values: off, new, strict
#domain_name_url_safe = off
[revoke]
#
# From keystone
#
# Entry point for the token revocation backend driver in the `keystone.revoke`
# namespace. Keystone only provides a `sql` driver, so there is no reason to
# set this option unless you are providing a custom entry point. (string value)
#driver = sql
# The number of seconds after a token has expired before a corresponding
# revocation event may be purged from the backend. (integer value)
# Minimum value: 0
#expiration_buffer = 1800
# Toggle for revocation event caching. This has no effect unless global caching
# is enabled. (boolean value)
#caching = true
# Time to cache the revocation list and the revocation events (in seconds).
# This has no effect unless global caching and `[revoke] caching` are both
# enabled. (integer value)
# Deprecated group/name - [token]/revocation_cache_time
#cache_time = 3600
[role]
#
# From keystone
#
# Entry point for the role backend driver in the `keystone.role` namespace.
# Keystone only provides a `sql` driver, so there's no reason to change this
# unless you are providing a custom entry point. (string value)
#driver = <None>
# Toggle for role caching. This has no effect unless global caching is enabled.
# In a typical deployment, there is no reason to disable this. (boolean value)
#caching = true
# Time to cache role data, in seconds. This has no effect unless both global
# caching and `[role] caching` are enabled. (integer value)
#cache_time = <None>
# Maximum number of entities that will be returned in a role collection. This
# may be useful to tune if you have a large number of discrete roles in your
# deployment. (integer value)
#list_limit = <None>
[saml]
#
# From keystone
#
# Determines the lifetime for any SAML assertions generated by keystone, using
# `NotOnOrAfter` attributes. (integer value)
#assertion_expiration_time = 3600
# Name of, or absolute path to, the binary to be used for XML signing. Although
# only the XML Security Library (`xmlsec1`) is supported, it may have a non-
# standard name or path on your system. If keystone cannot find the binary
# itself, you may need to install the appropriate package, use this option to
# specify an absolute path, or adjust keystone's PATH environment variable.
# (string value)
#xmlsec1_binary = xmlsec1
# Absolute path to the public certificate file to use for SAML signing. The
# value cannot contain a comma (`,`). (string value)
#certfile = /etc/keystone/ssl/certs/signing_cert.pem
# Absolute path to the private key file to use for SAML signing. The value
# cannot contain a comma (`,`). (string value)
#keyfile = /etc/keystone/ssl/private/signing_key.pem
# This is the unique entity identifier of the identity provider (keystone) to
# use when generating SAML assertions. This value is required to generate
# identity provider metadata and must be a URI (a URL is recommended). For
# example: `https://keystone.example.com/v3/OS-FEDERATION/saml2/idp`. (uri
# value)
#idp_entity_id = <None>
# This is the single sign-on (SSO) service location of the identity provider
# which accepts HTTP POST requests. A value is required to generate identity
# provider metadata. For example: `https://keystone.example.com/v3/OS-
# FEDERATION/saml2/sso`. (uri value)
#idp_sso_endpoint = <None>
# This is the language used by the identity provider's organization. (string
# value)
#idp_lang = en
# This is the name of the identity provider's organization. (string value)
#idp_organization_name = SAML Identity Provider
# This is the name of the identity provider's organization to be displayed.
# (string value)
#idp_organization_display_name = OpenStack SAML Identity Provider
# This is the URL of the identity provider's organization. The URL referenced
# here should be useful to humans. (uri value)
#idp_organization_url = https://example.com/
# This is the company name of the identity provider's contact person. (string
# value)
#idp_contact_company = Example, Inc.
# This is the given name of the identity provider's contact person. (string
# value)
#idp_contact_name = SAML Identity Provider Support
# This is the surname of the identity provider's contact person. (string value)
#idp_contact_surname = Support
# This is the email address of the identity provider's contact person. (string
# value)
#idp_contact_email = support@example.com
# This is the telephone number of the identity provider's contact person.
# (string value)
#idp_contact_telephone = +1 800 555 0100
# This is the type of contact that best describes the identity provider's
# contact person. (string value)
# Allowed values: technical, support, administrative, billing, other
#idp_contact_type = other
# Absolute path to the identity provider metadata file. This file should be
# generated with the `keystone-manage saml_idp_metadata` command. There is
# typically no reason to change this value. (string value)
#idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml
# The prefix of the RelayState SAML attribute to use when generating enhanced
# client and proxy (ECP) assertions. In a typical deployment, there is no
# reason to change this value. (string value)
#relay_state_prefix = ss:mem:
[security_compliance]
#
# From keystone
#
# The maximum number of days a user can go without authenticating before being
# considered "inactive" and automatically disabled (locked). This feature is
# disabled by default; set any value to enable it. This feature depends on the
# `sql` backend for the `[identity] driver`. When a user exceeds this threshold
# and is considered "inactive", the user's `enabled` attribute in the HTTP API
# may not match the value of the user's `enabled` column in the user table.
# (integer value)
# Minimum value: 1
#disable_user_account_days_inactive = <None>
# The maximum number of times that a user can fail to authenticate before the
# user account is locked for the number of seconds specified by
# `[security_compliance] lockout_duration`. This feature is disabled by
# default. If this feature is enabled and `[security_compliance]
# lockout_duration` is not set, then users may be locked out indefinitely until
# the user is explicitly enabled via the API. This feature depends on the `sql`
# backend for the `[identity] driver`. (integer value)
# Minimum value: 1
#lockout_failure_attempts = <None>
# The number of seconds a user account will be locked when the maximum number
# of failed authentication attempts (as specified by `[security_compliance]
# lockout_failure_attempts`) is exceeded. Setting this option will have no
# effect unless you also set `[security_compliance] lockout_failure_attempts`
# to a non-zero value. This feature depends on the `sql` backend for the
# `[identity] driver`. (integer value)
# Minimum value: 1
#lockout_duration = 1800
# The number of days for which a password will be considered valid before
# requiring it to be changed. This feature is disabled by default. If enabled,
# new password changes will have an expiration date, however existing passwords
# would not be impacted. This feature depends on the `sql` backend for the
# `[identity] driver`. (integer value)
# Minimum value: 1
#password_expires_days = <None>
# Comma separated list of user IDs to be ignored when checking if a password is
# expired. Passwords for users in this list will not expire. This feature will
# only be enabled if `[security_compliance] password_expires_days` is set.
# (list value)
#password_expires_ignore_user_ids =
# This controls the number of previous user password iterations to keep in
# history, in order to enforce that newly created passwords are unique. Setting
# the value to one (the default) disables this feature. Thus, to enable this
# feature, values must be greater than 1. This feature depends on the `sql`
# backend for the `[identity] driver`. (integer value)
# Minimum value: 1
#unique_last_password_count = 1
# The number of days that a password must be used before the user can change
# it. This prevents users from changing their passwords immediately in order to
# wipe out their password history and reuse an old password. This feature does
# not prevent administrators from manually resetting passwords. It is disabled
# by default and allows for immediate password changes. This feature depends on
# the `sql` backend for the `[identity] driver`. Note: If
# `[security_compliance] password_expires_days` is set, then the value for this
# option should be less than the `password_expires_days`. (integer value)
# Minimum value: 0
#minimum_password_age = 0
# The regular expression used to validate password strength requirements. By
# default, the regular expression will match any password. The following is an
# example of a pattern which requires at least 1 letter, 1 digit, and have a
# minimum length of 7 characters: ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$ This feature
# depends on the `sql` backend for the `[identity] driver`. (string value)
#password_regex = <None>
# Describe your password regular expression here in language for humans. If a
# password fails to match the regular expression, the contents of this
# configuration variable will be returned to users to explain why their
# requested password was insufficient. (string value)
#password_regex_description = <None>
[shadow_users]
#
# From keystone
#
# Entry point for the shadow users backend driver in the
# `keystone.identity.shadow_users` namespace. This driver is used for
# persisting local user references to externally-managed identities (via
# federation, LDAP, etc). Keystone only provides a `sql` driver, so there is no
# reason to change this option unless you are providing a custom entry point.
# (string value)
#driver = sql
[signing]
#
# From keystone
#
# DEPRECATED: Absolute path to the public certificate file to use for signing
# PKI and PKIZ tokens. Set this together with `[signing] keyfile`. For non-
# production environments, you may be interested in using `keystone-manage
# pki_setup` to generate self-signed certificates. There is no reason to set
# this option unless you are using either a `pki` or `pkiz` `[token] provider`.
# (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#certfile = /etc/keystone/ssl/certs/signing_cert.pem
# DEPRECATED: Absolute path to the private key file to use for signing PKI and
# PKIZ tokens. Set this together with `[signing] certfile`. There is no reason
# to set this option unless you are using either a `pki` or `pkiz` `[token]
# provider`. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#keyfile = /etc/keystone/ssl/private/signing_key.pem
# DEPRECATED: Absolute path to the public certificate authority (CA) file to
# use when creating self-signed certificates with `keystone-manage pki_setup`.
# Set this together with `[signing] ca_key`. There is no reason to set this
# option unless you are using a `pki` or `pkiz` `[token] provider` value in a
# non-production environment. Use a `[signing] certfile` issued from a trusted
# certificate authority instead. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#ca_certs = /etc/keystone/ssl/certs/ca.pem
# DEPRECATED: Absolute path to the private certificate authority (CA) key file
# to use when creating self-signed certificates with `keystone-manage
# pki_setup`. Set this together with `[signing] ca_certs`. There is no reason
# to set this option unless you are using a `pki` or `pkiz` `[token] provider`
# value in a non-production environment. Use a `[signing] certfile` issued from
# a trusted certificate authority instead. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#ca_key = /etc/keystone/ssl/private/cakey.pem
# DEPRECATED: Key size (in bits) to use when generating a self-signed token
# signing certificate. There is no reason to set this option unless you are
# using a `pki` or `pkiz` `[token] provider` value in a non-production
# environment. Use a `[signing] certfile` issued from a trusted certificate
# authority instead. (integer value)
# Minimum value: 1024
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#key_size = 2048
# DEPRECATED: The validity period (in days) to use when generating a self-
# signed token signing certificate. There is no reason to set this option
# unless you are using a `pki` or `pkiz` `[token] provider` value in a non-
# production environment. Use a `[signing] certfile` issued from a trusted
# certificate authority instead. (integer value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#valid_days = 3650
# DEPRECATED: The certificate subject to use when generating a self-signed
# token signing certificate. There is no reason to set this option unless you
# are using a `pki` or `pkiz` `[token] provider` value in a non-production
# environment. Use a `[signing] certfile` issued from a trusted certificate
# authority instead. (string value)
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com
[token]
#
# From keystone
#
# This is a list of external authentication mechanisms which should add token
# binding metadata to tokens, such as `kerberos` or `x509`. Binding metadata is
# enforced according to the `[token] enforce_token_bind` option. (list value)
#bind =
# This controls the token binding enforcement policy on tokens presented to
# keystone with token binding metadata (as specified by the `[token] bind`
# option). `disabled` completely bypasses token binding validation.
# `permissive` and `strict` do not require tokens to have binding metadata (but
# will validate it if present), whereas `required` demands that tokens carry
# binding metadata. `permissive` will allow unsupported binding metadata
# to pass through without validation (usually to be validated at another time
# by another component), whereas `strict` and `required` will demand that the
# included binding metadata be supported by keystone. (string value)
# Allowed values: disabled, permissive, strict, required
#enforce_token_bind = permissive
# The amount of time that a token should remain valid (in seconds). Drastically
# reducing this value may break "long-running" operations that involve multiple
# services to coordinate together, and will force users to authenticate with
# keystone more frequently. Drastically increasing this value will increase
# load on the `[token] driver`, as more tokens will be simultaneously valid.
# Keystone tokens are also bearer tokens, so a shorter duration will also
# reduce the potential security impact of a compromised token. (integer value)
# Minimum value: 0
# Maximum value: 9223372036854775807
#expiration = 3600
# Entry point for the token provider in the `keystone.token.provider`
# namespace. The token provider controls the token construction, validation,
# and revocation operations. Keystone includes `fernet`, `pkiz`, `pki`, and
# `uuid` token providers. `uuid` tokens must be persisted (using the backend
# specified in the `[token] driver` option), but do not require any extra
# configuration or setup. `fernet` tokens do not need to be persisted at all,
# but require that you run `keystone-manage fernet_setup` (also see the
# `keystone-manage fernet_rotate` command). `pki` and `pkiz` tokens can be
# validated offline, without making HTTP calls to keystone, but require that
# certificates be installed and distributed to facilitate signing tokens and
# later validating those signatures. (string value)
#provider = uuid
# Entry point for the token persistence backend driver in the
# `keystone.token.persistence` namespace. Keystone provides `kvs`, `memcache`,
# `memcache_pool`, and `sql` drivers. The `kvs` backend depends on the
# configuration in the `[kvs]` section. The `memcache` and `memcache_pool`
# options depend on the configuration in the `[memcache]` section. The `sql`
# option (default) depends on the options in your `[database]` section. If
# you're using the `fernet` `[token] provider`, this backend will not be
# utilized to persist tokens at all. (string value)
#driver = sql
# Toggle for caching token creation and validation data. This has no effect
# unless global caching is enabled. (boolean value)
#caching = true
# The number of seconds to cache token creation and validation data. This has
# no effect unless both global and `[token] caching` are enabled. (integer
# value)
# Minimum value: 0
# Maximum value: 9223372036854775807
#cache_time = <None>
# This toggles support for revoking individual tokens by the token identifier
# and thus various token enumeration operations (such as listing all tokens
# issued to a specific user). These operations are used to determine the list
# of tokens to consider revoked. Do not disable this option if you're using the
# `kvs` `[revoke] driver`. (boolean value)
#revoke_by_id = true
# This toggles whether scoped tokens may be re-scoped to a new project or
# domain, thereby preventing users from exchanging a scoped token (including
# those with a default project scope) for any other token. This forces users to
# either authenticate for unscoped tokens (and later exchange that unscoped
# token for tokens with a more specific scope) or to provide their credentials
# in every request for a scoped token to avoid re-scoping altogether. (boolean
# value)
#allow_rescope_scoped_token = true
# DEPRECATED: This controls the hash algorithm to use to uniquely identify PKI
# tokens without having to transmit the entire token to keystone (which may be
# several kilobytes). This can be set to any algorithm that hashlib supports.
# WARNING: Before changing this value, the `auth_token` middleware protecting
# all other services must be configured with the set of hash algorithms to
# expect from keystone (both your old and new value for this option), otherwise
# token revocation will not be processed correctly. (string value)
# Allowed values: md5, sha1, sha224, sha256, sha384, sha512
# This option is deprecated for removal since M.
# Its value may be silently ignored in the future.
# Reason: PKI token support has been deprecated in the M release and will be
# removed in the O release. Fernet or UUID tokens are recommended.
#hash_algorithm = md5
# This controls whether roles should be included with tokens that are not
# directly assigned to the token's scope, but are instead linked implicitly to
# other role assignments. (boolean value)
#infer_roles = true
# Enable storing issued token data to the token validation cache so that the
# first token validation doesn't actually cause a full validation cycle.
# (boolean value)
#cache_on_issue = false
[tokenless_auth]
#
# From keystone
#
# The list of distinguished names which identify trusted issuers of client
# certificates allowed to use X.509 tokenless authorization. If the option is
# absent, then no certificates will be allowed. The components of each
# distinguished name (DN) must be separated by commas and contain no spaces.
# Furthermore, because an individual DN may contain commas, this configuration
# option may be repeated multiple times to represent multiple values. For
# example, keystone.conf would include two consecutive lines in order to trust
# two different DNs, such as `trusted_issuer = CN=john,OU=keystone,O=openstack`
# and `trusted_issuer = CN=mary,OU=eng,O=abc`. (multi valued)
#trusted_issuer =
# The federated protocol ID used to represent X.509 tokenless authorization.
# This is used in combination with the value of `[tokenless_auth]
# issuer_attribute` to find a corresponding federated mapping. In a typical
# deployment, there is no reason to change this value. (string value)
#protocol = x509
# The name of the WSGI environment variable used to pass the issuer of the
# client certificate to keystone. This attribute is used as an identity
# provider ID for the X.509 tokenless authorization along with the protocol to
# look up its corresponding mapping. In a typical deployment, there is no
# reason to change this value. (string value)
#issuer_attribute = SSL_CLIENT_I_DN
[trust]
#
# From keystone
#
# Delegation and impersonation features using trusts can be optionally
# disabled. (boolean value)
#enabled = true
# Allows authorization to be redelegated from one user to another, effectively
# chaining trusts together. When disabled, the `remaining_uses` attribute of a
# trust is constrained to be zero. (boolean value)
#allow_redelegation = false
# Maximum number of times that authorization can be redelegated from one user
# to another in a chain of trusts. This number may be reduced further for a
# specific trust. (integer value)
#max_redelegation_count = 3
# Entry point for the trust backend driver in the `keystone.trust` namespace.
# Keystone only provides a `sql` driver, so there is no reason to change this
# unless you are providing a custom entry point. (string value)
#driver = sql
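The example pattern given for `[security_compliance] password_regex` can be checked outside of keystone before you deploy it. The following is a minimal Python sketch (not part of keystone itself) that tests candidate passwords against that example pattern:

```python
import re

# Example pattern from the [security_compliance] password_regex description:
# at least 1 letter, 1 digit, and a minimum length of 7 characters.
PASSWORD_PATTERN = re.compile(r'^(?=.*\d)(?=.*[a-zA-Z]).{7,}$')

def password_is_acceptable(password):
    """Return True if the password satisfies the example policy."""
    return PASSWORD_PATTERN.match(password) is not None

print(password_is_acceptable('abc1234'))      # True: letters, a digit, 7 chars
print(password_is_acceptable('short1'))       # False: only 6 characters
print(password_is_acceptable('lettersonly'))  # False: no digit
```

Pair any pattern you deploy with a matching `password_regex_description` so that rejected users see a human-readable explanation rather than the raw regular expression.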
Use the keystone-paste.ini
file to configure the Web Server Gateway
Interface (WSGI) middleware pipeline for the Identity service:
# Keystone PasteDeploy configuration file.
[filter:debug]
use = egg:oslo.middleware#debug
[filter:request_id]
use = egg:oslo.middleware#request_id
[filter:build_auth_context]
use = egg:keystone#build_auth_context
[filter:token_auth]
use = egg:keystone#token_auth
[filter:admin_token_auth]
# This is deprecated in the M release and will be removed in the O release.
# Use `keystone-manage bootstrap` and remove this from the pipelines below.
use = egg:keystone#admin_token_auth
[filter:json_body]
use = egg:keystone#json_body
[filter:cors]
use = egg:oslo.middleware#cors
oslo_config_project = keystone
[filter:http_proxy_to_wsgi]
use = egg:oslo.middleware#http_proxy_to_wsgi
[filter:ec2_extension]
use = egg:keystone#ec2_extension
[filter:ec2_extension_v3]
use = egg:keystone#ec2_extension_v3
[filter:s3_extension]
use = egg:keystone#s3_extension
[filter:url_normalize]
use = egg:keystone#url_normalize
[filter:sizelimit]
use = egg:oslo.middleware#sizelimit
[filter:osprofiler]
use = egg:osprofiler#osprofiler
[app:public_service]
use = egg:keystone#public_service
[app:service_v3]
use = egg:keystone#service_v3
[app:admin_service]
use = egg:keystone#admin_service
[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension public_service
[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension s3_extension admin_service
[pipeline:api_v3]
# The last item in this pipeline must be service_v3 or an equivalent
# application. It cannot be a filter.
pipeline = cors sizelimit http_proxy_to_wsgi osprofiler url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension_v3 s3_extension service_v3
[app:public_version_service]
use = egg:keystone#public_version_service
[app:admin_version_service]
use = egg:keystone#admin_version_service
[pipeline:public_version_api]
pipeline = cors sizelimit osprofiler url_normalize public_version_service
[pipeline:admin_version_api]
pipeline = cors sizelimit osprofiler url_normalize admin_version_service
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
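Each `pipeline` line composes the listed filters around the final application: every filter wraps the application that follows it, so a request passes through the filters in pipeline order before reaching the app at the end. The toy sketch below illustrates that wrapping order; these simplified filters stand in for the real keystone and oslo.middleware factories and are illustrative only:

```python
# Each filter factory takes the next WSGI application and returns a new
# application that records its name before delegating downstream.
def make_filter(name):
    def filter_factory(app):
        def middleware(environ, start_response):
            environ.setdefault('filters.seen', []).append(name)
            return app(environ, start_response)
        return middleware
    return filter_factory

def public_service(environ, start_response):
    # Stand-in for the final app entry (e.g. egg:keystone#public_service).
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'handled by app']

# A pipeline such as "cors sizelimit url_normalize public_service" is built
# by wrapping right-to-left, so the leftmost filter runs first.
pipeline = ['cors', 'sizelimit', 'url_normalize']
app = public_service
for name in reversed(pipeline):
    app = make_filter(name)(app)

environ = {}
body = app(environ, lambda status, headers: None)
print(environ['filters.seen'])  # ['cors', 'sizelimit', 'url_normalize']
```

This is why the comments above insist that the last item in a pipeline must be an application, not a filter: a filter has nothing left to delegate to.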
You can specify a dedicated logging configuration file in the keystone.conf
configuration file. For example, /etc/keystone/logging.conf
.
For details, see the Python logging module documentation.
[loggers]
keys=root,access
[handlers]
keys=production,file,access_file,devel
[formatters]
keys=minimal,normal,debug
###########
# Loggers #
###########
[logger_root]
level=WARNING
handlers=file
[logger_access]
level=INFO
qualname=access
handlers=access_file
################
# Log Handlers #
################
[handler_production]
class=handlers.SysLogHandler
level=ERROR
formatter=normal
args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)
[handler_file]
class=handlers.WatchedFileHandler
level=WARNING
formatter=normal
args=('error.log',)
[handler_access_file]
class=handlers.WatchedFileHandler
level=INFO
formatter=minimal
args=('access.log',)
[handler_devel]
class=StreamHandler
level=NOTSET
formatter=debug
args=(sys.stdout,)
##################
# Log Formatters #
##################
[formatter_minimal]
format=%(message)s
[formatter_normal]
format=(%(name)s): %(asctime)s %(levelname)s %(message)s
[formatter_debug]
format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s
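To make keystone use this file, reference it from keystone.conf. With oslo.log, the relevant option is `log_config_append` in the `[DEFAULT]` section (verify the exact option name against your release, as it has changed over time):

```ini
[DEFAULT]
# Load logger, handler, and formatter definitions from the file below.
# When set, the other [DEFAULT] logging options are ignored in favor of
# this file's configuration.
log_config_append = /etc/keystone/logging.conf
```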
Use the policy.json
file to define additional access controls that apply to
the Identity service:
{
"admin_required": "role:admin or is_admin:1",
"service_role": "role:service",
"service_or_admin": "rule:admin_required or rule:service_role",
"owner" : "user_id:%(user_id)s",
"admin_or_owner": "rule:admin_required or rule:owner",
"token_subject": "user_id:%(target.token.user_id)s",
"admin_or_token_subject": "rule:admin_required or rule:token_subject",
"service_admin_or_token_subject": "rule:service_or_admin or rule:token_subject",
"default": "rule:admin_required",
"identity:get_region": "",
"identity:list_regions": "",
"identity:create_region": "rule:admin_required",
"identity:update_region": "rule:admin_required",
"identity:delete_region": "rule:admin_required",
"identity:get_service": "rule:admin_required",
"identity:list_services": "rule:admin_required",
"identity:create_service": "rule:admin_required",
"identity:update_service": "rule:admin_required",
"identity:delete_service": "rule:admin_required",
"identity:get_endpoint": "rule:admin_required",
"identity:list_endpoints": "rule:admin_required",
"identity:create_endpoint": "rule:admin_required",
"identity:update_endpoint": "rule:admin_required",
"identity:delete_endpoint": "rule:admin_required",
"identity:get_domain": "rule:admin_required or token.project.domain.id:%(target.domain.id)s",
"identity:list_domains": "rule:admin_required",
"identity:create_domain": "rule:admin_required",
"identity:update_domain": "rule:admin_required",
"identity:delete_domain": "rule:admin_required",
"identity:get_project": "rule:admin_required or project_id:%(target.project.id)s",
"identity:list_projects": "rule:admin_required",
"identity:list_user_projects": "rule:admin_or_owner",
"identity:create_project": "rule:admin_required",
"identity:update_project": "rule:admin_required",
"identity:delete_project": "rule:admin_required",
"identity:get_user": "rule:admin_or_owner",
"identity:list_users": "rule:admin_required",
"identity:create_user": "rule:admin_required",
"identity:update_user": "rule:admin_required",
"identity:delete_user": "rule:admin_required",
"identity:change_password": "rule:admin_or_owner",
"identity:get_group": "rule:admin_required",
"identity:list_groups": "rule:admin_required",
"identity:list_groups_for_user": "rule:admin_or_owner",
"identity:create_group": "rule:admin_required",
"identity:update_group": "rule:admin_required",
"identity:delete_group": "rule:admin_required",
"identity:list_users_in_group": "rule:admin_required",
"identity:remove_user_from_group": "rule:admin_required",
"identity:check_user_in_group": "rule:admin_required",
"identity:add_user_to_group": "rule:admin_required",
"identity:get_credential": "rule:admin_required",
"identity:list_credentials": "rule:admin_required",
"identity:create_credential": "rule:admin_required",
"identity:update_credential": "rule:admin_required",
"identity:delete_credential": "rule:admin_required",
"identity:ec2_get_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",
"identity:ec2_list_credentials": "rule:admin_or_owner",
"identity:ec2_create_credential": "rule:admin_or_owner",
"identity:ec2_delete_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",
"identity:get_role": "rule:admin_required",
"identity:list_roles": "rule:admin_required",
"identity:create_role": "rule:admin_required",
"identity:update_role": "rule:admin_required",
"identity:delete_role": "rule:admin_required",
"identity:get_domain_role": "rule:admin_required",
"identity:list_domain_roles": "rule:admin_required",
"identity:create_domain_role": "rule:admin_required",
"identity:update_domain_role": "rule:admin_required",
"identity:delete_domain_role": "rule:admin_required",
"identity:get_implied_role": "rule:admin_required ",
"identity:list_implied_roles": "rule:admin_required",
"identity:create_implied_role": "rule:admin_required",
"identity:delete_implied_role": "rule:admin_required",
"identity:list_role_inference_rules": "rule:admin_required",
"identity:check_implied_role": "rule:admin_required",
"identity:check_grant": "rule:admin_required",
"identity:list_grants": "rule:admin_required",
"identity:create_grant": "rule:admin_required",
"identity:revoke_grant": "rule:admin_required",
"identity:list_role_assignments": "rule:admin_required",
"identity:list_role_assignments_for_tree": "rule:admin_required",
"identity:get_policy": "rule:admin_required",
"identity:list_policies": "rule:admin_required",
"identity:create_policy": "rule:admin_required",
"identity:update_policy": "rule:admin_required",
"identity:delete_policy": "rule:admin_required",
"identity:check_token": "rule:admin_or_token_subject",
"identity:validate_token": "rule:service_admin_or_token_subject",
"identity:validate_token_head": "rule:service_or_admin",
"identity:revocation_list": "rule:service_or_admin",
"identity:revoke_token": "rule:admin_or_token_subject",
"identity:create_trust": "user_id:%(trust.trustor_user_id)s",
"identity:list_trusts": "",
"identity:list_roles_for_trust": "",
"identity:get_role_for_trust": "",
"identity:delete_trust": "",
"identity:create_consumer": "rule:admin_required",
"identity:get_consumer": "rule:admin_required",
"identity:list_consumers": "rule:admin_required",
"identity:delete_consumer": "rule:admin_required",
"identity:update_consumer": "rule:admin_required",
"identity:authorize_request_token": "rule:admin_required",
"identity:list_access_token_roles": "rule:admin_required",
"identity:get_access_token_role": "rule:admin_required",
"identity:list_access_tokens": "rule:admin_required",
"identity:get_access_token": "rule:admin_required",
"identity:delete_access_token": "rule:admin_required",
"identity:list_projects_for_endpoint": "rule:admin_required",
"identity:add_endpoint_to_project": "rule:admin_required",
"identity:check_endpoint_in_project": "rule:admin_required",
"identity:list_endpoints_for_project": "rule:admin_required",
"identity:remove_endpoint_from_project": "rule:admin_required",
"identity:create_endpoint_group": "rule:admin_required",
"identity:list_endpoint_groups": "rule:admin_required",
"identity:get_endpoint_group": "rule:admin_required",
"identity:update_endpoint_group": "rule:admin_required",
"identity:delete_endpoint_group": "rule:admin_required",
"identity:list_projects_associated_with_endpoint_group": "rule:admin_required",
"identity:list_endpoints_associated_with_endpoint_group": "rule:admin_required",
"identity:get_endpoint_group_in_project": "rule:admin_required",
"identity:list_endpoint_groups_for_project": "rule:admin_required",
"identity:add_endpoint_group_to_project": "rule:admin_required",
"identity:remove_endpoint_group_from_project": "rule:admin_required",
"identity:create_identity_provider": "rule:admin_required",
"identity:list_identity_providers": "rule:admin_required",
"identity:get_identity_providers": "rule:admin_required",
"identity:update_identity_provider": "rule:admin_required",
"identity:delete_identity_provider": "rule:admin_required",
"identity:create_protocol": "rule:admin_required",
"identity:update_protocol": "rule:admin_required",
"identity:get_protocol": "rule:admin_required",
"identity:list_protocols": "rule:admin_required",
"identity:delete_protocol": "rule:admin_required",
"identity:create_mapping": "rule:admin_required",
"identity:get_mapping": "rule:admin_required",
"identity:list_mappings": "rule:admin_required",
"identity:delete_mapping": "rule:admin_required",
"identity:update_mapping": "rule:admin_required",
"identity:create_service_provider": "rule:admin_required",
"identity:list_service_providers": "rule:admin_required",
"identity:get_service_provider": "rule:admin_required",
"identity:update_service_provider": "rule:admin_required",
"identity:delete_service_provider": "rule:admin_required",
"identity:get_auth_catalog": "",
"identity:get_auth_projects": "",
"identity:get_auth_domains": "",
"identity:list_projects_for_user": "",
"identity:list_domains_for_user": "",
"identity:list_revoke_events": "",
"identity:create_policy_association_for_endpoint": "rule:admin_required",
"identity:check_policy_association_for_endpoint": "rule:admin_required",
"identity:delete_policy_association_for_endpoint": "rule:admin_required",
"identity:create_policy_association_for_service": "rule:admin_required",
"identity:check_policy_association_for_service": "rule:admin_required",
"identity:delete_policy_association_for_service": "rule:admin_required",
"identity:create_policy_association_for_region_and_service": "rule:admin_required",
"identity:check_policy_association_for_region_and_service": "rule:admin_required",
"identity:delete_policy_association_for_region_and_service": "rule:admin_required",
"identity:get_policy_for_endpoint": "rule:admin_required",
"identity:list_endpoints_for_policy": "rule:admin_required",
"identity:create_domain_config": "rule:admin_required",
"identity:get_domain_config": "rule:admin_required",
"identity:update_domain_config": "rule:admin_required",
"identity:delete_domain_config": "rule:admin_required",
"identity:get_domain_config_default": "rule:admin_required"
}
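In a real deployment these rules are evaluated by the oslo.policy library, which supports full boolean expressions. The following simplified sketch (handling only `or` and the `rule:`, `role:`, `user_id:` and `is_admin:` check types used above, with hypothetical credential dictionaries) illustrates how a request's credentials are matched against a rule such as `admin_or_owner`:

```python
# Simplified illustration of policy.json rule evaluation. The real
# engine is oslo.policy; this sketch supports only "or" and the check
# types used by the rules above.

RULES = {
    "admin_required": "role:admin or is_admin:1",
    "owner": "user_id:%(user_id)s",
    "admin_or_owner": "rule:admin_required or rule:owner",
}

def check(expr, creds, target):
    """Return True if any 'or'-separated term of expr passes."""
    for term in expr.split(" or "):
        kind, _, match = term.partition(":")
        match = match % target  # expand %(user_id)s-style substitutions
        if kind == "rule" and check(RULES[match], creds, target):
            return True
        if kind == "role" and match in creds.get("roles", []):
            return True
        if kind == "user_id" and creds.get("user_id") == match:
            return True
        if kind == "is_admin" and str(creds.get("is_admin", 0)) == match:
            return True
    return False

# Hypothetical credentials: an admin passes admin_or_owner through the
# admin_required rule; a plain user passes only on their own resource.
admin = {"roles": ["admin"], "user_id": "alice"}
user = {"roles": ["member"], "user_id": "bob"}
```

For example, `check(RULES["admin_or_owner"], user, {"user_id": "bob"})` succeeds via the `owner` rule, while the same check against a target owned by another user fails.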
Identity supports a caching layer that is above the configurable subsystems, such as token or assignment. The majority of the caching configuration options are set in the [cache] section. However, each section that can be cached usually has a caching option that toggles caching for that specific section. By default, caching is globally disabled. Options are as follows:
Configuration option = Default value | Description |
---|---|
[memcache] | |
dead_retry = 300 |
(Integer) Number of seconds memcached server is considered dead before it is tried again. This is used by the key value store system (e.g. token pooled memcached persistence backend). |
pool_connection_get_timeout = 10 |
(Integer) Number of seconds that an operation will wait to get a memcache client connection. This is used by the key value store system (e.g. token pooled memcached persistence backend). |
pool_maxsize = 10 |
(Integer) Max total number of open connections to every memcached server. This is used by the key value store system (e.g. token pooled memcached persistence backend). |
pool_unused_timeout = 60 |
(Integer) Number of seconds a connection to memcached is held unused in the pool before it is closed. This is used by the key value store system (e.g. token pooled memcached persistence backend). |
Current functional back ends are:
dogpile.cache.memcached: Memcached, using the python-memcached library.
dogpile.cache.pylibmc: Memcached, using the pylibmc library.
dogpile.cache.bmemcached: Memcached, using the python-binary-memcached library.
dogpile.cache.redis: Redis.
dogpile.cache.dbm: Local DBM file.
dogpile.cache.memory: In-memory cache.
dogpile.cache.mongo: MongoDB.
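As an illustration, a deployment might enable caching globally with a Memcached back end and keep token caching on. The option names below follow keystone.conf ([cache] enabled, backend, memcache_servers; [token] caching) but should be treated as a sketch to verify against your release's sample configuration file; the snippet parses the fragment with Python's configparser to show the effective values:

```python
import configparser

# Hedged sketch of enabling Identity caching globally and for the token
# subsystem. Option names follow keystone.conf; the controller host name
# is illustrative.
SAMPLE = """
[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = controller:11211

[token]
caching = true
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)

# Caching must be switched on globally AND per section to take effect.
cache_on = conf.getboolean("cache", "enabled")
backend = conf.get("cache", "backend")
token_cached = cache_on and conf.getboolean("token", "caching")
```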
This chapter details the Identity service configuration options. For installation prerequisites and step-by-step walkthroughs, see the Newton Installation Tutorials and Guides for your distribution and the OpenStack Administrator Guide.
Note
The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.
The Image service has two APIs: the user-facing API, and the registry API, which is for internal requests that require access to the database.
Both of the APIs currently have two major versions: v1 (SUPPORTED) and v2 (CURRENT). You can run either or both versions by setting appropriate values for the enable_v1_api, enable_v2_api, enable_v1_registry, and enable_v2_registry options.
If the v2 API is used, running glance-registry is optional, as v2 of glance-api can connect directly to the database.
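A v2-only deployment therefore disables the v1 API and both registry toggles. The following is a hedged sketch of the relevant glance-api.conf lines, parsed with Python's configparser and checked against the v1-requires-registry rule described in the option tables:

```python
import configparser

# glance-api.conf fragment for a v2-only deployment: the v1 API and
# both registry deployments are switched off, as recommended in the
# option descriptions. Values are illustrative.
SAMPLE = """
[DEFAULT]
enable_v1_api = false
enable_v1_registry = false
enable_v2_api = true
enable_v2_registry = false
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)

def get(opt):
    return conf.getboolean("DEFAULT", opt)

# Consistency rule from the option tables: use of the Registry is
# mandatory for the v1 API, so v1 may not run without its registry.
if get("enable_v1_api"):
    assert get("enable_v1_registry"), "v1 API requires enable_v1_registry"
```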
To assist you in formulating your deployment strategy for the Image APIs, the Glance team has published a statement concerning the status and development plans of the APIs: Using public Image API.
Tables of all the options used to configure the APIs, including enabling SSL and modifying WSGI settings, are found below.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
admin_role = admin |
(String) Role used to identify an authenticated user as administrator. Provide a string value representing a Keystone role to identify an administrative user. Users with this role will be granted administrative privileges. The default value for this option is ‘admin’. Possible values: * A string value which is a valid Keystone role Related options: * None |
allow_anonymous_access = False |
(Boolean) Allow limited access to unauthenticated users. Assign a boolean to determine API access for unauthenticated users. When set to False, the API cannot be accessed by unauthenticated users. When set to True, unauthenticated users can access the API with read-only privileges. This however only applies when using ContextMiddleware. Possible values: * True * False Related options: * None |
available_plugins = |
(List) A list of artifacts that are allowed in the format name or name-version. Empty list means that any artifact can be loaded. |
client_socket_timeout = 900 |
(Integer) Timeout for client connections’ socket operations. Provide a valid integer value representing time in seconds to set the period of wait before an incoming connection can be closed. The default value is 900 seconds. The value zero implies wait forever. Possible values: * Zero * Positive integer Related options: * None |
enable_v1_api = True |
(Boolean) Deploy the v1 OpenStack Images API. When this option is set to True, the Glance service will respond to requests on registered endpoints conforming to the v1 OpenStack Images API. NOTES: * If this option is enabled, then enable_v1_registry must also be set to True to enable mandatory usage of the Registry service with the v1 API. * If this option is disabled, then the enable_v1_registry option, which is enabled by default, is also recommended to be disabled. * This option is separate from enable_v2_api; both the v1 and v2 OpenStack Images APIs can be deployed independent of each other. * If deploying only the v2 Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v1_registry * enable_v2_api |
enable_v1_registry = True |
(Boolean) Deploy the v1 API Registry service. When this option is set to True, the Registry service will be enabled in Glance for v1 API requests. NOTES: * Use of the Registry is mandatory in the v1 API, so this option must be set to True if the enable_v1_api option is enabled. * If deploying only the v2 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v1_api |
enable_v2_api = True |
(Boolean) Deploy the v2 OpenStack Images API. When this option is set to True, the Glance service will respond to requests on registered endpoints conforming to the v2 OpenStack Images API. NOTES: * If this option is disabled, then the enable_v2_registry option, which is enabled by default, is also recommended to be disabled. * This option is separate from enable_v1_api; both the v1 and v2 OpenStack Images APIs can be deployed independent of each other. * If deploying only the v1 Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v2_registry * enable_v1_api |
enable_v2_registry = True |
(Boolean) Deploy the v2 API Registry service. When this option is set to True, the Registry service will be enabled in Glance for v2 API requests. NOTES: * Use of the Registry is optional in the v2 API, so this option must only be enabled if both enable_v2_api is set to True and the data_api option is set to glance.db.registry.api. * If deploying only the v1 OpenStack Images API, this option, which is enabled by default, should be disabled. Possible values: * True * False Related options: * enable_v2_api * data_api |
http_keepalive = True |
(Boolean) Set keep alive option for HTTP over TCP. Provide a boolean value to determine sending of keep alive packets. If set to False, the server returns the header “Connection: close”. If set to True, the server returns a “Connection: Keep-Alive” in its responses. This enables retention of the same TCP connection for HTTP conversations instead of opening a new one with each new request. This option must be set to False if the client socket connection needs to be closed explicitly after the response is received and read successfully by the client. Possible values: * True * False Related options: * None |
image_size_cap = 1099511627776 |
(Integer) Maximum size of image a user can upload in bytes. An image upload greater than the size mentioned here would result in an image creation failure. This configuration option defaults to 1099511627776 bytes (1 TiB). NOTES: * This value should only be increased after careful consideration and must be set less than or equal to 8 EiB (9223372036854775808). * This value must be set with careful consideration of the backend storage capacity. Setting this to a very low value may result in a large number of image failures, and setting this to a very large value may result in faster consumption of storage. Hence, this must be set according to the nature of images created and storage capacity available. Possible values: * Any positive number less than or equal to 9223372036854775808 |
load_enabled = True |
(Boolean) When false, no artifacts can be loaded regardless of available_plugins. When true, artifacts can be loaded. |
location_strategy = location_order |
(String) Strategy to determine the preference order of image locations. This configuration option indicates the strategy to determine the order in which an image’s locations must be accessed to serve the image’s data. Glance then retrieves the image data from the first responsive active location it finds in this list. This option takes one of two possible values: location_order and store_type. The default value is location_order, which suggests that image data be served by using locations in the order they are stored in Glance. The store_type value sets the image location preference based on the order in which the storage backends are listed as a comma separated list for the configuration option store_type_preference. Possible values: * location_order * store_type Related options: * store_type_preference |
max_header_line = 16384 |
(Integer) Maximum line size of message headers. Provide an integer value representing a length to limit the size of message headers. The default value is 16384. NOTE: max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). However, it is to be kept in mind that larger values for max_header_line would flood the logs. Setting max_header_line to 0 sets no limit for the line size of message headers. Possible values: * 0 * Positive integer Related options: * None |
max_request_id_length = 64 |
(Integer) Limit the request ID length. Provide an integer value to limit the length of the request ID to the specified length. The default value is 64. Users can change this to any integer value between 0 and 16384, keeping in mind that a larger value may flood the logs. Possible values: * Integer value between 0 and 16384 Related options: * None |
owner_is_tenant = True |
(Boolean) Set the image owner to tenant or the authenticated user. Assign a boolean value to determine the owner of an image. When set to True, the owner of the image is the tenant. When set to False, the owner of the image will be the authenticated user issuing the request. Setting it to False makes the image private to the associated user, and sharing with other users within the same tenant (or “project”) requires explicit image sharing via image membership. Possible values: * True * False Related options: * None |
public_endpoint = None |
(String) Public url endpoint to use for Glance/Glare versions response. This is the public url endpoint that will appear in the Glance/Glare “versions” response. If no value is specified, the endpoint that is displayed in the version’s response is that of the host running the API service. Change the endpoint to represent the proxy URL if the API service is running behind a proxy. If the service is running behind a load balancer, add the load balancer’s URL for this value. Possible values: * None * Proxy URL * Load balancer URL Related options: * None |
secure_proxy_ssl_header = None |
(String) DEPRECATED: The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is “HTTP_X_FORWARDED_PROTO”. Use the http_proxy_to_wsgi middleware instead. |
send_identity_headers = False |
(Boolean) Send headers received from identity when making requests to registry. Typically, Glance registry can be deployed in multiple flavors, which may or may not include authentication. For example, trusted-auth is a flavor that does not require the registry service to authenticate the requests it receives. However, the registry service may still need a user context to be populated to serve the requests. This can be achieved by the caller (usually the Glance API) passing through the headers it received from authenticating with identity for the same request. The typical headers sent are X-User-Id, X-Tenant-Id, X-Roles, X-Identity-Status and X-Service-Catalog. Provide a boolean value to determine whether to send the identity headers to provide tenant and user information along with the requests to the registry service. By default, this option is set to False, which means that user and tenant information is not readily available; it must be obtained by authenticating. Hence, if this is set to False, flavor must be set to a value that either includes authentication or an authenticated user context. Possible values: * True * False Related options: * flavor |
show_multiple_locations = False |
(Boolean) DEPRECATED: Show all image locations when returning an image. This configuration option indicates whether to show all the image locations when returning image details to the user. When multiple image locations exist for an image, the locations are ordered based on the location strategy indicated by the configuration option location_strategy. The image locations are shown under the image property locations. NOTES: * Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! * If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_image_direct_url MUST be set to False. Possible values: * True * False Related options: * show_image_direct_url * location_strategy This option will be removed in the Ocata release because the same functionality can be achieved with greater granularity by using policies. Please see the Newton release notes for more information. |
tcp_keepidle = 600 |
(Integer) Set the wait time before a connection recheck. Provide a positive integer value representing time in seconds which is set as the idle wait time before a TCP keep alive packet can be sent to the host. The default value is 600 seconds. Setting tcp_keepidle helps verify at regular intervals that a connection is intact and prevents frequent TCP connection reestablishment. Possible values: * Positive integer value representing time in seconds Related options: * None |
use_user_token = True |
(Boolean) DEPRECATED: Whether to pass through the user token when making requests to the registry. To prevent failures with token expiration during big files upload, it is recommended to set this parameter to False. If “use_user_token” is not in effect, then admin credentials can be specified. This option was considered harmful and has been deprecated in the M release. It will be removed in the O release. For more information read OSSN-0060. Related functionality with uploading big images has been implemented with Keystone trusts support. |
[glance_store] | |
default_store = file |
(String) The default scheme to use for storing images. Provide a string value representing the default scheme to use for storing images. If not set, Glance uses file as the default scheme to store images with the file store. NOTE: The value given for this configuration option must be a valid scheme for a store registered with the stores configuration option. Possible values: * file * filesystem * http * https * swift * swift+http * swift+https * swift+config * rbd * sheepdog * cinder * vsphere Related Options: * stores |
store_capabilities_update_min_interval = 0 |
(Integer) Minimum interval in seconds to execute updating dynamic storage capabilities based on current backend status. Provide an integer value representing time in seconds to set the minimum interval before an update of dynamic storage capabilities for a storage backend can be attempted. Setting store_capabilities_update_min_interval does not mean updates occur periodically based on the set interval. Rather, the update is performed at the elapse of this interval set, if an operation of the store is triggered. By default, this option is set to zero and is disabled. Provide an integer value greater than zero to enable this option. NOTE: For more information on store capabilities and their updates, please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo/store-capabilities.html For more information on setting up a particular store in your deployment and help with the usage of this feature, please contact the storage driver maintainers listed here: http://docs.openstack.org/developer/glance_store/drivers/index.html Possible values: * Zero * Positive integer Related Options: * None |
stores = file, http |
(List) List of enabled Glance stores. Register the storage backends to use for storing disk images as a comma separated list. The default stores enabled for storing disk images with Glance are file and http. Possible values: * A comma separated list that could include: * file * http * swift * rbd * sheepdog * cinder * vmware Related Options: * default_store |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[paste_deploy] | |
config_file = glance-api-paste.ini |
(String) Name of the paste configuration file. Provide a string value representing the name of the paste configuration file to use for configuring pipelines for server application deployments. NOTES: * Provide the name or the path relative to the glance directory for the paste configuration file and not the absolute path. * The sample paste configuration file shipped with Glance need not be edited in most cases as it comes with ready-made pipelines for all common deployment flavors. If no value is specified for this option, the paste.ini file with the prefix of the corresponding Glance service’s configuration file name will be searched for in the known configuration directories. (For example, if this option is missing from or has no value set in glance-api.conf, the service will look for a file named glance-api-paste.ini.) If the paste configuration file is not found, the service will not start. Possible values: * A string value representing the name of the paste configuration file. Related Options: * flavor |
flavor = keystone |
(String) Deployment flavor to use in the server application pipeline. Provide a string value representing the appropriate deployment flavor used in the server application pipeline. This is typically the partial name of a pipeline in the paste configuration file with the service name removed. For example, if your paste section name in the paste configuration file is [pipeline:glance-api-keystone], set flavor to keystone. Possible values: * String value representing a partial pipeline name. Related Options: * config_file |
[store_type_location_strategy] | |
store_type_preference = |
(List) Preference order of storage backends. Provide a comma separated list of store names in the order in which images should be retrieved from storage backends. These store names must be registered with the stores configuration option. NOTE: The store_type_preference configuration option is applied only if store_type is chosen as a value for the location_strategy configuration option. An empty list will not change the location order. Possible values: * Empty list * Comma separated list of registered store names. Legal values are: * file * http * rbd * swift * sheepdog * cinder * vmware Related options: * location_strategy * stores |
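Tying the last three options together: store_type_preference is honoured only when location_strategy is set to store_type, and each preferred store should be registered via the stores option. The following is a hedged sketch (illustrative values, not a real Glance validator) that parses such a fragment with Python's configparser and applies that consistency rule:

```python
import configparser

# Illustrative glance-api.conf fragment combining location_strategy,
# stores, and store_type_preference. Values are assumptions for the
# sake of the example.
SAMPLE = """
[DEFAULT]
location_strategy = store_type

[glance_store]
stores = file, http, rbd

[store_type_location_strategy]
store_type_preference = rbd, file
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)

strategy = conf.get("DEFAULT", "location_strategy")
stores = [s.strip() for s in conf.get("glance_store", "stores").split(",")]
pref = [s.strip() for s in
        conf.get("store_type_location_strategy",
                 "store_type_preference").split(",")]

# store_type_preference only takes effect with the store_type strategy,
# and preferred stores must be registered in the stores list.
if strategy == "store_type":
    unknown = [s for s in pref if s not in stores]
    assert not unknown, "preferred stores must be registered: %s" % unknown
```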
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
ca_file = /etc/ssl/cafile |
(String) Absolute path to the CA file. Provide a string value representing a valid absolute path to the Certificate Authority file to use for client authentication. A CA file typically contains necessary trusted certificates to use for the client authentication. This is essential to ensure that a secure connection is established to the server via the internet. Possible values: * Valid absolute path to the CA file Related options: * None |
cert_file = /etc/ssl/certs |
(String) Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file which is required to start the API service securely. A certificate file typically is a public key container and includes the server’s public key, server name, server information and the signature which was a result of the verification process using the CA certificate. This is required for a secure connection establishment. Possible values: * Valid absolute path to the certificate file Related options: * None |
key_file = /etc/ssl/key/key-file.pem |
(String) Absolute path to a private key file. Provide a string value representing a valid absolute path to a private key file which is required to establish the client-server connection. Possible values: * Absolute path to the private key file Related options: * None |
The Image service supports several back ends for storing virtual machine images:
Note
You must use only raw image formats with the Ceph RBD back end.
The following tables detail the options available for each.
Configuration option = Default value | Description |
---|---|
[glance_store] | |
cinder_api_insecure = False |
(Boolean) Allow to perform insecure SSL requests to cinder. If this option is set to True, HTTPS endpoint connection is verified using the CA certificates file specified by the cinder_ca_certificates_file option. Possible values: * True * False Related options: * cinder_ca_certificates_file |
cinder_ca_certificates_file = None |
(String) Location of a CA certificates file used for cinder client requests. The specified CA certificates file, if set, is used to verify cinder connections via HTTPS endpoint. If the endpoint is HTTP, this value is ignored. cinder_api_insecure must be set to True to enable the verification. Possible values: * Path to a ca certificates file Related options: * cinder_api_insecure |
cinder_catalog_info = volumev2::publicURL |
(String) Information to match when looking for cinder in the service catalog. When cinder_endpoint_template is not set and any of cinder_store_auth_address, cinder_store_user_name, cinder_store_project_name, or cinder_store_password is not set, the cinder store uses this information to look up the cinder endpoint from the service catalog in the current context. cinder_os_region_name, if set, is taken into consideration to fetch the appropriate endpoint. The service catalog can be listed by the openstack catalog list command. Possible values: * A string of the following form: <service_type>:<service_name>:<endpoint_type> At least service_type and endpoint_type should be specified. service_name can be omitted. Related options: * cinder_os_region_name * cinder_endpoint_template * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password |
cinder_endpoint_template = None |
(String) Override service catalog lookup with template for cinder endpoint. When this option is set, this value is used to generate the cinder endpoint, instead of looking it up from the service catalog. This value is ignored if cinder_store_auth_address, cinder_store_user_name, cinder_store_project_name, and cinder_store_password are specified. If this configuration option is set, cinder_catalog_info will be ignored. Possible values: * URL template string for cinder endpoint, where %%(tenant)s is replaced with the current tenant (project) name. For example: http://cinder.openstack.example.org/v2/%%(tenant)s Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name * cinder_store_password * cinder_catalog_info |
cinder_http_retries = 3 |
(Integer) Number of cinderclient retries on failed http calls.$sentinal$When a call fails with an error, cinderclient retries the call up to the specified number of times after sleeping for a few seconds.$sentinal$Possible values: * A positive integer$sentinal$Related options: * None |
cinder_os_region_name = None |
(String) Region name to lookup cinder service from the service catalog.$sentinal$This is used only when cinder_catalog_info is used for determining the endpoint. If set, the lookup for cinder endpoint by this node is filtered to the specified region. It is useful when multiple regions are listed in the catalog. If this is not set, the endpoint is looked up from every region.$sentinal$Possible values: * A string that is a valid region name.$sentinal$Related options: * cinder_catalog_info |
cinder_state_transition_timeout = 300 |
(Integer) Time period, in seconds, to wait for a cinder volume transition to complete.$sentinal$When the cinder volume is created, deleted, or attached to the glance node to read/write the volume data, the volume’s state is changed. For example, the newly created volume status changes from creating to available after the creation process is completed. This specifies the maximum time to wait for the status change. If a timeout occurs while waiting, or the status changes to an unexpected value (for example, error), the image creation fails.$sentinal$Possible values: * A positive integer$sentinal$Related options: * None |
cinder_store_auth_address = None |
(String) The address where the cinder authentication service is listening.$sentinal$When all of the cinder_store_auth_address , cinder_store_user_name , cinder_store_project_name , and cinder_store_password options are specified, the specified values are always used for the authentication. This is useful to hide the image volumes from users by storing them in a project/tenant specific to the image service. It also enables users to share the image volume among other projects under the control of glance’s ACL.$sentinal$If any of these options is not set, the cinder endpoint is looked up from the service catalog, and the current context’s user and project are used.$sentinal$Possible values: * A valid authentication service address, for example: http://openstack.example.org/identity/v2.0 $sentinal$ Related options: * cinder_store_user_name * cinder_store_password * cinder_store_project_name |
cinder_store_password = None |
(String) Password for the user authenticating against cinder.$sentinal$This must be used with all the following related options. If any of these are not specified, the user of the current context is used.$sentinal$Possible values: * A valid password for the user specified by cinder_store_user_name$sentinal$ Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_project_name |
cinder_store_project_name = None |
(String) Project name where the image volume is stored in cinder.$sentinal$If this configuration option is not set, the project in current context is used.$sentinal$This must be used with all the following related options. If any of these are not specified, the project of the current context is used.$sentinal$Possible values: * A valid project name$sentinal$Related options: * cinder_store_auth_address * cinder_store_user_name * cinder_store_password |
cinder_store_user_name = None |
(String) User name to authenticate against cinder.$sentinal$This must be used with all the following related options. If any of these are not specified, the user of the current context is used.$sentinal$Possible values: * A valid user name$sentinal$Related options: * cinder_store_auth_address * cinder_store_password * cinder_store_project_name |
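The options above work together: either the cinder endpoint is resolved from the service catalog, or the four `cinder_store_*` credentials select a dedicated service project for image volumes. A minimal `glance-api.conf` sketch follows; all addresses, user names, project names, and passwords are hypothetical placeholders, not recommended values.

```ini
[glance_store]
# Resolve the cinder endpoint from the service catalog, restricted
# to one region of a multi-region catalog.
cinder_catalog_info = volumev2::publicURL
cinder_os_region_name = RegionOne

# Retry failed cinderclient calls up to 3 times and wait at most
# 300 seconds for volume state transitions.
cinder_http_retries = 3
cinder_state_transition_timeout = 300

# Optionally hide image volumes in a dedicated project; all four
# options must be set together, otherwise the current context's
# credentials are used instead.
cinder_store_auth_address = http://openstack.example.org/identity/v2.0
cinder_store_user_name = glance
cinder_store_password = GLANCE_PASS
cinder_store_project_name = service
```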
Configuration option = Default value | Description |
---|---|
[glance_store] | |
filesystem_store_datadir = /var/lib/glance/images |
(String) Directory to which the filesystem backend store writes images.$sentinal$Upon start up, Glance creates the directory if it doesn’t already exist and verifies write access to the user under which glance-api runs. If the write access isn’t available, a BadStoreConfiguration exception is raised and the filesystem store may not be available for adding new images.$sentinal$NOTE: This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images.$sentinal$Possible values: * A valid path to a directory$sentinal$Related options: * filesystem_store_datadirs * filesystem_store_file_perm |
filesystem_store_datadirs = None |
(Multi-valued) List of directories and their priorities to which the filesystem backend store writes images.$sentinal$The filesystem store can be configured to store images in multiple directories as opposed to using a single directory specified by the filesystem_store_datadir configuration option. When using multiple directories, each directory can be given an optional priority to specify the preference order in which they should be used. Priority is an integer that is concatenated to the directory path with a colon where a higher value indicates higher priority. When two directories have the same priority, the directory with most free space is used. When no priority is specified, it defaults to zero.$sentinal$More information on configuring filesystem store with multiple store directories can be found at http://docs.openstack.org/developer/glance/configuring.html$sentinal$NOTE: This directory is used only when filesystem store is used as a storage backend. Either filesystem_store_datadir or filesystem_store_datadirs option must be specified in glance-api.conf . If both options are specified, a BadStoreConfiguration will be raised and the filesystem store may not be available for adding new images.$sentinal$Possible values: * List of strings of the following form: <a valid directory path>:<optional integer priority>$sentinal$Related options: * filesystem_store_datadir * filesystem_store_file_perm |
filesystem_store_file_perm = 0 |
(Integer) File access permissions for the image files.$sentinal$Set the intended file access permissions for image data. This provides a way to enable other services, e.g. Nova, to consume images directly from the filesystem store. The users running the services that are intended to have access can be made members of the group that owns the created files. Assigning a value less than or equal to zero for this configuration option signifies that no changes are made to the default permissions. This value is decoded as an octal digit.$sentinal$For more information, refer to the documentation at http://docs.openstack.org/developer/glance/configuring.html$sentinal$Possible values: * A valid file access permission * Zero * Any negative integer$sentinal$Related options: * None |
filesystem_store_metadata_file = None |
(String) Filesystem store metadata file.$sentinal$The path to a file which contains the metadata to be returned with any location associated with the filesystem store. The file must contain a valid JSON object. The object should contain the keys id and mountpoint . The value for both keys should be a string.$sentinal$Possible values: * A valid path to the store metadata file$sentinal$Related options: * None |
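Tying the filesystem store options together, a possible `glance-api.conf` sketch is shown below; the directory paths and permission value are hypothetical examples, not defaults.

```ini
[glance_store]
# Spread images across two directories; the higher weight (200) is
# preferred, and the directory with the most free space wins a tie.
# Do not also set filesystem_store_datadir, or a
# BadStoreConfiguration error is raised.
filesystem_store_datadirs = /var/lib/glance/images1:200
filesystem_store_datadirs = /var/lib/glance/images2:100

# Octal file permissions; 640 lets members of the owning group
# (for example, a nova service user added to that group) read
# image files directly from the store.
filesystem_store_file_perm = 640
```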
Configuration option = Default value | Description |
---|---|
[glance_store] | |
http_proxy_information = {} |
(Dict) The http/https proxy information to be used to connect to the remote server.$sentinal$This configuration option specifies the http/https proxy information that should be used to connect to the remote server. The proxy information should be a key value pair of the scheme and proxy, for example, http:10.0.0.1:3128. You can also specify proxies for multiple schemes by separating the key value pairs with a comma, for example, http:10.0.0.1:3128, https:10.0.0.1:1080.$sentinal$Possible values: * A comma separated list of scheme:proxy pairs as described above$sentinal$Related options: * None |
https_ca_certificates_file = None |
(String) Path to the CA bundle file.$sentinal$This configuration option enables the operator to use a custom Certificate Authority file to verify the remote server certificate. If this option is set, the https_insecure option will be ignored and the CA file specified will be used to authenticate the server certificate and establish a secure connection to the server.$sentinal$Possible values: * A valid path to a CA file$sentinal$Related options: * https_insecure |
https_insecure = True |
(Boolean) Set verification of the remote server certificate.$sentinal$This configuration option takes in a boolean value to determine whether or not to verify the remote server certificate. If set to True, the remote server certificate is not verified. If the option is set to False, then the default CA truststore is used for verification.$sentinal$This option is ignored if https_ca_certificates_file is set. The remote server certificate will then be verified using the file specified using the https_ca_certificates_file option.$sentinal$Possible values: * True * False$sentinal$Related options: * https_ca_certificates_file |
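A brief sketch of the http store options above in `glance-api.conf`; the proxy addresses and CA bundle path are hypothetical.

```ini
[glance_store]
# Reach remote image servers through per-scheme proxies
# (hypothetical proxy addresses).
http_proxy_information = http:10.0.0.1:3128, https:10.0.0.1:1080

# Verify remote server certificates against a custom CA bundle;
# when this is set, https_insecure is ignored.
https_ca_certificates_file = /etc/ssl/certs/ca-certificates.crt
```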
Configuration option = Default value | Description |
---|---|
[glance_store] | |
rados_connect_timeout = 0 |
(Integer) Timeout value for connecting to Ceph cluster.$sentinal$This configuration option takes in the timeout value in seconds used when connecting to the Ceph cluster i.e. it sets the time to wait for glance-api before closing the connection. This prevents glance-api hangups during the connection to RBD. If the value for this option is set to less than or equal to 0, no timeout is set and the default librados value is used.$sentinal$Possible Values: * Any integer value$sentinal$Related options: * None |
rbd_store_ceph_conf = /etc/ceph/ceph.conf |
(String) Ceph configuration file path.$sentinal$This configuration option takes in the path to the Ceph configuration file to be used. If the value for this option is not set by the user or is set to None, librados will locate the default configuration file which is located at /etc/ceph/ceph.conf. If using Cephx authentication, this file should include a reference to the right keyring in a client.<USER> section.$sentinal$Possible Values: * A valid path to a configuration file$sentinal$Related options: * rbd_store_user |
rbd_store_chunk_size = 8 |
(Integer) Size, in megabytes, to chunk RADOS images into.$sentinal$Provide an integer value representing the size in megabytes to chunk Glance images into. The default chunk size is 8 megabytes. For optimal performance, the value should be a power of two.$sentinal$When Ceph’s RBD object storage system is used as the storage backend for storing Glance images, the images are chunked into objects of the size set using this option. These chunked objects are then stored across the distributed block data store for use by Glance.$sentinal$Possible Values: * Any positive integer value$sentinal$Related options: * None |
rbd_store_pool = images |
(String) RADOS pool in which images are stored.$sentinal$When RBD is used as the storage backend for storing Glance images, the images are stored by means of logical grouping of the objects (chunks of images) into a pool . Each pool is defined with the number of placement groups it can contain. The default pool that is used is ‘images’.$sentinal$More information on the RBD storage backend can be found here: http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/$sentinal$Possible Values: * A valid pool name$sentinal$Related options: * None |
rbd_store_user = None |
(String) RADOS user to authenticate as.$sentinal$This configuration option takes in the RADOS user to authenticate as. This is only needed when RADOS authentication is enabled and is applicable only if the user is using Cephx authentication. If the value for this option is not set by the user or is set to None, a default value will be chosen, which will be based on the client.<USER> section in rbd_store_ceph_conf.$sentinal$Possible Values: * A valid RADOS user$sentinal$Related options: * rbd_store_ceph_conf |
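Combining the RBD store options above, a possible `glance-api.conf` sketch is shown below. The `glance` RADOS user is a hypothetical example and assumes a matching `client.glance` keyring reference exists in the Ceph configuration file.

```ini
[glance_store]
# Connect to Ceph as a (hypothetical) dedicated 'glance' user whose
# keyring is referenced in a client.glance section of ceph.conf.
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = glance
rbd_store_pool = images

# 8 MB chunks; keep this a power of two for best performance.
rbd_store_chunk_size = 8

# Give up on unreachable clusters after 10 seconds instead of
# relying on the librados default.
rados_connect_timeout = 10
```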
Configuration option = Default value | Description |
---|---|
[glance_store] | |
sheepdog_store_address = 127.0.0.1 |
(String) Address to bind the Sheepdog daemon to.$sentinal$Provide a string value representing the address to bind the Sheepdog daemon to. The default address set for the ‘sheep’ is 127.0.0.1.$sentinal$The Sheepdog daemon, also called ‘sheep’, manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages directed to the address set using sheepdog_store_address option to store chunks of Glance images.$sentinal$Possible values: * A valid IPv4 address * A valid IPv6 address * A valid hostname$sentinal$Related Options: * sheepdog_store_port |
sheepdog_store_chunk_size = 64 |
(Integer) Chunk size for images to be stored in Sheepdog data store.$sentinal$Provide an integer value representing the size in mebibytes (1 MiB = 1048576 bytes) to chunk Glance images into. The default chunk size is 64 mebibytes.$sentinal$When using Sheepdog distributed storage system, the images are chunked into objects of this size and then stored across the distributed data store to use for Glance.$sentinal$Chunk sizes, if a power of two, help avoid fragmentation and enable improved performance.$sentinal$Possible values: * Positive integer value representing size in mebibytes.$sentinal$Related Options: * None |
sheepdog_store_port = 7000 |
(Port number) Port number on which the sheep daemon will listen.$sentinal$Provide an integer value representing a valid port number on which you want the Sheepdog daemon to listen. The default port is 7000.$sentinal$The Sheepdog daemon, also called ‘sheep’, manages the storage in the distributed cluster by writing objects across the storage network. It identifies and acts on the messages it receives on the port number set using sheepdog_store_port option to store chunks of Glance images.$sentinal$Possible values: * A valid port number (0 to 65535)$sentinal$Related Options: * sheepdog_store_address |
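The Sheepdog options above can be sketched in `glance-api.conf` as follows; the address and port shown are the documented defaults.

```ini
[glance_store]
# Where the sheep daemon listens (defaults shown).
sheepdog_store_address = 127.0.0.1
sheepdog_store_port = 7000

# 64 MiB chunks; powers of two help avoid fragmentation.
sheepdog_store_chunk_size = 64
```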
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
default_swift_reference = ref1 |
(String) Reference to default Swift account/backing store parameters.$sentinal$Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ‘ref1’. This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added.$sentinal$Possible values: * A valid string value$sentinal$Related options: * None |
swift_store_auth_address = None |
(String) The address where the Swift authentication service is listening. |
swift_store_config_file = None |
(String) File containing the swift account(s) configurations.$sentinal$Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it helps avoid storage of credentials in the database.$sentinal$Possible values: * None * String value representing a valid configuration file path$sentinal$Related options: * None |
swift_store_key = None |
(String) Auth key for the user authenticating against the Swift authentication service. |
swift_store_user = None |
(String) The user to authenticate against the Swift authentication service. |
[glance_store] | |
default_swift_reference = ref1 |
(String) Reference to default Swift account/backing store parameters.$sentinal$Provide a string value representing a reference to the default set of parameters required for using swift account/backing store for image storage. The default reference value for this configuration option is ‘ref1’. This configuration option dereferences the parameters and facilitates image storage in Swift storage backend every time a new image is added.$sentinal$Possible values: * A valid string value$sentinal$Related options: * None |
swift_store_admin_tenants = |
(List) List of tenants that will be granted admin access.$sentinal$This is a list of tenants that will be granted read/write access on all Swift containers created by Glance in multi-tenant mode. The default value is an empty list.$sentinal$Possible values: * A comma separated list of strings representing UUIDs of Keystone projects/tenants$sentinal$Related options: * None |
swift_store_auth_address = None |
(String) DEPRECATED: The address where the Swift authentication service is listening. The option ‘auth_address’ in the Swift back-end configuration file is used instead. |
swift_store_auth_insecure = False |
(Boolean) Set verification of the server certificate.$sentinal$This boolean determines whether or not to verify the server certificate. If this option is set to True, swiftclient won’t check for a valid SSL certificate when authenticating. If the option is set to False, then the default CA truststore is used for verification.$sentinal$Possible values: * True * False$sentinal$Related options: * swift_store_cacert |
swift_store_auth_version = 2 |
(String) DEPRECATED: Version of the authentication service to use. Valid versions are 2 and 3 for keystone and 1 (deprecated) for swauth and rackspace. The option ‘auth_version’ in the Swift back-end configuration file is used instead. |
swift_store_cacert = /etc/ssl/certs/ca-certificates.crt |
(String) Path to the CA bundle file.$sentinal$This configuration option enables the operator to specify the path to a custom Certificate Authority file for SSL verification when connecting to Swift.$sentinal$Possible values: * A valid path to a CA file$sentinal$Related options: * swift_store_auth_insecure |
swift_store_config_file = None |
(String) Absolute path to the file containing the swift account(s) configurations.$sentinal$Include a string value representing the path to a configuration file that has references for each of the configured Swift account(s)/backing stores. By default, no file path is specified and customized Swift referencing is disabled. Configuring this option is highly recommended while using Swift storage backend for image storage as it avoids storage of credentials in the database.$sentinal$Possible values: * String value representing an absolute path on the glance-api node$sentinal$Related options: * None |
swift_store_container = glance |
(String) Name of single container to store images/name prefix for multiple containers$sentinal$When a single container is being used to store images, this configuration option indicates the container within the Glance account to be used for storing all images. When multiple containers are used to store images, this will be the name prefix for all containers. Usage of single/multiple containers can be controlled using the configuration option swift_store_multiple_containers_seed .$sentinal$When using multiple containers, the containers will be named after the value set for this configuration option with the first N chars of the image UUID as the suffix delimited by an underscore (where N is specified by swift_store_multiple_containers_seed ).$sentinal$Example: if the seed is set to 3 and swift_store_container = glance , then an image with UUID fdae39a1-bac5-4238-aba4-69bcc726e848 would be placed in the container glance_fda . All dashes in the UUID are included when creating the container name but do not count toward the character limit, so when N=10 the container name would be glance_fdae39a1-ba.$sentinal$Possible values: * If using a single container, this configuration option can be any string that is a valid Swift container name in Glance's Swift account * If using multiple containers, this configuration option can be any string as long as it satisfies the container naming rules enforced by Swift. The value of swift_store_multiple_containers_seed should be taken into account as well.$sentinal$Related options: * swift_store_multiple_containers_seed * swift_store_multi_tenant * swift_store_create_container_on_put |
swift_store_create_container_on_put = False |
(Boolean) Create container, if it doesn’t already exist, when uploading image.$sentinal$At the time of uploading an image, if the corresponding container doesn’t exist, it will be created provided this configuration option is set to True. By default, it won’t be created. This behavior is applicable for both single and multiple containers mode.$sentinal$Possible values: * True * False$sentinal$Related options: * None |
swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name |
(String) The URL endpoint to use for Swift backend storage.$sentinal$Provide a string value representing the URL endpoint to use for storing Glance images in Swift store. By default, an endpoint is not set and the storage URL returned by auth is used. Setting an endpoint with swift_store_endpoint overrides the storage URL and is used for Glance image storage.$sentinal$NOTE: The URL should include the path up to, but excluding the container. The location of an object is obtained by appending the container and object to the configured URL.$sentinal$Possible values: * String value representing a valid URL path up to a Swift container$sentinal$Related Options: * None |
swift_store_endpoint_type = publicURL |
(String) Endpoint Type of Swift service.$sentinal$This string value indicates the endpoint type to use to fetch the Swift endpoint. The endpoint type determines the actions the user will be allowed to perform, for instance, reading and writing to the Store. This setting is only used if swift_store_auth_version is greater than 1.$sentinal$Possible values: * publicURL * adminURL * internalURL$sentinal$Related options: * swift_store_endpoint |
swift_store_expire_soon_interval = 60 |
(Integer) Time in seconds defining the size of the window in which a new token may be requested before the current token is due to expire.$sentinal$Typically, the Swift storage driver fetches a new token upon the expiration of the current token to ensure continued access to Swift. However, some Swift transactions (like uploading image segments) may not recover well if the token expires on the fly.$sentinal$Hence, by fetching a new token before the current token expiration, we make sure that the token does not expire or is close to expiry before a transaction is attempted. By default, the Swift storage driver requests a new token 60 seconds or less before the current token expiration.$sentinal$Possible values: * Zero * Positive integer value$sentinal$Related Options: * None |
swift_store_key = None |
(String) DEPRECATED: Auth key for the user authenticating against the Swift authentication service. The option ‘key’ in the Swift back-end configuration file is used to set the authentication key instead. |
swift_store_large_object_chunk_size = 200 |
(Integer) The maximum size, in MB, of the segments when image data is segmented.$sentinal$When image data is segmented to upload images that are larger than the limit enforced by the Swift cluster, image data is broken into segments that are no bigger than the size specified by this configuration option. Refer to swift_store_large_object_size for more detail.$sentinal$For example: if swift_store_large_object_size is 5GB and swift_store_large_object_chunk_size is 1GB, an image of size 6.2GB will be segmented into 7 segments where the first six segments will be 1GB in size and the seventh segment will be 0.2GB.$sentinal$Possible values: * A positive integer that is less than or equal to the large object limit enforced by Swift cluster in consideration.$sentinal$Related options: * swift_store_large_object_size |
swift_store_large_object_size = 5120 |
(Integer) The size threshold, in MB, after which Glance will start segmenting image data.$sentinal$Swift has an upper limit on the size of a single uploaded object. By default, this is 5GB. To upload objects bigger than this limit, objects are segmented into multiple smaller objects that are tied together with a manifest file. For more detail, refer to http://docs.openstack.org/developer/swift/overview_large_objects.html$sentinal$This configuration option specifies the size threshold over which the Swift driver will start segmenting image data into multiple smaller files. Currently, the Swift driver only supports creating Dynamic Large Objects.$sentinal$NOTE: This should be set by taking into account the large object limit enforced by the Swift cluster in consideration.$sentinal$Possible values: * A positive integer that is less than or equal to the large object limit enforced by the Swift cluster in consideration.$sentinal$Related options: * swift_store_large_object_chunk_size |
swift_store_multi_tenant = False |
(Boolean) Store images in tenant’s Swift account.$sentinal$This enables multi-tenant storage mode which causes Glance images to be stored in tenant specific Swift accounts. If this is disabled, Glance stores all images in its own account. More details on the multi-tenant store can be found at https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage$sentinal$Possible values: * True * False$sentinal$Related options: * None |
swift_store_multiple_containers_seed = 0 |
(Integer) Seed indicating the number of containers to use for storing images.$sentinal$When using a single-tenant store, images can be stored in one or more than one containers. When set to 0, all images will be stored in one single container. When set to an integer value between 1 and 32, multiple containers will be used to store images. This configuration option will determine how many containers are created. The total number of containers that will be used is equal to 16^N, so if this config option is set to 2, then 16^2=256 containers will be used to store images.$sentinal$Please refer to swift_store_container for more detail on the naming convention. More detail about using multiple containers can be found at https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-multiple-containers.html$sentinal$NOTE: This is used only when swift_store_multi_tenant is disabled.$sentinal$Possible values: * A non-negative integer less than or equal to 32$sentinal$Related options: * swift_store_container * swift_store_multi_tenant * swift_store_create_container_on_put |
swift_store_region = RegionTwo |
(String) The region of Swift endpoint to use by Glance.$sentinal$Provide a string value representing a Swift region where Glance can connect to for image storage. By default, there is no region set.$sentinal$When Glance uses Swift as the storage backend to store images for a specific tenant that has multiple endpoints, setting of a Swift region with swift_store_region allows Glance to connect to Swift in the specified region as opposed to a single region connectivity.$sentinal$This option can be configured for both single-tenant and multi-tenant storage.$sentinal$NOTE: Setting the region with swift_store_region is tenant-specific and is necessary only if the tenant has multiple endpoints across different regions.$sentinal$Possible values: * A string value representing a valid Swift region.$sentinal$Related Options: * None |
swift_store_retry_get_count = 0 |
(Integer) The number of times a Swift download will be retried before the request fails.$sentinal$Provide an integer value representing the number of times an image download must be retried before erroring out. The default value is zero (no retry on a failed image download). When set to a positive integer value, swift_store_retry_get_count ensures that the download is attempted this many more times upon a download failure before sending an error message.$sentinal$Possible values: * Zero * Positive integer value$sentinal$Related Options: * None |
swift_store_service_type = object-store |
(String) Type of Swift service to use.$sentinal$Provide a string value representing the service type to use for storing images while using Swift backend storage. The default service type is set to object-store .$sentinal$NOTE: If swift_store_auth_version is set to 2, the value for this configuration option needs to be object-store . If using a higher version of Keystone or a different auth scheme, this option may be modified.$sentinal$Possible values: * A string representing a valid service type for Swift storage.$sentinal$Related Options: * None |
swift_store_ssl_compression = True |
(Boolean) SSL layer compression for HTTPS Swift requests.$sentinal$Provide a boolean value to determine whether or not to compress HTTPS Swift requests for images at the SSL layer. By default, compression is enabled.$sentinal$When using Swift as the backend store for Glance image storage, SSL layer compression of HTTPS Swift requests can be set using this option. If set to False, SSL layer compression of HTTPS Swift requests is disabled. Disabling this option may improve performance for images which are already in a compressed format, for example, qcow2.$sentinal$Possible values: * True * False$sentinal$Related Options: * None |
swift_store_use_trusts = True |
(Boolean) Use trusts for multi-tenant Swift store.$sentinal$This option instructs the Swift store to create a trust for each add/get request when the multi-tenant store is in use. Using trusts allows the Swift store to avoid problems that can be caused by an authentication token expiring during the upload or download of data.$sentinal$By default, swift_store_use_trusts is set to True (use of trusts is enabled). If set to False, a user token is used for the Swift connection instead, eliminating the overhead of trust creation.$sentinal$NOTE: This option is considered only when swift_store_multi_tenant is set to True.$sentinal$Possible values: * True * False$sentinal$Related options: * swift_store_multi_tenant |
swift_store_user = None |
(String) DEPRECATED: The user to authenticate against the Swift authentication service. The option ‘user’ in the Swift back-end configuration file is set instead. |
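Because the single-credential options above are deprecated, the recommended layout keeps Swift credentials in a separate back-end configuration file referenced from `glance-api.conf`. The sketch below combines both files in one listing; every path, name, and password is a hypothetical placeholder, and the `[ref1]` section format is an assumption based on the `default_swift_reference` description above.

```ini
# /etc/glance/glance-api.conf
[glance_store]
default_store = swift
# Keep credentials out of the database by referencing a separate
# Swift back-end configuration file.
swift_store_config_file = /etc/glance/glance-swift.conf
default_swift_reference = ref1
swift_store_container = glance
swift_store_create_container_on_put = True
# Seed of 1 gives 16^1 = 16 containers, each named
# glance_<first UUID character>.
swift_store_multiple_containers_seed = 1
# Segment images larger than 5 GB into 200 MB dynamic large objects.
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200

# /etc/glance/glance-swift.conf -- one section per reference
[ref1]
auth_version = 3
auth_address = http://openstack.example.org/identity/v3
user = service:glance
key = GLANCE_PASS
```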
To use vCenter data stores for the Image service back end, you must
update the glance-api.conf
file, as follows:
Add data store parameters to the VMware Datastore Store Options
section.
Specify vSphere as the back end.
Note
Any data stores that you configure for the Image service must also be configured for the Compute service.
You can specify vCenter data stores directly by using the data store name or Storage Policy Based Management (SPBM), which requires vCenter Server 5.5 or later. For details, see Configure vCenter data stores for the back end.
Note
If you intend to use multiple data stores for the back end, use the SPBM feature.
In the glance_store
section, set the stores
and default_store
options to vsphere
, as shown in this code sample:
[glance_store]
# List of stores enabled. Valid stores are: cinder, file, http, rbd,
# sheepdog, swift, vsphere (list value)
stores = file,http,vsphere
# Which back end scheme should Glance use by default if one is not
# specified in a request to add a new image to Glance? Known schemes
# are determined by the known_stores option below.
# Default: 'file'
default_store = vsphere
The following table describes the parameters in the
VMware Datastore Store Options
section:
Configuration option = Default value | Description |
---|---|
[glance_store] | |
vmware_api_retry_count = 10 |
(Integer) The number of VMware API retries.$sentinal$This configuration option specifies the number of times the VMware ESX/VC server API must be retried upon connection related issues or server API call overload. It is not possible to specify ‘retry forever’.$sentinal$Possible Values: * Any positive integer value$sentinal$Related options: * None |
vmware_ca_file = /etc/ssl/certs/ca-certificates.crt |
(String) Absolute path to the CA bundle file.$sentinal$This configuration option enables the operator to use a custom Certificate Authority file to verify the ESX/vCenter certificate.$sentinal$If this option is set, the “vmware_insecure” option will be ignored and the CA file specified will be used to authenticate the ESX/vCenter server certificate and establish a secure connection to the server.$sentinal$Possible Values: * Any string that is a valid absolute path to a CA file$sentinal$Related options: * vmware_insecure |
vmware_datastores = None |
(Multi-valued) The datastores where the image can be stored.$sentinal$This configuration option specifies the datastores where the image can be stored in the VMware store backend. This option may be specified multiple times for specifying multiple datastores. The datastore name should be specified after its datacenter path, separated by ":". An optional weight may be given after the datastore name, separated again by ":" to specify the priority. Thus, the required format becomes <datacenter_path>:<datastore_name>:<optional_weight>.$sentinal$When adding an image, the datastore with highest weight will be selected, unless there is not enough free space available in cases where the image size is already known. If no weight is given, it is assumed to be zero and the directory will be considered for selection last. If multiple datastores have the same weight, then the one with the most free space available is selected.$sentinal$Possible Values: * Any string of the format: <datacenter_path>:<datastore_name>:<optional_weight>$sentinal$Related options: * None |
vmware_insecure = False |
(Boolean) Set verification of the ESX/vCenter server certificate.$sentinal$This configuration option takes a boolean value to determine whether or not to verify the ESX/vCenter server certificate. If this option is set to True, the ESX/vCenter server certificate is not verified. If this option is set to False, then the default CA truststore is used for verification.$sentinal$This option is ignored if the “vmware_ca_file” option is set. In that case, the ESX/vCenter server certificate will then be verified using the file specified using the “vmware_ca_file” option.$sentinal$Possible Values: * True * False$sentinal$Related options: * vmware_ca_file |
vmware_server_host = 127.0.0.1 |
(String) Address of the ESX/ESXi or vCenter Server target system.$sentinal$This configuration option sets the address of the ESX/ESXi or vCenter Server target system. This option is required when using the VMware storage backend. The address can contain an IP address (127.0.0.1) or a DNS name (www.my-domain.com).$sentinal$Possible Values: * A valid IPv4 or IPv6 address * A valid DNS name$sentinal$Related options: * vmware_server_username * vmware_server_password |
vmware_server_password = vmware |
(String) Server password.$sentinal$This configuration option takes the password for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend.$sentinal$Possible Values: * Any string that is a password corresponding to the username specified using the “vmware_server_username” option$sentinal$Related options: * vmware_server_host * vmware_server_username |
vmware_server_username = root |
(String) Server username.$sentinal$This configuration option takes the username for authenticating with the VMware ESX/ESXi or vCenter Server. This option is required when using the VMware storage backend.$sentinal$Possible Values: * Any string that is the username for a user with appropriate privileges$sentinal$Related options: * vmware_server_host * vmware_server_password |
vmware_store_image_dir = /openstack_glance |
(String) The directory where the glance images will be stored in the datastore.$sentinal$This configuration option specifies the path to the directory where the glance images will be stored in the VMware datastore. If this option is not set, the default directory where the glance images are stored is openstack_glance.$sentinal$Possible Values: * Any string that is a valid path to a directory$sentinal$Related options: * None |
vmware_task_poll_interval = 5 |
(Integer) Interval in seconds used for polling remote tasks invoked on VMware ESX/VC server.$sentinal$This configuration option takes in the sleep time in seconds for polling an on-going async task as part of the VMWare ESX/VC server API call.$sentinal$Possible Values: * Any positive integer value$sentinal$Related options: * None |
The following block of text shows a sample configuration:
# ============ VMware Datastore Store Options =====================
# ESX/ESXi or vCenter Server target system.
# The server value can be an IP address or a DNS name
# e.g. 127.0.0.1, 127.0.0.1:443, www.vmware-infra.com
vmware_server_host = 192.168.0.10
# Server username (string value)
vmware_server_username = ADMINISTRATOR
# Server password (string value)
vmware_server_password = password
# Inventory path to a datacenter (string value)
# Value optional when vmware_server_ip is an ESX/ESXi host: if specified
# should be `ha-datacenter`.
vmware_datacenter_path = DATACENTER
# Datastore associated with the datacenter (string value)
vmware_datastore_name = datastore1
# PBM service WSDL file location URL. e.g.
# file:///opt/SDK/spbm/wsdl/pbmService.wsdl Not setting this
# will disable storage policy based placement of images.
# (string value)
#vmware_pbm_wsdl_location =
# The PBM policy. If `pbm_wsdl_location` is set, a PBM policy needs
# to be specified. This policy will be used to select the datastore
# in which the images will be stored.
#vmware_pbm_policy =
# The interval used for polling remote tasks
# invoked on VMware ESX/VC server in seconds (integer value)
vmware_task_poll_interval = 5
# Absolute path of the folder containing the images in the datastore
# (string value)
vmware_store_image_dir = /openstack_glance
# Allow to perform insecure SSL requests to the target system (boolean value)
vmware_api_insecure = False
You can specify a vCenter data store for the back end by setting the
vmware_datastore_name
parameter value to the vCenter name of
the data store. This configuration limits the back end to a single
data store.
If present, comment or delete the vmware_pbm_wsdl_location
and
vmware_pbm_policy
parameters.
Uncomment and define the vmware_datastore_name
parameter with the
name of the vCenter data store.
Complete the other vCenter configuration parameters as appropriate.
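Putting these steps together, a single-datastore fragment might look like the following sketch. The server address and the DATACENTER and datastore1 names are placeholders for your own inventory:

```ini
# Single-datastore back end: name the data store directly, SPBM disabled.
vmware_server_host = 192.168.0.10
vmware_server_username = ADMINISTRATOR
vmware_server_password = password
vmware_datacenter_path = DATACENTER

# Uncomment and set to the vCenter name of the data store:
vmware_datastore_name = datastore1

# Comment out or delete the SPBM parameters:
#vmware_pbm_wsdl_location =
#vmware_pbm_policy =
```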
You can modify many options in the Image service. The following tables provide a comprehensive list.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allow_additional_image_properties = True |
(Boolean) Allow users to add additional/custom properties to images.$sentinal$Glance defines a standard set of properties (in its schema) that appear on every image. These properties are also known as base properties . In addition to these properties, Glance allows users to add custom properties to images. These are known as additional properties .$sentinal$By default, this configuration option is set to True and users are allowed to add additional properties. The number of additional properties that can be added to an image can be controlled via image_property_quota configuration option.$sentinal$Possible values: * True * False$sentinal$Related options: * image_property_quota |
api_limit_max = 1000 |
(Integer) Maximum number of results that could be returned by a request.$sentinal$As described in the help text of limit_param_default , some requests may return multiple results. The number of results to be returned are governed either by the limit parameter in the request or the limit_param_default configuration option. The value in either case, can’t be greater than the absolute maximum defined by this configuration option. Anything greater than this value is trimmed down to the maximum value defined here.$sentinal$NOTE: Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience.$sentinal$Possible values: * Any positive integer$sentinal$Related options: * limit_param_default |
backlog = 4096 |
(Integer) Set the number of incoming connection requests.$sentinal$Provide a positive integer value to limit the number of requests in the backlog queue. The default queue size is 4096.$sentinal$An incoming connection to a TCP listener socket is queued before a connection can be established with the server. Setting the backlog for a TCP socket ensures a limited queue size for incoming traffic.$sentinal$Possible values: * Positive integer$sentinal$Related options: * None |
bind_host = 0.0.0.0 |
(String) IP address to bind the glance servers to.$sentinal$Provide an IP address to bind the glance server to. The default value is 0.0.0.0 .$sentinal$Edit this option to enable the server to listen on one particular IP address on the network card. This facilitates selection of a particular network interface for the server.$sentinal$Possible values: * A valid IPv4 address * A valid IPv6 address$sentinal$Related options: * None |
bind_port = None |
(Port number) Port number on which the server will listen.$sentinal$Provide a valid port number to bind the server’s socket to. This port is then set to identify processes and forward network messages that arrive at the server. The default bind_port value for the API server is 9292 and for the registry server is 9191.$sentinal$Possible values: * A valid port number (0 to 65535)$sentinal$Related options: * None |
data_api = glance.db.sqlalchemy.api |
(String) Python module path of data access API.$sentinal$Specifies the path to the API to use for accessing the data model. This option determines how the image catalog data will be accessed.$sentinal$Possible values: * glance.db.sqlalchemy.api * glance.db.registry.api * glance.db.simple.api$sentinal$If this option is set to glance.db.sqlalchemy.api then the image catalog data is stored in and read from the database via the SQLAlchemy Core and ORM APIs.$sentinal$Setting this option to glance.db.registry.api will force all database access requests to be routed through the Registry service. This avoids data access from the Glance API nodes for an added layer of security, scalability and manageability.$sentinal$NOTE: In v2 OpenStack Images API, the registry service is optional. In order to use the Registry API in v2, the option enable_v2_registry must be set to True .$sentinal$Finally, when this configuration option is set to glance.db.simple.api , image catalog data is stored in and read from an in-memory data structure. This is primarily used for testing.$sentinal$Related options: * enable_v2_api * enable_v2_registry |
digest_algorithm = sha256 |
(String) Digest algorithm to use for digital signature.$sentinal$Provide a string value representing the digest algorithm to use for generating digital signatures. By default, sha256 is used.$sentinal$To get a list of the available algorithms supported by the version of OpenSSL on your platform, run the command: openssl list-message-digest-algorithms . Examples are ‘sha1’, ‘sha256’, and ‘sha512’.$sentinal$NOTE: digest_algorithm is not related to Glance’s image signing and verification. It is only used to sign the universally unique identifier (UUID) as a part of the certificate file and key file validation.$sentinal$Possible values: * An OpenSSL message digest algorithm identifier$sentinal$Related options: * None |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
image_location_quota = 10 |
(Integer) Maximum number of locations allowed on an image.$sentinal$Any negative value is interpreted as unlimited.$sentinal$Related options: * None |
image_member_quota = 128 |
(Integer) Maximum number of image members per image.$sentinal$This limits the maximum of users an image can be shared with. Any negative value is interpreted as unlimited.$sentinal$Related options: * None |
image_property_quota = 128 |
(Integer) Maximum number of properties allowed on an image.$sentinal$This enforces an upper limit on the number of additional properties an image can have. Any negative value is interpreted as unlimited.$sentinal$NOTE: This won’t have any impact if additional properties are disabled. Please refer to allow_additional_image_properties .$sentinal$Related options: * allow_additional_image_properties |
image_tag_quota = 128 |
(Integer) Maximum number of tags allowed on an image.$sentinal$Any negative value is interpreted as unlimited.$sentinal$Related options: * None |
limit_param_default = 25 |
(Integer) The default number of results to return for a request.$sentinal$Responses to certain API requests, like list images, may return multiple items. The number of results returned can be explicitly controlled by specifying the limit parameter in the API request. However, if a limit parameter is not specified, this configuration value will be used as the default number of results to be returned for any API request.$sentinal$NOTES: * The value of this configuration option may not be greater than the value specified by api_limit_max . * Setting this to a very large value may slow down database queries and increase response times. Setting this to a very low value may result in poor user experience.$sentinal$Possible values: * Any positive integer$sentinal$Related options: * api_limit_max |
metadata_encryption_key = None |
(String) AES key for encrypting store location metadata.$sentinal$Provide a string value representing the AES cipher to use for encrypting Glance store metadata.$sentinal$NOTE: The AES key to use must be set to a random string of length 16, 24 or 32 bytes.$sentinal$Possible values: * String value representing a valid AES key$sentinal$Related options: * None |
metadata_source_path = /etc/glance/metadefs/ |
(String) Absolute path to the directory where JSON metadefs files are stored.$sentinal$Glance Metadata Definitions (“metadefs”) are served from the database, but are stored in files in the JSON format. The files in this directory are used to initialize the metadefs in the database. Additionally, when metadefs are exported from the database, the files are written to this directory.$sentinal$NOTE: If you plan to export metadefs, make sure that this directory has write permissions set for the user being used to run the glance-api service.$sentinal$Possible values: * String value representing a valid absolute pathname$sentinal$Related options: * None |
property_protection_file = None |
(String) The location of the property protection file.$sentinal$Provide a valid path to the property protection file which contains the rules for property protections and the roles/policies associated with them.$sentinal$A property protection file, when set, restricts the Glance image properties to be created, read, updated and/or deleted by a specific set of users that are identified by either roles or policies. If this configuration option is not set, by default, property protections won’t be enforced. If a value is specified and the file is not found, the glance-api service will fail to start. More information on property protections can be found at: http://docs.openstack.org/developer/glance/property-protections.html$sentinal$Possible values: * Empty string * Valid path to the property protection configuration file$sentinal$Related options: * property_protection_rule_format |
property_protection_rule_format = roles |
(String) Rule format for property protection.$sentinal$Provide the desired way to set property protection on Glance image properties. The two permissible values are roles and policies . The default value is roles .$sentinal$If the value is roles , the property protection file must contain a comma separated list of user roles indicating permissions for each of the CRUD operations on each property being protected. If set to policies , a policy defined in policy.json is used to express property protections for each of the CRUD operations. Examples of how property protections are enforced based on roles or policies can be found at: http://docs.openstack.org/developer/glance/property-protections.html#examples$sentinal$Possible values: * roles * policies$sentinal$Related options: * property_protection_file |
show_image_direct_url = False |
(Boolean) Show direct image location when returning an image.$sentinal$This configuration option indicates whether to show the direct image location when returning image details to the user. The direct image location is where the image data is stored in backend storage. This image location is shown under the image property direct_url .$sentinal$When multiple image locations exist for an image, the best location is displayed based on the location strategy indicated by the configuration option location_strategy .$sentinal$NOTES: * Revealing image locations can present a GRAVE SECURITY RISK as image locations can sometimes include credentials. Hence, this is set to False by default. Set this to True with EXTREME CAUTION and ONLY IF you know what you are doing! * If an operator wishes to avoid showing any image location(s) to the user, then both this option and show_multiple_locations MUST be set to False .$sentinal$Possible values: * True * False$sentinal$Related options: * show_multiple_locations * location_strategy |
user_storage_quota = 0 |
(String) Maximum amount of image storage per tenant.$sentinal$This enforces an upper limit on the cumulative storage consumed by all images of a tenant across all stores. This is a per-tenant limit.$sentinal$The default unit for this configuration option is Bytes. However, storage units can be specified using case-sensitive literals B , KB , MB , GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. Note that there should not be any space between the value and unit. Value 0 signifies no quota enforcement. Negative values are invalid and result in errors.$sentinal$Possible values: * A string that is a valid concatenation of a non-negative integer representing the storage value and an optional string literal representing storage units as mentioned above.$sentinal$Related options: * None |
workers = None |
(Integer) Number of Glance worker processes to start.$sentinal$Provide a non-negative integer value to set the number of child process workers to service requests. By default, the number of CPUs available is set as the value for workers .$sentinal$Each worker process is made to listen on the port set in the configuration file and contains a greenthread pool of size 1000.$sentinal$NOTE: Setting the number of workers to zero, triggers the creation of a single API process with a greenthread pool of size 1000.$sentinal$Possible values: * 0 * Positive integer value (typically equal to the number of CPUs)$sentinal$Related options: * None |
[glance_store] | |
rootwrap_config = /etc/glance/rootwrap.conf |
(String) Path to the rootwrap configuration file to use for running commands as root.$sentinal$The cinder store requires root privileges to operate the image volumes (for connecting to iSCSI/FC volumes and reading/writing the volume data, etc.). The configuration file should allow the required commands by cinder store and os-brick library.$sentinal$Possible values: * Path to the rootwrap config file$sentinal$Related options: * None |
[image_format] | |
container_formats = ami, ari, aki, bare, ovf, ova, docker |
(List) Supported values for the ‘container_format’ image attribute |
disk_formats = ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso |
(List) Supported values for the ‘disk_format’ image attribute |
[task] | |
task_executor = taskflow |
(String) Task executor to be used to run task scripts.$sentinal$Provide a string value representing the executor to use for task executions. By default, the TaskFlow executor is used.$sentinal$TaskFlow helps make task executions easy, consistent, scalable and reliable. It also enables creation of lightweight task objects and/or functions that are combined together into flows in a declarative manner.$sentinal$Possible values: * taskflow$sentinal$Related Options: * None |
task_time_to_live = 48 |
(Integer) Time in hours for which a task lives after either succeeding or failing |
work_dir = /work_dir |
(String) Absolute path to the work directory to use for asynchronous task operations.$sentinal$The directory set here will be used to operate over images - normally before they are imported in the destination store.$sentinal$NOTE: When providing a value for work_dir , please make sure that enough space is provided for concurrent tasks to run efficiently without running out of space.$sentinal$A rough estimation can be done by multiplying the number of max_workers with an average image size (e.g 500MB). The image size estimation should be done based on the average size in your deployment. Note that depending on the tasks running you may need to multiply this number by some factor depending on what the task does. For example, you may want to double the available size if image conversion is enabled. All this being said, remember these are just estimations and you should do them based on the worst case scenario and be prepared to act in case they were wrong.$sentinal$Possible values: * String value representing the absolute path to the working directory$sentinal$Related Options: * None |
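As a quick orientation, the paging and quota options described above might be combined in glance-api.conf as follows. The values shown are illustrative, not recommendations:

```ini
[DEFAULT]
# Paging: the default page size must not exceed the absolute maximum.
limit_param_default = 25
api_limit_max = 1000

# Per-image quotas; negative values mean unlimited.
image_property_quota = 128
image_tag_quota = 128
image_member_quota = 128

# Per-tenant storage cap: a value plus an optional unit (B, KB, MB, GB, TB)
# with no space between them; 0 disables enforcement.
user_storage_quota = 500GB
```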
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
delayed_delete = False |
(Boolean) Turn on/off delayed delete.$sentinal$Typically when an image is deleted, the glance-api service puts the image into deleted state and deletes its data at the same time. Delayed delete is a feature in Glance that delays the actual deletion of image data until a later point in time (as determined by the configuration option scrub_time ). When delayed delete is turned on, the glance-api service puts the image into pending_delete state upon deletion and leaves the image data in the storage backend for the image scrubber to delete at a later time. The image scrubber will move the image into deleted state upon successful deletion of image data.$sentinal$NOTE: When delayed delete is turned on, image scrubber MUST be running as a periodic task to prevent the backend storage from filling up with undesired usage.$sentinal$Possible values: * True * False$sentinal$Related options: * scrub_time * wakeup_time * scrub_pool_size |
image_cache_dir = None |
(String) Base directory for image cache.$sentinal$This is the location where image data is cached and served out of. All cached images are stored directly under this directory. This directory also contains three subdirectories, namely, incomplete , invalid and queue .$sentinal$The incomplete subdirectory is the staging area for downloading images. An image is first downloaded to this directory. When the image download is successful it is moved to the base directory. However, if the download fails, the partially downloaded image file is moved to the invalid subdirectory.$sentinal$The queue subdirectory is used for queuing images for download. This is used primarily by the cache-prefetcher, which can be scheduled as a periodic task like cache-pruner and cache-cleaner, to cache images ahead of their usage. Upon receiving the request to cache an image, Glance touches a file in the queue directory with the image id as the file name. The cache-prefetcher, when running, polls for the files in queue directory and starts downloading them in the order they were created. When the download is successful, the zero-sized file is deleted from the queue directory. If the download fails, the zero-sized file remains and it’ll be retried the next time cache-prefetcher runs.$sentinal$Possible values: * A valid path$sentinal$Related options: * image_cache_sqlite_db |
image_cache_driver = sqlite |
(String) The driver to use for image cache management.$sentinal$This configuration option provides the flexibility to choose between the different image-cache drivers available. An image-cache driver is responsible for providing the essential functions of image-cache like write images to/read images from cache, track age and usage of cached images, provide a list of cached images, fetch size of the cache, queue images for caching and clean up the cache, etc.$sentinal$The essential functions of a driver are defined in the base class glance.image_cache.drivers.base.Driver . All image-cache drivers (existing and prospective) must implement this interface. Currently available drivers are sqlite and xattr . These drivers primarily differ in the way they store the information about cached images: * The sqlite driver uses a sqlite database (which sits on every glance node locally) to track the usage of cached images. * The xattr driver uses the extended attributes of files to store this information. It also requires a filesystem that sets atime on the files when accessed.$sentinal$Possible values: * sqlite * xattr$sentinal$Related options: * None |
image_cache_max_size = 10737418240 |
(Integer) The upper limit on cache size, in bytes, after which the cache-pruner cleans up the image cache.$sentinal$NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a hard limit beyond which the image cache would never grow. In fact, depending on how often the cache-pruner runs and how quickly the cache fills, the image cache can far exceed the size specified here very easily. Hence, care must be taken to appropriately schedule the cache-pruner and in setting this limit.$sentinal$Glance caches an image when it is downloaded. Consequently, the size of the image cache grows over time as the number of downloads increases. To keep the cache size from becoming unmanageable, it is recommended to run the cache-pruner as a periodic task. When the cache pruner is kicked off, it compares the current size of image cache and triggers a cleanup if the image cache grew beyond the size specified here. After the cleanup, the size of cache is less than or equal to size specified here.$sentinal$Possible values: * Any non-negative integer$sentinal$Related options: * None |
image_cache_sqlite_db = cache.db |
(String) The relative path to sqlite file database that will be used for image cache management.$sentinal$This is a relative path to the sqlite file database that tracks the age and usage statistics of image cache. The path is relative to image cache base directory, specified by the configuration option image_cache_dir .$sentinal$This is a lightweight database with just one table.$sentinal$Possible values: * A valid relative path to sqlite file database$sentinal$Related options: * image_cache_dir |
image_cache_stall_time = 86400 |
(Integer) The amount of time, in seconds, an incomplete image remains in the cache.$sentinal$Incomplete images are images for which download is in progress. Please see the description of configuration option image_cache_dir for more detail. Sometimes, due to various reasons, it is possible the download may hang and the incompletely downloaded image remains in the incomplete directory. This configuration option sets a time limit on how long the incomplete images should remain in the incomplete directory before they are cleaned up. Once an incomplete image spends more time than is specified here, it’ll be removed by cache-cleaner on its next run.$sentinal$It is recommended to run cache-cleaner as a periodic task on the Glance API nodes to keep the incomplete images from occupying disk space.$sentinal$Possible values: * Any non-negative integer$sentinal$Related options: * None |
scrub_pool_size = 1 |
(Integer) The size of thread pool to be used for scrubbing images.$sentinal$When there are a large number of images to scrub, it is beneficial to scrub images in parallel so that the scrub queue stays in control and the backend storage is reclaimed in a timely fashion. This configuration option denotes the maximum number of images to be scrubbed in parallel. The default value is one, which signifies serial scrubbing. Any value above one indicates parallel scrubbing.$sentinal$Possible values: * Any non-zero positive integer$sentinal$Related options: * delayed_delete |
scrub_time = 0 |
(Integer) The amount of time, in seconds, to delay image scrubbing.$sentinal$When delayed delete is turned on, an image is put into pending_delete state upon deletion until the scrubber deletes its image data. Typically, soon after the image is put into pending_delete state, it is available for scrubbing. However, scrubbing can be delayed until a later point using this configuration option. This option denotes the time period an image spends in pending_delete state before it is available for scrubbing.$sentinal$It is important to realize that this has storage implications. The larger the scrub_time , the longer the time to reclaim backend storage from deleted images.$sentinal$Possible values: * Any non-negative integer$sentinal$Related options: * delayed_delete |
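A typical delayed-delete setup combining the options above might look like this sketch. Remember that the image scrubber must run as a periodic task whenever delayed_delete is on; the values are illustrative:

```ini
[DEFAULT]
# Put deleted images into pending_delete instead of removing data at once.
delayed_delete = True
# Keep images in pending_delete for one day (in seconds) before scrubbing.
scrub_time = 86400
# Scrub up to four images in parallel.
scrub_pool_size = 4
```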
Configuration option = Default value | Description |
---|---|
[profiler] | |
connection_string = messaging:// |
(String) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging.$sentinal$Examples of possible values:$sentinal$* messaging://: use oslo_messaging driver for sending notifications. |
enabled = False |
(Boolean) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature).$sentinal$Possible values:$sentinal$* True: Enables the feature$sentinal$* False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project's part will be empty. |
hmac_keys = SECRET_KEY |
(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project.$sentinal$Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
trace_sqlalchemy = False |
(Boolean) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced).$sentinal$Possible values:$sentinal$* True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed for how much time was spent on it.$sentinal$* False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. |
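Putting the profiler options together, a minimal fragment enabling profiling with SQL tracing might look like this. SECRET_KEY is a placeholder; use your own random key, and keep at least one key consistent across services so traces join up:

```ini
[profiler]
enabled = True
trace_sqlalchemy = True
# At least one key must match across OpenStack services for a complete trace.
hmac_keys = SECRET_KEY
# Send profiling notifications via oslo_messaging (the default).
connection_string = messaging://
```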
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
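For the non-deprecated options above, a fragment might look like this sketch. The host, port, and password options are deprecated in favor of [DEFAULT]/transport_url, so only the timing and sentinel settings are shown; values are illustrative:

```ini
[matchmaker_redis]
# All timeouts are in milliseconds.
check_timeout = 20000
socket_timeout = 10000
wait_timeout = 2000
# Redis replica set name used in Sentinel (fault tolerance) mode.
sentinel_group_name = oslo-messaging-zeromq
```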
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
admin_password = None | (String) DEPRECATED: The administrator's password. If "use_user_token" is not in effect, then admin credentials can be specified. This option was considered harmful and has been deprecated in the M release. It will be removed in the O release. For more information, read OSSN-0060. Related functionality for uploading big images has been implemented with Keystone trusts support. |
admin_tenant_name = None | (String) DEPRECATED: The tenant name of the administrative user. If "use_user_token" is not in effect, then the admin tenant name can be specified. This option was considered harmful and has been deprecated in the M release. It will be removed in the O release. For more information, read OSSN-0060. Related functionality for uploading big images has been implemented with Keystone trusts support. |
admin_user = None | (String) DEPRECATED: The administrator's user name. If "use_user_token" is not in effect, then admin credentials can be specified. This option was considered harmful and has been deprecated in the M release. It will be removed in the O release. For more information, read OSSN-0060. Related functionality for uploading big images has been implemented with Keystone trusts support. |
auth_region = None | (String) DEPRECATED: The region for the authentication service. If "use_user_token" is not in effect and keystone auth is used, then the region name can be specified. This option was considered harmful and has been deprecated in the M release. It will be removed in the O release. For more information, read OSSN-0060. Related functionality for uploading big images has been implemented with Keystone trusts support. |
auth_strategy = noauth | (String) DEPRECATED: The strategy to use for authentication. If "use_user_token" is not in effect, then the auth strategy can be specified. This option was considered harmful and has been deprecated in the M release. It will be removed in the O release. For more information, read OSSN-0060. Related functionality for uploading big images has been implemented with Keystone trusts support. |
auth_url = None | (String) DEPRECATED: The URL to the keystone service. If "use_user_token" is not in effect and keystone auth is used, then the URL of keystone can be specified. This option was considered harmful and has been deprecated in the M release. It will be removed in the O release. For more information, read OSSN-0060. Related functionality for uploading big images has been implemented with Keystone trusts support. |
registry_client_ca_file = /etc/ssl/cafile/file.ca | (String) Absolute path to the Certificate Authority file. Provide a string value representing a valid absolute path to the certificate authority file to use for establishing a secure connection to the registry server. NOTE: This option must be set if registry_client_protocol is set to https. Alternatively, the GLANCE_CLIENT_CA_FILE environment variable may be set to a filepath of the CA file. This option is ignored if the registry_client_insecure option is set to True. Possible values: * A valid absolute path to the CA file. Related options: * registry_client_protocol * registry_client_insecure |
registry_client_cert_file = /etc/ssl/certs/file.crt | (String) Absolute path to the certificate file. Provide a string value representing a valid absolute path to the certificate file to use for establishing a secure connection to the registry server. NOTE: This option must be set if registry_client_protocol is set to https. Alternatively, the GLANCE_CLIENT_CERT_FILE environment variable may be set to a filepath of the certificate file. Possible values: * A valid absolute path to the certificate file. Related options: * registry_client_protocol |
registry_client_insecure = False | (Boolean) Set verification of the registry server certificate. Provide a boolean value to determine whether or not to validate SSL connections to the registry server. By default, this option is set to False and SSL connections are validated. If set to True, the connection to the registry server is not validated via a certifying authority and the registry_client_ca_file option is ignored. This is the registry's equivalent of specifying --insecure on the command line using glanceclient for the API. Possible values: * True * False. Related options: * registry_client_protocol * registry_client_ca_file |
registry_client_key_file = /etc/ssl/key/key-file.pem | (String) Absolute path to the private key file. Provide a string value representing a valid absolute path to the private key file to use for establishing a secure connection to the registry server. NOTE: This option must be set if registry_client_protocol is set to https. Alternatively, the GLANCE_CLIENT_KEY_FILE environment variable may be set to a filepath of the key file. Possible values: * A valid absolute path to the key file. Related options: * registry_client_protocol |
registry_client_protocol = http | (String) Protocol to use for communication with the registry server. Provide a string value representing the protocol to use for communication with the registry server. By default, this option is set to http and the connection is not secure. This option can be set to https to establish a secure connection to the registry server. In that case, provide a key to use for the SSL connection using the registry_client_key_file option, and include the CA file and cert file using the registry_client_ca_file and registry_client_cert_file options respectively. Possible values: * http * https. Related options: * registry_client_key_file * registry_client_cert_file * registry_client_ca_file |
registry_client_timeout = 600 | (Integer) Timeout value for registry requests. Provide an integer value representing the period of time, in seconds, that the API server will wait for a registry request to complete. The default value is 600 seconds. A value of 0 implies that a request will never time out. Possible values: * Zero * Positive integer. Related options: * None |
registry_host = 0.0.0.0 | (String) Address the registry server is hosted on. Possible values: * A valid IP address or hostname. Related options: * None |
registry_port = 9191 | (Port number) Port the registry server is listening on. Possible values: * A valid port number. Related options: * None |
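Putting the registry client options together, a glance-api.conf fragment for connecting to the registry server over TLS might look like the following sketch. The hostname and certificate paths are illustrative; substitute your own:

```ini
[DEFAULT]
registry_host = registry.example.com
registry_port = 9191
# Switch to https and supply the key, cert, and CA bundle.
registry_client_protocol = https
registry_client_key_file = /etc/ssl/key/key-file.pem
registry_client_cert_file = /etc/ssl/certs/file.crt
registry_client_ca_file = /etc/ssl/cafile/file.ca
registry_client_timeout = 600
```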
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
args = None | (Multi-valued) Arguments for the command. |
chunksize = 65536 | (Integer) Amount of data to transfer per HTTP write. |
command = None | (String) Command to be given to the replicator. |
dontreplicate = created_at date deleted_at location updated_at | (String) List of fields not to replicate. |
mastertoken = | (String) Pass in your authentication token if you have one. This is the token used for the master. |
metaonly = False | (Boolean) Only replicate metadata, not images. |
slavetoken = | (String) Pass in your authentication token if you have one. This is the token used for the slave. |
token = | (String) Pass in your authentication token if you have one. If you use this option, the same token is used for both the master and the slave. |
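These options map onto the glance-replicator command line. A livecopy invocation might look like the following sketch; the hostnames, ports, and tokens are placeholders, and the exact flag spellings should be confirmed against `glance-replicator --help` for your release:

```console
$ glance-replicator livecopy master.example.com:9292 slave.example.com:9292 \
    --mastertoken MASTER_TOKEN --slavetoken SLAVE_TOKEN --metaonly
```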
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
wakeup_time = 300 | (Integer) Time interval, in seconds, between scrubber runs in daemon mode. The scrubber can be run either as a cron job or as a daemon. When run as a daemon, this option specifies the time period between two runs. When the scrubber wakes up, it fetches and scrubs all pending_delete images that are available for scrubbing after taking scrub_time into consideration. If the wakeup time is set to a large number, there may be a large number of images to be scrubbed on each run. This also affects how quickly the backend storage is reclaimed. Possible values: * Any non-negative integer. Related options: * daemon * delayed_delete |
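A sketch of a scrubber configuration in daemon mode, combining wakeup_time with the related daemon and delayed_delete options mentioned above (the scrub_time value is illustrative):

```ini
[DEFAULT]
# Defer deletion and let the scrubber reclaim storage.
delayed_delete = True
# Images become eligible for scrubbing this many seconds
# after entering pending_delete.
scrub_time = 43200
# Run as a daemon, waking every 5 minutes.
daemon = True
wakeup_time = 300
```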
Configuration option = Default value | Description |
---|---|
[taskflow_executor] | |
conversion_format = raw | (String) Set the desired image conversion format. Provide a valid image format to which you want images to be converted before they are stored for consumption by Glance. Appropriate image format conversions are desirable for specific storage backends in order to facilitate efficient handling of bandwidth and usage of the storage infrastructure. By default, conversion_format is not set and must be set explicitly in the configuration file. The allowed values for this option are raw, qcow2 and vmdk. The raw format is the unstructured disk format and should be chosen when RBD or Ceph storage backends are used for image storage. qcow2 is supported by the QEMU emulator; it expands dynamically and supports copy-on-write. vmdk is another common disk format, supported by many common virtual machine monitors like VMware Workstation. Possible values: * qcow2 * raw * vmdk. Related options: * disk_formats |
engine_mode = parallel | (String) Set the taskflow engine mode. Provide a string value to set the mode in which the taskflow engine schedules tasks to the workers on the hosts. Based on this mode, the engine executes tasks either in a single thread or in multiple threads. The possible values for this option are serial and parallel. When set to serial, the engine runs all tasks in a single thread, resulting in serial execution of tasks. Setting this to parallel makes the engine run tasks in multiple threads, resulting in parallel execution of tasks. Possible values: * serial * parallel. Related options: * max_workers |
max_workers = 10 | (Integer) Set the number of engine executable tasks. Provide an integer value to limit the number of workers that can be instantiated on the hosts. In other words, this number defines the number of parallel tasks that can be executed at the same time by the taskflow engine. This value can be greater than one when the engine mode is set to parallel. Possible values: * Integer value greater than or equal to 1. Related options: * engine_mode |
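A minimal sketch of the taskflow executor section combining the three options above; qcow2 is chosen here only as an example conversion target:

```ini
[taskflow_executor]
# Run tasks in multiple threads, up to 10 at a time.
engine_mode = parallel
max_workers = 10
# Convert incoming images to qcow2 before storing them.
conversion_format = qcow2
```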
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
pydev_worker_debug_host = localhost | (String) Host address of the pydev server. Provide a string value representing the hostname or IP of the pydev server to use for debugging. The pydev server listens for debug connections on this address, facilitating remote debugging in Glance. Possible values: * Valid hostname * Valid IP address. Related options: * None |
pydev_worker_debug_port = 5678 | (Port number) Port number that the pydev server will listen on. Provide a port number to bind the pydev server to. The pydev process accepts debug connections on this port and facilitates remote debugging in Glance. Possible values: * A valid port number. Related options: * None |
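To point Glance workers at a remote pydev debug server, both options are set together; the address below is an illustrative placeholder for the workstation running pydev:

```ini
[DEFAULT]
# Workstation running the pydev debug server (placeholder address).
pydev_worker_debug_host = 192.0.2.10
pydev_worker_debug_port = 5678
```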
The corresponding log file of each Image service is stored in the
/var/log/glance/
directory of the host on which each service runs.
Log filename | Service that logs to the file |
---|---|
api.log | Image service API server |
registry.log | Image service Registry server |
You can find the files that are described in this section in the
/etc/glance/
directory.
The configuration file for the Image service API is found in the
glance-api.conf
file.
This file must be modified after installation.
[DEFAULT]
#
# From glance.api
#
#
# Set the image owner to tenant or the authenticated user.
#
# Assign a boolean value to determine the owner of an image. When set to
# True, the owner of the image is the tenant. When set to False, the
# owner of the image will be the authenticated user issuing the request.
# Setting it to False makes the image private to the associated user and
# sharing with other users within the same tenant (or "project")
# requires explicit image sharing via image membership.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#owner_is_tenant = true
#
# Role used to identify an authenticated user as administrator.
#
# Provide a string value representing a Keystone role to identify an
# administrative user. Users with this role will be granted
# administrative privileges. The default value for this option is
# 'admin'.
#
# Possible values:
# * A string value which is a valid Keystone role
#
# Related options:
# * None
#
# (string value)
#admin_role = admin
#
# Allow limited access to unauthenticated users.
#
# Assign a boolean to determine API access for unauthenticated
# users. When set to False, the API cannot be accessed by
# unauthenticated users. When set to True, unauthenticated users can
# access the API with read-only privileges. This however only applies
# when using ContextMiddleware.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#allow_anonymous_access = false
#
# Limit the request ID length.
#
# Provide an integer value to limit the length of the request ID to
# the specified length. The default value is 64. Users can change this
# to any integer value between 0 and 16384, keeping in mind that
# a larger value may flood the logs.
#
# Possible values:
# * Integer value between 0 and 16384
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#max_request_id_length = 64
#
# Public url endpoint to use for Glance/Glare versions response.
#
# This is the public url endpoint that will appear in the Glance/Glare
# "versions" response. If no value is specified, the endpoint that is
# displayed in the version's response is that of the host running the
# API service. Change the endpoint to represent the proxy URL if the
# API service is running behind a proxy. If the service is running
# behind a load balancer, add the load balancer's URL for this value.
#
# Possible values:
# * None
# * Proxy URL
# * Load balancer URL
#
# Related options:
# * None
#
# (string value)
#public_endpoint = <None>
#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * image_property_quota
#
# (boolean value)
#allow_additional_image_properties = true
#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any negative
# value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_member_quota = 128
#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
# * ``allow_additional_image_properties``
#
# (integer value)
#image_property_quota = 128
#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_tag_quota = 128
#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_location_quota = 10
#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
# * glance.db.sqlalchemy.api
# * glance.db.registry.api
# * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
# * enable_v2_api
# * enable_v2_registry
#
# (string value)
#data_api = glance.db.sqlalchemy.api
#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
# * The value of this configuration option may not be greater than
# the value specified by ``api_limit_max``.
# * Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * api_limit_max
#
# (integer value)
# Minimum value: 1
#limit_param_default = 25
#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned are governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value in either case, can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * limit_param_default
#
# (integer value)
# Minimum value: 1
#api_limit_max = 1000
#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_multiple_locations
# * location_strategy
#
# (boolean value)
#show_image_direct_url = false
# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration opt
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_image_direct_url
# * location_strategy
#
# (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false
#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
# * This value should only be increased after careful
# consideration and must be set less than or equal to
# 8 EiB (9223372036854775808).
# * This value must be set with careful consideration of the
# backend storage capacity. Setting this to a very low value
# may result in a large number of image failures. And, setting
# this to a very large value may result in faster consumption
# of storage. Hence, this must be set according to the nature of
# images created and storage capacity available.
#
# Possible values:
# * Any positive number less than or equal to 9223372036854775808
#
# (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776
#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
# * A string that is a valid concatenation of a non-negative integer
# representing the storage value and an optional string literal
# representing storage units as mentioned above.
#
# Related options:
# * None
#
# (string value)
#user_storage_quota = 0
#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
# * If this option is enabled, then ``enable_v1_registry`` must
# also be set to ``True`` to enable mandatory usage of Registry
# service with v1 API.
#
# * If this option is disabled, then the ``enable_v1_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v2_api``, both v1 and v2
# OpenStack Images API can be deployed independent of each
# other.
#
# * If deploying only the v2 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_registry
# * enable_v2_api
#
# (boolean value)
#enable_v1_api = true
#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
# * If this option is disabled, then the ``enable_v2_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v1_api``, both v1 and v2
# OpenStack Images API can be deployed independent of each
# other.
#
# * If deploying only the v1 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_registry
# * enable_v1_api
#
# (boolean value)
#enable_v2_api = true
#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
# * Use of Registry is mandatory in v1 API, so this option must
# be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
# * If deploying only the v2 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_api
#
# (boolean value)
#enable_v1_registry = true
#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
# * Use of Registry is optional in v2 API, so this option
# must only be enabled if both ``enable_v2_api`` is set to
# ``True`` and the ``data_api`` option is set to
# ``glance.db.registry.api``.
#
# * If deploying only the v1 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_api
# * data_api
#
# (boolean value)
#enable_v2_registry = true
#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
# * Valid hostname
# * Valid IP address
#
# Related options:
# * None
#
# (string value)
#pydev_worker_debug_host = localhost
#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678
#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES cipher to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
# * String value representing a valid AES key
#
# Related options:
# * None
#
# (string value)
#metadata_encryption_key = <None>
#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
# * An OpenSSL message digest algorithm identifier
#
# Related options:
# * None
#
# (string value)
#digest_algorithm = sha256
#
# Strategy to determine the preference order of image locations.
#
# This configuration option indicates the strategy to determine
# the order in which an image's locations must be accessed to
# serve the image's data. Glance then retrieves the image data
# from the first responsive active location it finds in this list.
#
# This option takes one of two possible values ``location_order``
# and ``store_type``. The default value is ``location_order``,
# which suggests that image data be served by using locations in
# the order they are stored in Glance. The ``store_type`` value
# sets the image location preference based on the order in which
# the storage backends are listed as a comma separated list for
# the configuration option ``store_type_preference``.
#
# Possible values:
# * location_order
# * store_type
#
# Related options:
# * store_type_preference
#
# (string value)
# Allowed values: location_order, store_type
#location_strategy = location_order
#
# The location of the property protection file.
#
# Provide a valid path to the property protection file which contains
# the rules for property protections and the roles/policies associated
# with them.
#
# A property protection file, when set, restricts the Glance image
# properties to be created, read, updated and/or deleted by a specific
# set of users that are identified by either roles or policies.
# If this configuration option is not set, by default, property
# protections won't be enforced. If a value is specified and the file
# is not found, the glance-api service will fail to start.
# More information on property protections can be found at:
# http://docs.openstack.org/developer/glance/property-protections.html
#
# Possible values:
# * Empty string
# * Valid path to the property protection configuration file
#
# Related options:
# * property_protection_rule_format
#
# (string value)
#property_protection_file = <None>
#
# Rule format for property protection.
#
# Provide the desired way to set property protection on Glance
# image properties. The two permissible values are ``roles``
# and ``policies``. The default value is ``roles``.
#
# If the value is ``roles``, the property protection file must
# contain a comma separated list of user roles indicating
# permissions for each of the CRUD operations on each property
# being protected. If set to ``policies``, a policy defined in
# policy.json is used to express property protections for each
# of the CRUD operations. Examples of how property protections
# are enforced based on ``roles`` or ``policies`` can be found at:
# http://docs.openstack.org/developer/glance/property-protections.html#examples
#
# Possible values:
# * roles
# * policies
#
# Related options:
# * property_protection_file
#
# (string value)
# Allowed values: roles, policies
#property_protection_rule_format = roles
#
# List of allowed exception modules to handle RPC exceptions.
#
# Provide a comma separated list of modules whose exceptions are
# permitted to be recreated upon receiving exception data via an RPC
# call made to Glance. The default list includes
# ``glance.common.exception``, ``builtins``, and ``exceptions``.
#
# The RPC protocol permits interaction with Glance via calls across a
# network or within the same system. Including a list of exception
# namespaces with this option enables RPC to propagate the exceptions
# back to the users.
#
# Possible values:
# * A comma separated list of valid exception modules
#
# Related options:
# * None
# (list value)
#allowed_rpc_exception_modules = glance.common.exception,builtins,exceptions
#
# IP address to bind the glance servers to.
#
# Provide an IP address to bind the glance server to. The default
# value is ``0.0.0.0``.
#
# Edit this option to enable the server to listen on one particular
# IP address on the network card. This facilitates selection of a
# particular network interface for the server.
#
# Possible values:
# * A valid IPv4 address
# * A valid IPv6 address
#
# Related options:
# * None
#
# (string value)
#bind_host = 0.0.0.0
#
# Port number on which the server will listen.
#
# Provide a valid port number to bind the server's socket to. This
# port is then used to identify the process and to forward network
# messages that arrive at the server. The default bind_port value for
# the API server is 9292 and for the registry server is 9191.
#
# Possible values:
# * A valid port number (0 to 65535)
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#bind_port = <None>
#
# Number of Glance worker processes to start.
#
# Provide a non-negative integer value to set the number of child
# process workers to service requests. By default, the number of CPUs
# available is set as the value for ``workers``.
#
# Each worker process is made to listen on the port set in the
# configuration file and contains a greenthread pool of size 1000.
#
# NOTE: Setting the number of workers to zero triggers the creation
# of a single API process with a greenthread pool of size 1000.
#
# Possible values:
# * 0
# * Positive integer value (typically equal to the number of CPUs)
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#workers = <None>
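#
# For example, to run four worker processes regardless of the number of
# CPUs available (an illustrative value, not a recommendation):
#
# workers = 4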
#
# Maximum line size of message headers.
#
# Provide an integer value representing a length to limit the size of
# message headers. The default value is 16384.
#
# NOTE: ``max_header_line`` may need to be increased when using large
# tokens (typically those generated by the Keystone v3 API with big
# service catalogs). However, keep in mind that larger values for
# ``max_header_line`` may flood the logs.
#
# Setting ``max_header_line`` to 0 sets no limit for the line size of
# message headers.
#
# Possible values:
# * 0
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#max_header_line = 16384
#
# Set keep alive option for HTTP over TCP.
#
# Provide a boolean value to determine whether to send keep alive
# packets. If set to ``False``, the server returns the header
# "Connection: close". If set to ``True``, the server returns a
# "Connection: Keep-Alive" header in its responses. This enables
# retention of the same TCP connection for HTTP conversations instead
# of opening a new one with each new request.
#
# This option must be set to ``False`` if the client socket connection
# needs to be closed explicitly after the response is received and
# read successfully by the client.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#http_keepalive = true
#
# Timeout for client connections' socket operations.
#
# Provide a valid integer value representing time in seconds to set
# the period the server waits before closing an idle incoming
# connection. The default value is 900 seconds.
#
# The value zero implies wait forever.
#
# Possible values:
# * Zero
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#client_socket_timeout = 900
#
# Set the number of incoming connection requests.
#
# Provide a positive integer value to limit the number of requests in
# the backlog queue. The default queue size is 4096.
#
# An incoming connection to a TCP listener socket is queued before a
# connection can be established with the server. Setting the backlog
# for a TCP socket ensures a limited queue size for incoming traffic.
#
# Possible values:
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#backlog = 4096
#
# Set the wait time before a connection recheck.
#
# Provide a positive integer value representing time in seconds which
# is set as the idle wait time before a TCP keep alive packet can be
# sent to the host. The default value is 600 seconds.
#
# Setting ``tcp_keepidle`` helps verify at regular intervals that a
# connection is intact and prevents frequent TCP connection
# reestablishment.
#
# Possible values:
# * Positive integer value representing time in seconds
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#tcp_keepidle = 600
#
# Absolute path to the CA file.
#
# Provide a string value representing a valid absolute path to
# the Certificate Authority file to use for client authentication.
#
# A CA file typically contains the trusted certificates necessary
# for client authentication. This is essential to ensure that a
# secure connection is established to the server.
#
# Possible values:
# * Valid absolute path to the CA file
#
# Related options:
# * None
#
# (string value)
#ca_file = /etc/ssl/cafile
#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file which is required to start the API service
# securely.
#
# A certificate file is typically a public key container and includes
# the server's public key, server name, other server information, and
# the signature produced when the certificate was signed by the CA.
# This is required to establish a secure connection.
#
# Possible values:
# * Valid absolute path to the certificate file
#
# Related options:
# * None
#
# (string value)
#cert_file = /etc/ssl/certs
#
# Absolute path to a private key file.
#
# Provide a string value representing a valid absolute path to a
# private key file which is required to establish the client-server
# connection.
#
# Possible values:
# * Absolute path to the private key file
#
# Related options:
# * None
#
# (string value)
#key_file = /etc/ssl/key/key-file.pem
# DEPRECATED: The HTTP header used to determine the scheme for the original
# request, even if it was removed by an SSL terminating proxy. Typical value is
# "HTTP_X_FORWARDED_PROTO". (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use the http_proxy_to_wsgi middleware instead.
#secure_proxy_ssl_header = <None>
#
# The relative path to sqlite file database that will be used for image cache
# management.
#
# This is a relative path to the sqlite file database that tracks the age and
# usage statistics of image cache. The path is relative to image cache base
# directory, specified by the configuration option ``image_cache_dir``.
#
# This is a lightweight database with just one table.
#
# Possible values:
# * A valid relative path to sqlite file database
#
# Related options:
# * ``image_cache_dir``
#
# (string value)
#image_cache_sqlite_db = cache.db
#
# The driver to use for image cache management.
#
# This configuration option provides the flexibility to choose between the
# different image-cache drivers available. An image-cache driver is responsible
# for providing the essential functions of image-cache like write images to/read
# images from cache, track age and usage of cached images, provide a list of
# cached images, fetch size of the cache, queue images for caching and clean up
# the cache, etc.
#
# The essential functions of a driver are defined in the base class
# ``glance.image_cache.drivers.base.Driver``. All image-cache drivers (existing
# and prospective) must implement this interface. Currently available drivers
# are ``sqlite`` and ``xattr``. These drivers primarily differ in the way they
# store the information about cached images:
# * The ``sqlite`` driver uses a sqlite database (which sits on every glance
# node locally) to track the usage of cached images.
# * The ``xattr`` driver uses the extended attributes of files to store this
# information. It also requires a filesystem that sets ``atime`` on the
# files when accessed.
#
# Possible values:
# * sqlite
# * xattr
#
# Related options:
# * None
#
# (string value)
# Allowed values: sqlite, xattr
#image_cache_driver = sqlite
#
# The upper limit on cache size, in bytes, after which the cache-pruner cleans
# up the image cache.
#
# NOTE: This is just a threshold for the cache-pruner to act upon. It is NOT a
# hard limit beyond which the image cache would never grow. In fact, depending
# on how often the cache-pruner runs and how quickly the cache fills, the image
# cache can easily far exceed the size specified here. Hence, care must be
# taken in scheduling the cache-pruner and in setting this limit.
#
# Glance caches an image when it is downloaded. Consequently, the size of the
# image cache grows over time as the number of downloads increases. To keep the
# cache size from becoming unmanageable, it is recommended to run the
# cache-pruner as a periodic task. When the cache pruner is kicked off, it
# compares the current size of image cache and triggers a cleanup if the image
# cache grew beyond the size specified here. After the cleanup, the size of
# cache is less than or equal to size specified here.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#image_cache_max_size = 10737418240
#
# The amount of time, in seconds, an incomplete image remains in the cache.
#
# Incomplete images are images for which download is in progress. Please see the
# description of configuration option ``image_cache_dir`` for more detail.
# Sometimes, due to various reasons, it is possible the download may hang and
# the incompletely downloaded image remains in the ``incomplete`` directory.
# This configuration option sets a time limit on how long the incomplete images
# should remain in the ``incomplete`` directory before they are cleaned up.
# Once an incomplete image spends more time than is specified here, it
# will be removed by the cache-cleaner on its next run.
#
# It is recommended to run cache-cleaner as a periodic task on the Glance API
# nodes to keep the incomplete images from occupying disk space.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#image_cache_stall_time = 86400
#
# Base directory for image cache.
#
# This is the location where image data is cached and served out of. All cached
# images are stored directly under this directory. This directory also contains
# three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``.
#
# The ``incomplete`` subdirectory is the staging area for downloading images. An
# image is first downloaded to this directory. When the image download is
# successful it is moved to the base directory. However, if the download fails,
# the partially downloaded image file is moved to the ``invalid`` subdirectory.
#
# The ``queue`` subdirectory is used for queuing images for download. This is
# used primarily by the cache-prefetcher, which can be scheduled as a periodic
# task like cache-pruner and cache-cleaner, to cache images ahead of their
# usage.
# Upon receiving the request to cache an image, Glance touches a file in the
# ``queue`` directory with the image id as the file name. The cache-prefetcher,
# when running, polls for the files in ``queue`` directory and starts
# downloading them in the order they were created. When the download is
# successful, the zero-sized file is deleted from the ``queue`` directory.
# If the download fails, the zero-sized file remains and the download
# will be retried the next time the cache-prefetcher runs.
#
# Possible values:
# * A valid path
#
# Related options:
# * ``image_cache_sqlite_db``
#
# (string value)
#image_cache_dir = <None>
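#
# A minimal illustrative caching setup combining the options above (the
# directory path is an example value):
#
# image_cache_dir = /var/lib/glance/image-cache
# image_cache_driver = sqlite
# image_cache_sqlite_db = cache.db
# image_cache_max_size = 10737418240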
#
# Default publisher_id for outgoing Glance notifications.
#
# This is the value that the notification driver will use to identify
# messages for events originating from the Glance service. Typically,
# this is the hostname of the instance that generated the message.
#
# Possible values:
# * Any reasonable instance identifier, for example: image.host1
#
# Related options:
# * None
#
# (string value)
#default_publisher_id = image.localhost
#
# List of notifications to be disabled.
#
# Specify a list of notifications that should not be emitted.
# A notification can be given either as a notification type to
# disable a single event notification, or as a notification group
# prefix to disable all event notifications within a group.
#
# Possible values:
# A comma-separated list of individual notification types or
# notification groups to be disabled. Currently supported groups:
# * image
# * image.member
# * task
# * metadef_namespace
# * metadef_object
# * metadef_property
# * metadef_resource_type
# * metadef_tag
# For a complete listing and description of each event refer to:
# http://docs.openstack.org/developer/glance/notifications.html
#
# The values must be specified as: <group_name>.<event_name>
# For example: image.create,task.success,metadef_tag
#
# Related options:
# * None
#
# (list value)
#disabled_notifications =
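#
# For example, to disable a single event notification and a whole
# notification group (an illustrative selection):
#
# disabled_notifications = image.create,metadef_tag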
#
# Address the registry server is hosted on.
#
# Possible values:
# * A valid IP or hostname
#
# Related options:
# * None
#
# (string value)
#registry_host = 0.0.0.0
#
# Port the registry server is listening on.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#registry_port = 9191
# DEPRECATED: Whether to pass through the user token when making requests to the
# registry. To prevent failures with token expiration during big file uploads,
# it is recommended to set this parameter to False. If "use_user_token" is not
# in effect, then admin credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#use_user_token = true
# DEPRECATED: The administrator's user name. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_user = <None>
# DEPRECATED: The administrator's password. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_password = <None>
# DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
# not in effect, then admin tenant name can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_tenant_name = <None>
# DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
# effect and using keystone auth, then URL of keystone can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_url = <None>
# DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
# in effect, then auth strategy can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_strategy = noauth
# DEPRECATED: The region for the authentication service. If "use_user_token" is
# not in effect and using keystone auth, then region name can be specified.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_region = <None>
#
# Protocol to use for communication with the registry server.
#
# Provide a string value representing the protocol to use for
# communication with the registry server. By default, this option is
# set to ``http`` and the connection is not secure.
#
# This option can be set to ``https`` to establish a secure connection
# to the registry server. In this case, provide a key to use for the
# SSL connection using the ``registry_client_key_file`` option. Also
# include the CA file and cert file using the options
# ``registry_client_ca_file`` and ``registry_client_cert_file``
# respectively.
#
# Possible values:
# * http
# * https
#
# Related options:
# * registry_client_key_file
# * registry_client_cert_file
# * registry_client_ca_file
#
# (string value)
# Allowed values: http, https
#registry_client_protocol = http
#
# Absolute path to the private key file.
#
# Provide a string value representing a valid absolute path to the
# private key file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
# environment variable may be set to a filepath of the key file.
#
# Possible values:
# * String value representing a valid absolute path to the key
# file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
#registry_client_key_file = /etc/ssl/key/key-file.pem
#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
# environment variable may be set to a filepath of the certificate
# file.
#
# Possible values:
# * String value representing a valid absolute path to the
# certificate file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
#registry_client_cert_file = /etc/ssl/certs/file.crt
#
# Absolute path to the Certificate Authority file.
#
# Provide a string value representing a valid absolute path to the
# certificate authority file to use for establishing a secure
# connection to the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
# environment variable may be set to a filepath of the CA file.
# This option is ignored if the ``registry_client_insecure`` option
# is set to ``True``.
#
# Possible values:
# * String value representing a valid absolute path to the CA
# file.
#
# Related options:
# * registry_client_protocol
# * registry_client_insecure
#
# (string value)
#registry_client_ca_file = /etc/ssl/cafile/file.ca
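#
# An illustrative secure registry client setup combining the options
# above (all file paths are example values):
#
# registry_client_protocol = https
# registry_client_key_file = /etc/ssl/key/key-file.pem
# registry_client_cert_file = /etc/ssl/certs/file.crt
# registry_client_ca_file = /etc/ssl/cafile/file.ca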
#
# Set verification of the registry server certificate.
#
# Provide a boolean value to determine whether or not to validate
# SSL connections to the registry server. By default, this option
# is set to ``False`` and the SSL connections are validated.
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying --insecure on the command line
# using glanceclient for the API.
#
# Possible values:
# * True
# * False
#
# Related options:
# * registry_client_protocol
# * registry_client_ca_file
#
# (boolean value)
#registry_client_insecure = false
#
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never timeout.
#
# Possible values:
# * Zero
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#registry_client_timeout = 600
#
# Send headers received from identity when making requests to
# registry.
#
# Typically, Glance registry can be deployed in multiple flavors,
# which may or may not include authentication. For example,
# ``trusted-auth`` is a flavor that does not require the registry
# service to authenticate the requests it receives. However, the
# registry service may still need a user context to be populated to
# serve the requests. This can be achieved by the caller
# (the Glance API usually) passing through the headers it received
# from authenticating with identity for the same request. The typical
# headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,
# ``X-Identity-Status`` and ``X-Service-Catalog``.
#
# Provide a boolean value to determine whether to send the identity
# headers to provide tenant and user information along with the
# requests to the registry service. By default, this option is set to
# ``False``, which means that user and tenant information is not
# readily available. It must be obtained by authenticating. Hence, if
# this is set to ``False``, ``flavor`` must be set to a value that
# includes either authentication or an authenticated user context.
#
# Possible values:
# * True
# * False
#
# Related options:
# * flavor
#
# (boolean value)
#send_identity_headers = false
#
# The amount of time, in seconds, to delay image scrubbing.
#
# When delayed delete is turned on, an image is put into ``pending_delete``
# state upon deletion until the scrubber deletes its image data. Typically, soon
# after the image is put into ``pending_delete`` state, it is available for
# scrubbing. However, scrubbing can be delayed until a later point using this
# configuration option. This option denotes the time period an image spends in
# ``pending_delete`` state before it is available for scrubbing.
#
# It is important to realize that this has storage implications. The larger the
# ``scrub_time``, the longer the time to reclaim backend storage from deleted
# images.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * ``delayed_delete``
#
# (integer value)
# Minimum value: 0
#scrub_time = 0
#
# The size of thread pool to be used for scrubbing images.
#
# When there are a large number of images to scrub, it is beneficial to scrub
# images in parallel so that the scrub queue stays under control and the backend
# storage is reclaimed in a timely fashion. This configuration option denotes
# the maximum number of images to be scrubbed in parallel. The default value is
# one, which signifies serial scrubbing. Any value above one indicates parallel
# scrubbing.
#
# Possible values:
# * Any non-zero positive integer
#
# Related options:
# * ``delayed_delete``
#
# (integer value)
# Minimum value: 1
#scrub_pool_size = 1
#
# Turn on/off delayed delete.
#
# Typically when an image is deleted, the ``glance-api`` service puts the image
# into ``deleted`` state and deletes its data at the same time. Delayed delete
# is a feature in Glance that delays the actual deletion of image data until a
# later point in time (as determined by the configuration option
# ``scrub_time``).
# When delayed delete is turned on, the ``glance-api`` service puts the image
# into ``pending_delete`` state upon deletion and leaves the image data in the
# storage backend for the image scrubber to delete at a later time. The image
# scrubber will move the image into ``deleted`` state upon successful deletion
# of image data.
#
# NOTE: When delayed delete is turned on, image scrubber MUST be running as a
# periodic task to prevent the backend storage from filling up with undesired
# usage.
#
# Possible values:
# * True
# * False
#
# Related options:
# * ``scrub_time``
# * ``wakeup_time``
# * ``scrub_pool_size``
#
# (boolean value)
#delayed_delete = false
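#
# For example, to delay scrubbing of deleted image data by one hour and
# scrub up to four images in parallel (illustrative values):
#
# delayed_delete = true
# scrub_time = 3600
# scrub_pool_size = 4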
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s. This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and Linux
# platform is used. This option is ignored if log_config_append is set. (boolean
# value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
#
# From oslo.messaging
#
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2
# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in zmq socket. True means the queue
# is not kept when the server side disconnects. False means the queue and
# messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit
# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack
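# Example (illustrative host and credentials, not defaults): a RabbitMQ
# transport URL carries the user, password, host, port, and virtual host in a
# single string, making the deprecated rpc_backend option unnecessary:
# transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/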
[cors]
#
# From oslo.middleware.cors
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Image-Meta-Checksum,X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request. (list
# value)
#allow_headers = Content-MD5,X-Image-Meta-Checksum,X-Storage-Token,Accept-Encoding,X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
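# Example (hypothetical origin): allow a single dashboard origin to make
# credentialed cross-origin requests, leaving the remaining options at their
# defaults:
# allowed_origin = https://horizon.example.com
# allow_credentials = true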
[cors.subdomain]
#
# From oslo.middleware.cors
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Image-Meta-Checksum,X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request. (list
# value)
#allow_headers = Content-MD5,X-Image-Meta-Checksum,X-Storage-Token,Accept-Encoding,X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
[database]
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
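# Example (hypothetical credentials and hosts): a primary MySQL connection
# with reads offloaded to a replica via slave_connection:
# connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
# slave_connection = mysql+pymysql://glance:GLANCE_DBPASS@replica/glance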
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
#
# From oslo.db.concurrency
#
# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false
[glance_store]
#
# From glance.store
#
#
# List of enabled Glance stores.
#
# Register the storage backends to use for storing disk images
# as a comma separated list. The default stores enabled for
# storing disk images with Glance are ``file`` and ``http``.
#
# Possible values:
# * A comma separated list that could include:
# * file
# * http
# * swift
# * rbd
# * sheepdog
# * cinder
# * vmware
#
# Related Options:
# * default_store
#
# (list value)
#stores = file,http
#
# The default scheme to use for storing images.
#
# Provide a string value representing the default scheme to use for
# storing images. If not set, Glance uses ``file`` as the default
# scheme to store images with the ``file`` store.
#
# NOTE: The value given for this configuration option must be a valid
# scheme for a store registered with the ``stores`` configuration
# option.
#
# Possible values:
# * file
# * filesystem
# * http
# * https
# * swift
# * swift+http
# * swift+https
# * swift+config
# * rbd
# * sheepdog
# * cinder
# * vsphere
#
# Related Options:
# * stores
#
# (string value)
# Allowed values: file, filesystem, http, https, swift, swift+http, swift+https, swift+config, rbd, sheepdog, cinder, vsphere
#default_store = file
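# Example (illustrative, not defaults): register the RBD store alongside the
# default stores and make it the default scheme for newly created images:
# stores = file,http,rbd
# default_store = rbd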
#
# Minimum interval in seconds to execute updating dynamic storage
# capabilities based on current backend status.
#
# Provide an integer value representing time in seconds to set the
# minimum interval before an update of dynamic storage capabilities
# for a storage backend can be attempted. Setting
# ``store_capabilities_update_min_interval`` does not mean updates
# occur periodically based on the set interval. Rather, the update
# is performed at the elapse of this interval set, if an operation
# of the store is triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo
# /store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
# * Zero
# * Positive integer
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0
#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to look up the cinder endpoint from the
# service catalog in the current context. ``cinder_os_region_name``, if set,
# is taken into consideration to fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
# * A string of the following form:
# ``<service_type>:<service_name>:<endpoint_type>``
# At least ``service_type`` and ``endpoint_type`` should be specified.
# ``service_name`` can be omitted.
#
# Related options:
# * cinder_os_region_name
# * cinder_endpoint_template
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
#
# (string value)
#cinder_catalog_info = volumev2::publicURL
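# Example (hypothetical service name): match the catalog entry on all three
# fields of the <service_type>:<service_name>:<endpoint_type> triple:
# cinder_catalog_info = volumev2:cinderv2:publicURL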
#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate cinder endpoint,
# instead of looking up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will be ignored.
#
# Possible values:
# * URL template string for cinder endpoint, where ``%%(tenant)s`` is
# replaced with the current tenant (project) name.
# For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
# * cinder_catalog_info
#
# (string value)
#cinder_endpoint_template = <None>
#
# Region name to lookup cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for determining the
# endpoint. If set, the lookup for cinder endpoint by this node is filtered to
# the specified region. It is useful when multiple regions are listed in the
# catalog. If this is not set, the endpoint is looked up from every region.
#
# Possible values:
# * A string that is a valid region name.
#
# Related options:
# * cinder_catalog_info
#
# (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name = <None>
#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via HTTPS endpoint. If the endpoint is HTTP, this value is
# ignored.
# ``cinder_api_insecure`` must be set to ``False`` for this verification to
# take effect.
#
# Possible values:
# * Path to a ca certificates file
#
# Related options:
# * cinder_api_insecure
#
# (string value)
#cinder_ca_certificates_file = <None>
#
# Number of cinderclient retries on failed http calls.
#
# When a call fails with an error, cinderclient will retry the call up to the
# specified number of times after sleeping a few seconds.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_http_retries = 3
#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the glance node to
# read/write the volume data, the volume's state is changed. For example, the
# newly created volume status changes from ``creating`` to ``available`` after
# the creation process is completed. This specifies the maximum time to wait for
# the status change. If a timeout occurs while waiting, or the status is changed
# to an unexpected value (e.g. ``error``), the image creation fails.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300
#
# Allow to perform insecure SSL requests to cinder.
#
# If this option is set to True, the HTTPS endpoint connection is not
# verified. If it is set to False, the connection is verified, using the CA
# certificates file specified by the ``cinder_ca_certificates_file`` option
# when that is set.
#
# Possible values:
# * True
# * False
#
# Related options:
# * cinder_ca_certificates_file
#
# (boolean value)
#cinder_api_insecure = false
#
# The address where the cinder authentication service is listening.
#
# When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If either of these options are not set, the cinder endpoint is looked up
# from the service catalog, and current context's user and project are used.
#
# Possible values:
# * A valid authentication service address, for example:
# ``http://openstack.example.org/identity/v2.0``
#
# Related options:
# * cinder_store_user_name
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_auth_address = <None>
#
# User name to authenticate against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
# * A valid user name
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_user_name = <None>
#
# Password for the user authenticating against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
# * A valid password for the user specified by ``cinder_store_user_name``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
#
# (string value)
#cinder_store_password = <None>
#
# Project name where the image volume is stored in cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the project of the current context is used.
#
# Possible values:
# * A valid project name
#
# Related options:
# * ``cinder_store_auth_address``
# * ``cinder_store_user_name``
# * ``cinder_store_password``
#
# (string value)
#cinder_store_project_name = <None>
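# Example (hypothetical credentials): store image volumes in a dedicated
# service project so they are hidden from end users; all four options must be
# set together for these values to be used:
# cinder_store_auth_address = http://openstack.example.org/identity/v2.0
# cinder_store_user_name = glance
# cinder_store_password = GLANCE_PASS
# cinder_store_project_name = service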
#
# Path to the rootwrap configuration file to use for running commands as root.
#
# The cinder store requires root privileges to operate the image volumes (for
# connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
# The configuration file should allow the required commands by cinder store and
# os-brick library.
#
# Possible values:
# * Path to the rootwrap config file
#
# Related options:
# * None
#
# (string value)
#rootwrap_config = /etc/glance/rootwrap.conf
#
# Directory to which the filesystem backend store writes images.
#
# Upon start up, Glance creates the directory if it doesn't already
# exist and verifies write access for the user under which
# ``glance-api`` runs. If the write access isn't available, a
# ``BadStoreConfiguration`` exception is raised and the filesystem
# store may not be available for adding new images.
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * A valid path to a directory
#
# Related options:
# * ``filesystem_store_datadirs``
# * ``filesystem_store_file_perm``
#
# (string value)
#filesystem_store_datadir = /var/lib/glance/images
#
# List of directories and their priorities to which the filesystem
# backend store writes images.
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. Priority is an integer that is concatenated to the
# directory path with a colon where a higher value indicates higher
# priority. When two directories have the same priority, the directory
# with most free space is used. When no priority is specified, it
# defaults to zero.
#
# More information on configuring filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * List of strings of the following form:
# * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
# * ``filesystem_store_datadir``
# * ``filesystem_store_file_perm``
#
# (multi valued)
#filesystem_store_datadirs =
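# Example (hypothetical paths): spread images across two directories, with
# the SSD-backed path preferred because of its higher priority; the option is
# multi valued, so repeat the key once per directory:
# filesystem_store_datadirs = /mnt/ssd/glance/images:200
# filesystem_store_datadirs = /mnt/hdd/glance/images:100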
#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
# * A valid path to the store metadata file
#
# Related options:
# * None
#
# (string value)
#filesystem_store_metadata_file = <None>
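# Example (hypothetical path and values): the referenced file must hold a
# JSON object with string values for the ``id`` and ``mountpoint`` keys,
# for instance {"id": "nfs-share-1", "mountpoint": "/var/lib/glance/images"}:
# filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json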
#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that need
# access can be made members of the group that owns the created files.
# Assigning a value less than or equal to zero for this configuration
# option signifies that no changes be made to the default permissions.
# This value will be decoded as an octal digit.
#
# For more information, please refer the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
# * A valid file access permission
# * Zero
# * Any negative integer
#
# Related options:
# * None
#
# (integer value)
#filesystem_store_file_perm = 0
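# Example (illustrative): 640 is decoded as octal 0640 (rw-r-----), letting
# members of the group that owns the image files, e.g. a nova service user
# added to that group, read them directly:
# filesystem_store_file_perm = 640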
#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * https_insecure
#
# (string value)
#https_ca_certificates_file = <None>
#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set; in that
# case, the remote server certificate is verified using the file specified
# by that option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * https_ca_certificates_file
#
# (boolean value)
#https_insecure = true
#
# The http/https proxy information to be used to connect to the remote
# server.
#
# This configuration option specifies the http/https proxy information
# that should be used to connect to the remote server. The proxy
# information should be a key value pair of the scheme and proxy, for
# example, http:10.0.0.1:3128. You can also specify proxies for multiple
# schemes by separating the key value pairs with a comma, for example,
# http:10.0.0.1:3128, https:10.0.0.1:1080.
#
# Possible values:
# * A comma separated list of scheme:proxy pairs as described above
#
# Related options:
# * None
#
# (dict value)
#http_proxy_information =
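# Example (hypothetical proxies): one scheme:proxy pair per scheme, separated
# by commas:
# http_proxy_information = http:10.0.0.1:3128,https:10.0.0.1:1080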
#
# Size, in megabytes, to chunk RADOS images into.
#
# Provide an integer value representing the size in megabytes to chunk
# Glance images into. The default chunk size is 8 megabytes. For optimal
# performance, the value should be a power of two.
#
# When Ceph's RBD object storage system is used as the storage backend
# for storing Glance images, the images are chunked into objects of the
# size set using this option. These chunked objects are then stored
# across the distributed block data store to use for Glance.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#rbd_store_chunk_size = 8
#
# RADOS pool in which images are stored.
#
# When RBD is used as the storage backend for storing Glance images, the
# images are stored by means of logical grouping of the objects (chunks
# of images) into a ``pool``. Each pool is defined with the number of
# placement groups it can contain. The default pool that is used is
# 'images'.
#
# More information on the RBD storage backend can be found here:
# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
#
# Possible Values:
# * A valid pool name
#
# Related options:
# * None
#
# (string value)
#rbd_store_pool = images
#
# RADOS user to authenticate as.
#
# This configuration option takes in the RADOS user to authenticate as.
# This is only needed when RADOS authentication is enabled and is
# applicable only if the user is using Cephx authentication. If the
# value for this option is not set by the user or is set to None, a
# default value will be chosen based on the client section in
# ``rbd_store_ceph_conf``.
#
# Possible Values:
# * A valid RADOS user
#
# Related options:
# * rbd_store_ceph_conf
#
# (string value)
#rbd_store_user = <None>
#
# Ceph configuration file path.
#
# This configuration option takes in the path to the Ceph configuration
# file to be used. If the value for this option is not set by the user
# or is set to None, librados will locate the default configuration file
# which is located at /etc/ceph/ceph.conf. If using Cephx
# authentication, this file should include a reference to the right
# keyring in a ``client.<USER>`` section.
#
# Possible Values:
# * A valid path to a configuration file
#
# Related options:
# * rbd_store_user
#
# (string value)
#rbd_store_ceph_conf = /etc/ceph/ceph.conf
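# Example (illustrative values): a typical Cephx setup where a 'glance' RADOS
# user has its keyring referenced from a client.glance section of the cluster
# configuration file:
# rbd_store_user = glance
# rbd_store_ceph_conf = /etc/ceph/ceph.conf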
#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option takes in the timeout value in seconds used
# when connecting to the Ceph cluster, i.e. the time glance-api waits
# before closing the connection. This prevents glance-api from hanging
# while connecting to RBD. If the value for this option
# is set to less than or equal to 0, no timeout is set and the default
# librados value is used.
#
# Possible Values:
# * Any integer value
#
# Related options:
# * None
#
# (integer value)
#rados_connect_timeout = 0
#
# Chunk size for images to be stored in Sheepdog data store.
#
# Provide an integer value representing the size in mebibyte
# (1048576 bytes) to chunk Glance images into. The default
# chunk size is 64 mebibytes.
#
# When using Sheepdog distributed storage system, the images are
# chunked into objects of this size and then stored across the
# distributed data store to use for Glance.
#
# Chunk sizes, if a power of two, help avoid fragmentation and
# enable improved performance.
#
# Possible values:
# * Positive integer value representing size in mebibytes.
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 1
#sheepdog_store_chunk_size = 64
#
# Port number on which the sheep daemon will listen.
#
# Provide an integer value representing a valid port number on
# which you want the Sheepdog daemon to listen on. The default
# port is 7000.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages it receives on
# the port number set using ``sheepdog_store_port`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid port number (0 to 65535)
#
# Related Options:
# * sheepdog_store_address
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000
#
# Address to bind the Sheepdog daemon to.
#
# Provide a string value representing the address to bind the
# Sheepdog daemon to. The default address set for the 'sheep'
# is 127.0.0.1.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages directed to the
# address set using ``sheepdog_store_address`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid IPv4 address
# * A valid IPv6 address
# * A valid hostname
#
# Related Options:
# * sheepdog_store_port
#
# (string value)
#sheepdog_store_address = 127.0.0.1
#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_cacert
#
# (boolean value)
#swift_store_auth_insecure = false
#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * swift_store_auth_insecure
#
# (string value)
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt
#
# The region of Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
#
# When Glance uses Swift as the storage backend to store images
# for a specific tenant that has multiple endpoints, setting a Swift
# region with ``swift_store_region`` allows Glance to connect
# to Swift in the specified region as opposed to a single region
# connectivity.
#
# This option can be configured for both single-tenant and
# multi-tenant storage.
#
# NOTE: Setting the region with ``swift_store_region`` is
# tenant-specific and is necessary ``only if`` the tenant has
# multiple endpoints across different regions.
#
# Possible values:
# * A string value representing a valid Swift region.
#
# Related Options:
# * None
#
# (string value)
#swift_store_region = RegionTwo
#
# The URL endpoint to use for Swift backend storage.
#
# Provide a string value representing the URL endpoint to use for
# storing Glance images in Swift store. By default, an endpoint
# is not set and the storage URL returned by ``auth`` is used.
# Setting an endpoint with ``swift_store_endpoint`` overrides the
# storage URL and is used for Glance image storage.
#
# NOTE: The URL should include the path up to, but excluding the
# container. The location of an object is obtained by appending
# the container and object to the configured URL.
#
# Possible values:
# * String value representing a valid URL path up to a Swift container
#
# Related Options:
# * None
#
# (string value)
#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name
#
# Endpoint Type of Swift service.
#
# This string value indicates the endpoint type to use to fetch the
# Swift endpoint. The endpoint type determines the actions the user will
# be allowed to perform, for instance, reading and writing to the Store.
# This setting is only used if swift_store_auth_version is greater than
# 1.
#
# Possible values:
# * publicURL
# * adminURL
# * internalURL
#
# Related options:
# * swift_store_endpoint
#
# (string value)
# Allowed values: publicURL, adminURL, internalURL
#swift_store_endpoint_type = publicURL
#
# Type of Swift service to use.
#
# Provide a string value representing the service type to use for
# storing images while using Swift backend storage. The default
# service type is set to ``object-store``.
#
# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
# this configuration option needs to be ``object-store``. If using
# a higher version of Keystone or a different auth scheme, this
# option may be modified.
#
# Possible values:
# * A string representing a valid service type for Swift storage.
#
# Related Options:
# * None
#
# (string value)
#swift_store_service_type = object-store
#
# Name of single container to store images/name prefix for multiple containers
#
# When a single container is being used to store images, this configuration
# option indicates the container within the Glance account to be used for
# storing all images. When multiple containers are used to store images, this
# will be the name prefix for all containers. Usage of single/multiple
# containers can be controlled using the configuration option
# ``swift_store_multiple_containers_seed``.
#
# When using multiple containers, the containers will be named after the value
# set for this configuration option with the first N chars of the image UUID
# as the suffix delimited by an underscore (where N is specified by
# ``swift_store_multiple_containers_seed``).
#
# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
# the container ``glance_fda``. All dashes in the UUID are included when
# creating the container name but do not count toward the character limit, so
# when N=10 the container name would be ``glance_fdae39a1-ba``.
#
# Possible values:
# * If using single container, this configuration option can be any string
# that is a valid swift container name in Glance's Swift account
# * If using multiple containers, this configuration option can be any
# string as long as it satisfies the container naming rules enforced by
# Swift. The value of ``swift_store_multiple_containers_seed`` should be
# taken into account as well.
#
# Related options:
# * ``swift_store_multiple_containers_seed``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (string value)
#swift_store_container = glance
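#
# For example, a hypothetical multiple-container deployment might combine
# this option with the seed as follows (illustrative values only):
#
# swift_store_container = glance
# swift_store_multiple_containers_seed = 3
#
# With these settings, per the naming convention described above, an image
# whose UUID begins with ``fda`` lands in the container ``glance_fda``.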
#
# The size threshold, in MB, after which Glance will start segmenting image
# data.
#
# Swift has an upper limit on the size of a single uploaded object. By default,
# this is 5GB. To upload objects bigger than this limit, objects are segmented
# into multiple smaller objects that are tied together with a manifest file.
# For more detail, refer to
# http://docs.openstack.org/developer/swift/overview_large_objects.html
#
# This configuration option specifies the size threshold over which the Swift
# driver will start segmenting image data into multiple smaller files.
# Currently, the Swift driver only supports creating Dynamic Large Objects.
#
# NOTE: This should be set by taking into account the large object limit
# enforced by the Swift cluster in use.
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
# enforced by the Swift cluster in use.
#
# Related options:
# * ``swift_store_large_object_chunk_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_size = 5120
#
# The maximum size, in MB, of the segments when image data is segmented.
#
# When image data is segmented to upload images that are larger than the limit
# enforced by the Swift cluster, image data is broken into segments that are no
# bigger than the size specified by this configuration option.
# Refer to ``swift_store_large_object_size`` for more detail.
#
# For example: if ``swift_store_large_object_size`` is 5GB and
# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
# segmented into 7 segments where the first six segments will be 1GB in size and
# the seventh segment will be 0.2GB.
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
# enforced by the Swift cluster in use.
#
# Related options:
# * ``swift_store_large_object_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_chunk_size = 200
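#
# As an illustrative sketch (example values, not recommendations): with the
# settings below, a 6.2 GB image exceeds the 5 GB threshold and is uploaded
# as seven segments of at most 1 GB each, tied together by a manifest.
#
# swift_store_large_object_size = 5120
# swift_store_large_object_chunk_size = 1024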
#
# Create container, if it doesn't already exist, when uploading image.
#
# At the time of uploading an image, if the corresponding container doesn't
# exist, it will be created provided this configuration option is set to True.
# By default, it won't be created. This behavior applies to both single
# and multiple container modes.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_create_container_on_put = false
#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode which causes Glance images to be stored
# in tenant-specific Swift accounts. If this is disabled, Glance stores all
# images in its own account. More details about the multi-tenant store can
# be found at
# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_multi_tenant = false
#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more
# containers. When set to 0, all images will be stored in one single container.
# When set to an integer value between 1 and 32, multiple containers will be
# used to store images. This configuration option will determine how many
# containers are created. The total number of containers that will be used is
# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
# will be used to store images.
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
# multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
# * A non-negative integer less than or equal to 32
#
# Related options:
# * ``swift_store_container``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0
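#
# For example, assuming a single-tenant store (illustrative values only),
# the settings below spread images across 16^2 = 256 containers. Since the
# first two UUID characters are hexadecimal, the containers would be named
# ``glance_00`` through ``glance_ff``.
#
# swift_store_multi_tenant = false
# swift_store_container = glance
# swift_store_multiple_containers_seed = 2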
#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
# * A comma separated list of strings representing UUIDs of Keystone
# projects/tenants
#
# Related options:
# * None
#
# (list value)
#swift_store_admin_tenants =
#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
# * True
# * False
#
# Related Options:
# * None
#
# (boolean value)
#swift_store_ssl_compression = true
#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download). When set to a positive
# integer value, ``swift_store_retry_get_count`` ensures that the
# download is attempted this many more times upon a download failure
# before an error is returned.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_retry_get_count = 0
#
# Time in seconds defining the size of the window in which a new
# token may be requested before the current token is due to expire.
#
# Typically, the Swift storage driver fetches a new token upon the
# expiration of the current token to ensure continued access to
# Swift. However, some Swift transactions (like uploading image
# segments) may not recover well if the token expires on the fly.
#
# Hence, by fetching a new token before the current token expires, we
# make sure that the token does not expire, or come close to expiring,
# while a transaction is in progress. By default, the Swift storage
# driver requests a new token 60 seconds or less before the current
# token expires.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_expire_soon_interval = 60
#
# Use trusts for multi-tenant Swift store.
#
# This option instructs the Swift store to create a trust for each
# add/get request when the multi-tenant store is in use. Using trusts
# allows the Swift store to avoid problems that can be caused by an
# authentication token expiring during the upload or download of data.
#
# By default, ``swift_store_use_trusts`` is set to ``True`` (use of
# trusts is enabled). If set to ``False``, a user token is used for
# the Swift connection instead, eliminating the overhead of trust
# creation.
#
# NOTE: This option is considered only when
# ``swift_store_multi_tenant`` is set to ``True``
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_multi_tenant
#
# (boolean value)
#swift_store_use_trusts = true
#
# Reference to default Swift account/backing store parameters.
#
# Provide a string value representing a reference to the default set
# of parameters required for using a Swift account/backing store for
# image storage. The default reference value for this configuration
# option is 'ref1'. This configuration option dereferences the
# parameters and facilitates image storage in the Swift storage backend
# every time a new image is added.
#
# Possible values:
# * A valid string value
#
# Related options:
# * None
#
# (string value)
#default_swift_reference = ref1
# DEPRECATED: Version of the authentication service to use. Valid versions are 2
# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_version' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_version = 2
# DEPRECATED: The address where the Swift authentication service is listening.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_address' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_address = <None>
# DEPRECATED: The user to authenticate against the Swift authentication service.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'user' in the Swift back-end configuration file is set instead.
#swift_store_user = <None>
# DEPRECATED: Auth key for the user authenticating against the Swift
# authentication service. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'key' in the Swift back-end configuration file is used
# to set the authentication key instead.
#swift_store_key = <None>
#
# Absolute path to the file containing the swift account(s)
# configurations.
#
# Include a string value representing the path to a configuration
# file that has references for each of the configured Swift
# account(s)/backing stores. By default, no file path is specified
# and customized Swift referencing is disabled. Configuring this
# option is highly recommended while using Swift storage backend for
# image storage as it avoids storage of credentials in the database.
#
# Possible values:
# * String value representing an absolute path on the glance-api
# node
#
# Related options:
# * None
#
# (string value)
#swift_store_config_file = <None>
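#
# A minimal sketch of such a referenced file, assuming the default
# reference name 'ref1' and placeholder credentials (adapt the keys and
# endpoint to your deployment):
#
# [ref1]
# user = services:glance
# key = GLANCE_SWIFT_PASS
# auth_version = 3
# auth_address = http://controller:5000/v3
#
# The reference to use is then selected via ``default_swift_reference``.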
#
# Address of the ESX/ESXi or vCenter Server target system.
#
# This configuration option sets the address of the ESX/ESXi or vCenter
# Server target system. This option is required when using the VMware
# storage backend. The address can contain an IP address (127.0.0.1) or
# a DNS name (www.my-domain.com).
#
# Possible Values:
# * A valid IPv4 or IPv6 address
# * A valid DNS name
#
# Related options:
# * vmware_server_username
# * vmware_server_password
#
# (string value)
#vmware_server_host = 127.0.0.1
#
# Server username.
#
# This configuration option takes the username for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is the username for a user with appropriate
# privileges
#
# Related options:
# * vmware_server_host
# * vmware_server_password
#
# (string value)
#vmware_server_username = root
#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is a password corresponding to the username
# specified using the "vmware_server_username" option
#
# Related options:
# * vmware_server_host
# * vmware_server_username
#
# (string value)
#vmware_server_password = vmware
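#
# Taken together, a hypothetical vCenter connection could be configured
# as follows (placeholder host and credentials):
#
# vmware_server_host = vcenter.example.com
# vmware_server_username = administrator@vsphere.local
# vmware_server_password = VMWARE_PASS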
#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection-related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10
#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option takes in the sleep time in seconds for polling an
# ongoing async task as part of the VMware ESX/VC server API call.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5
#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
# * Any string that is a valid path to a directory
#
# Related options:
# * None
#
# (string value)
#vmware_store_image_dir = /openstack_glance
#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will then be verified using
# the file specified using the "vmware_ca_file" option.
#
# Possible Values:
# * True
# * False
#
# Related options:
# * vmware_ca_file
#
# (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false
#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
#
# Possible Values:
# * Any string that is a valid absolute path to a CA file
#
# Related options:
# * vmware_insecure
#
# (string value)
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt
#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMWare store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the datastore will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
# * Any string of the format:
# <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
# * None
#
# (multi valued)
#vmware_datastores =
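#
# For example (illustrative datacenter and datastore names), the entries
# below register two datastores, with datastore1 preferred because of its
# higher weight; the option is repeated once per datastore:
#
# vmware_datastores = dc1:datastore1:100
# vmware_datastores = dc1:datastore2:50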
[image_format]
#
# From glance.api
#
# Supported values for the 'container_format' image attribute (list value)
# Deprecated group/name - [DEFAULT]/container_formats
#container_formats = ami,ari,aki,bare,ovf,ova,docker
# Supported values for the 'disk_format' image attribute (list value)
# Deprecated group/name - [DEFAULT]/disk_formats
#disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
# clients are redirected to this endpoint to authenticate. Although this
# endpoint should ideally be unversioned, client support in the wild varies.
# If you're using a versioned v2 endpoint here, then this should *not* be the
# same endpoint the service user utilizes for validating tokens, because normal
# end users may not be able to reach that endpoint. (string value)
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# Number of times to attempt reconnecting when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3
# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if identity server requires client certificate (string value)
#certfile = <None>
# Required if identity server requires client certificate (string value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set to
# -1 to disable caching completely. (integer value)
#token_cache_time = 300
# Determines the frequency at which the list of revoked tokens is retrieved from
# the Identity service (in seconds). A high number of revocation events combined
# with a low cache duration may significantly reduce performance. Only valid for
# PKI tokens. (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
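#
# A sketch of a token-cache configuration with encrypted token data (the
# server address and secret key below are placeholders):
#
# memcached_servers = controller:11211
# memcache_security_strategy = ENCRYPT
# memcache_secret_key = MEMCACHE_SECRET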
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached server.
# (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it if
# not. "strict" like "permissive" but if the bind type is unknown the token will
# be rejected. "required" any form of token binding is needed to be allowed.
# Finally the name of a binding method that must be present in tokens. (string
# value)
#enforce_token_bind = permissive
# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false
# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
[matchmaker_redis]
#
# From oslo.messaging
#
# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1
# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379
# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =
# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000
[oslo_concurrency]
#
# From oslo.concurrency
#
# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false
# Directory to use for lock files. For security, the specified directory should
# only be writable by the user running the processes that need locking. Defaults
# to environment variable OSLO_LOCK_PATH. If external locks are used, a lock
# path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Name for the AMQP container. Must be globally unique. Defaults to a generated
# UUID (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1
# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2
# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30
# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10
# The deadline for an rpc reply message delivery. Only used when caller does not
# provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30
# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30
# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30
# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy' - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic' - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic
# address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# address prefix when sending to any server in group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc
# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify
# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast
# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast
# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-robin
# fashion across consumers. (string value)
#anycast_address = anycast
# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>
# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200
# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100
# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
[oslo_messaging_notifications]
#
# From oslo.messaging
#
# The driver(s) to handle sending notifications. Possible values are messaging,
# messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
# be used. This option may not be available in future versions. (string value)
#kombu_compression = <None>
# How long to wait for a missing client before abandoning the attempt to send it its replies.
# This value should not be longer than rpc_response_timeout. (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than one
# RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost
# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672
# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest
# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
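The deprecated single-node options above all map onto one ``transport_url`` setting in the ``[DEFAULT]`` section. A sketch with placeholder credentials and hostnames:

```ini
[DEFAULT]
# Replaces rabbit_userid, rabbit_password, rabbit_host, rabbit_port and
# rabbit_virtual_host; "controller" and the credentials are placeholders.
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
# For a cluster, list several host:port pairs separated by commas:
# transport_url = rabbit://openstack:RABBIT_PASS@node1:5672,openstack:RABBIT_PASS@node2:5672/
```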
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30
# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0
# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue. If
# you just want to make sure that all queues (except those with auto-generated
# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically deleted.
# The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800
# Specifies the number of messages to prefetch. Setting to zero allows unlimited
# messages. (integer value)
#rabbit_qos_prefetch_count = 0
# Number of seconds after which the RabbitMQ broker is considered down if the
# heartbeat keep-alive fails (0 disables the heartbeat). EXPERIMENTAL (integer
# value)
#heartbeat_timeout_threshold = 60
# How many times during the heartbeat_timeout_threshold to check the heartbeat.
# (integer value)
#heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
# Maximum number of channels to allow (integer value)
#channel_max = <None>
# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>
# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3
# Enable SSL (boolean value)
#ssl = <None>
# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>
# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25
# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value)
#tcp_user_timeout = 0.25
# Set delay for reconnection to a host that had a connection error (floating
# point value)
#host_connection_reconnect_delay = 0.25
# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single
# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30
# Maximum number of connections to create above `pool_max_size`. (integer value)
#pool_max_overflow = 0
# Default number of seconds to wait for a connection to become available
# (integer value)
#pool_timeout = 30
# Lifetime of a connection (since creation) in seconds or None for no recycling.
# Expired connections are closed on acquire. (integer value)
#pool_recycle = 600
# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60
# Persist notification messages. (boolean value)
#notification_persistence = false
# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification
# Maximum number of unacknowledged messages that RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25
# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60
# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc
# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending reply.
# -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending reply.
# (floating point value)
#rpc_reply_retry_delay = 0.25
# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual number of retry attempts is
# not 0, the RPC request could be processed more than once (integer value)
#default_rpc_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25
[oslo_messaging_zmq]
#
# From oslo.messaging
#
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about existing target (
# < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue and
# messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
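A minimal ZeroMQ driver configuration using the Redis matchmaker might look like the following; the hostname is a placeholder and must match the service's ``host`` option as noted above:

```ini
[oslo_messaging_zmq]
# Advertise this node under a resolvable name and use the Redis
# matchmaker; "node-1" is an example hostname.
rpc_zmq_host = node-1
rpc_zmq_matchmaker = redis
rpc_zmq_bind_address = *
```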
[oslo_middleware]
#
# From oslo.middleware.http_proxy_to_wsgi
#
# Whether the application is behind a proxy or not. This determines if the
# middleware should parse the headers or not. (boolean value)
#enable_proxy_headers_parsing = false
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
[paste_deploy]
#
# From glance.api
#
#
# Deployment flavor to use in the server application pipeline.
#
# Provide a string value representing the appropriate deployment
# flavor used in the server application pipeline. This is typically
# the partial name of a pipeline in the paste configuration file with
# the service name removed.
#
# For example, if your paste section name in the paste configuration
# file is [pipeline:glance-api-keystone], set ``flavor`` to
# ``keystone``.
#
# Possible values:
# * String value representing a partial pipeline name.
#
# Related Options:
# * config_file
#
# (string value)
#flavor = keystone
#
# Name of the paste configuration file.
#
# Provide a string value representing the name of the paste
# configuration file to use for configuring pipelines for
# server application deployments.
#
# NOTES:
# * Provide the name or the path relative to the glance directory
# for the paste configuration file and not the absolute path.
# * The sample paste configuration file shipped with Glance need
# not be edited in most cases as it comes with ready-made
# pipelines for all common deployment flavors.
#
# If no value is specified for this option, the ``paste.ini`` file
# with the prefix of the corresponding Glance service's configuration
# file name will be searched for in the known configuration
# directories. (For example, if this option is missing from or has no
# value set in ``glance-api.conf``, the service will look for a file
# named ``glance-api-paste.ini``.) If the paste configuration file is
# not found, the service will not start.
#
# Possible values:
# * A string value representing the name of the paste configuration
# file.
#
# Related Options:
# * flavor
#
# (string value)
#config_file = glance-api-paste.ini
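For example, the two options work together like this: with the values below, the service loads the ``[pipeline:glance-api-keystone]`` section of ``glance-api-paste.ini``:

```ini
[paste_deploy]
# flavor = keystone selects the paste section named
# [pipeline:glance-api-keystone] in the file named by config_file.
config_file = glance-api-paste.ini
flavor = keystone
```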
[profiler]
#
# From glance.api
#
#
# Enables the profiling for all services on this node. Default value is False
# (fully disable the profiling feature).
#
# Possible values:
#
# * True: Enables the feature
# * False: Disables the feature. The profiling cannot be started via this
#   project's operations. If profiling is triggered by another project, this
#   project's part of the trace will be empty.
# (boolean value)
# Deprecated group/name - [profiler]/profiler_enabled
#enabled = false
#
# Enables SQL requests profiling in services. Default value is False (SQL
# requests won't be traced).
#
# Possible values:
#
# * True: Enables SQL requests profiling. Each SQL query will be part of the
# trace and can then be analyzed for how much time was spent on it.
# * False: Disables SQL requests profiling. The spent time is only shown on a
# higher level of operations. Single SQL queries cannot be analyzed this
# way.
# (boolean value)
#trace_sqlalchemy = false
#
# Secret key(s) to use for encrypting context data for performance profiling.
# This string value should have the following format: <key1>[,<key2>,...<keyn>],
# where each key is some random string. A user who triggers the profiling via
# the REST API has to set one of these keys in the headers of the REST API call
# to include profiling results of this node for this particular project.
#
# Both "enabled" flag and "hmac_keys" config options should be set to enable
# profiling. Also, to generate correct profiling information across all services
# at least one key needs to be consistent between OpenStack projects. This
# ensures it can be used from client side to generate the trace, containing
# information from all possible resources. (string value)
#hmac_keys = SECRET_KEY
#
# Connection string for a notifier backend. Default value is messaging:// which
# sets the notifier to oslo_messaging.
#
# Examples of possible values:
#
# * messaging://: use oslo_messaging driver for sending notifications.
# (string value)
#connection_string = messaging://
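A sketch of a profiler section with tracing fully enabled; ``SECRET_KEY`` is a placeholder that, as explained above, must be shared by every service that should appear in the same trace:

```ini
[profiler]
# Enable profiling and SQL tracing; the HMAC key must match the one
# the caller passes in the REST API headers.
enabled = true
trace_sqlalchemy = true
hmac_keys = SECRET_KEY
connection_string = messaging://
```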
[store_type_location_strategy]
#
# From glance.api
#
#
# Preference order of storage backends.
#
# Provide a comma separated list of store names in the order in
# which images should be retrieved from storage backends.
# These store names must be registered with the ``stores``
# configuration option.
#
# NOTE: The ``store_type_preference`` configuration option is applied
# only if ``store_type`` is chosen as a value for the
# ``location_strategy`` configuration option. An empty list will not
# change the location order.
#
# Possible values:
# * Empty list
# * Comma separated list of registered store names. Legal values are:
# * file
# * http
# * rbd
# * swift
# * sheepdog
# * cinder
# * vmware
#
# Related options:
# * location_strategy
# * stores
#
# (list value)
#store_type_preference =
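For instance, to prefer the RBD store over the file store, ``location_strategy`` must also be set, since ``store_type_preference`` only takes effect then; a sketch:

```ini
[DEFAULT]
# store_type_preference is applied only with this strategy.
location_strategy = store_type

[store_type_location_strategy]
# Try rbd first, then file, when retrieving image data.
store_type_preference = rbd,file
```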
[task]
#
# From glance.api
#
# Time in hours for which a task lives after either succeeding or failing
# (integer value)
# Deprecated group/name - [DEFAULT]/task_time_to_live
#task_time_to_live = 48
#
# Task executor to be used to run task scripts.
#
# Provide a string value representing the executor to use for task
# executions. By default, ``TaskFlow`` executor is used.
#
# ``TaskFlow`` helps make task executions easy, consistent, scalable
# and reliable. It also enables creation of lightweight task objects
# and/or functions that are combined together into flows in a
# declarative manner.
#
# Possible values:
# * taskflow
#
# Related Options:
# * None
#
# (string value)
#task_executor = taskflow
#
# Absolute path to the work directory to use for asynchronous
# task operations.
#
# The directory set here will be used to operate over images -
# normally before they are imported in the destination store.
#
# NOTE: When providing a value for ``work_dir``, please make sure
# that enough space is provided for concurrent tasks to run
# efficiently without running out of space.
#
# A rough estimation can be done by multiplying the number of
# ``max_workers`` with an average image size (e.g. 500MB). The image
# size estimation should be done based on the average size in your
# deployment. Note that depending on the tasks running you may need
# to multiply this number by some factor depending on what the task
# does. For example, you may want to double the available size if
# image conversion is enabled. All this being said, remember these
# are just estimations and you should do them based on the worst
# case scenario and be prepared to act in case they were wrong.
#
# Possible values:
# * String value representing the absolute path to the working
# directory
#
# Related Options:
# * None
#
# (string value)
#work_dir = /work_dir
[taskflow_executor]
#
# From glance.api
#
#
# Set the taskflow engine mode.
#
# Provide a string type value to set the mode in which the taskflow
# engine would schedule tasks to the workers on the hosts. Based on
# this mode, the engine executes tasks either in single or multiple
# threads. The possible values for this configuration option are:
# ``serial`` and ``parallel``. When set to ``serial``, the engine runs
# all the tasks in a single thread which results in serial execution
# of tasks. Setting this to ``parallel`` makes the engine run tasks in
# multiple threads. This results in parallel execution of tasks.
#
# Possible values:
# * serial
# * parallel
#
# Related options:
# * max_workers
#
# (string value)
# Allowed values: serial, parallel
#engine_mode = parallel
#
# Set the number of engine executable tasks.
#
# Provide an integer value to limit the number of workers that can be
# instantiated on the hosts. In other words, this number defines the
# number of parallel tasks that can be executed at the same time by
# the taskflow engine. This value can be greater than one when the
# engine mode is set to parallel.
#
# Possible values:
# * Integer value greater than or equal to 1
#
# Related options:
# * engine_mode
#
# (integer value)
# Minimum value: 1
# Deprecated group/name - [task]/eventlet_executor_pool_size
#max_workers = 10
#
# Set the desired image conversion format.
#
# Provide a valid image format to which you want images to be
# converted before they are stored for consumption by Glance.
# Appropriate image format conversions are desirable for specific
# storage backends in order to facilitate efficient handling of
# bandwidth and usage of the storage infrastructure.
#
# By default, ``conversion_format`` is not set and must be set
# explicitly in the configuration file.
#
# The allowed values for this option are ``raw``, ``qcow2`` and
# ``vmdk``. The ``raw`` format is the unstructured disk format and
# should be chosen when RBD or Ceph storage backends are used for
# image storage. ``qcow2`` is a format supported by the QEMU emulator;
# it expands dynamically and supports Copy on Write. ``vmdk`` is
# another common disk format supported by many virtual machine
# monitors such as VMware Workstation.
#
# Possible values:
# * qcow2
# * raw
# * vmdk
#
# Related options:
# * disk_formats
#
# (string value)
# Allowed values: qcow2, raw, vmdk
#conversion_format = raw
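Putting the three options together, a parallel executor that converts incoming images to ``raw`` (e.g. for an RBD/Ceph backend, per the guidance above) could be sketched as:

```ini
[taskflow_executor]
# Run up to 10 task threads in parallel and convert incoming images
# to the raw format, as suggested for RBD-backed image storage.
engine_mode = parallel
max_workers = 10
conversion_format = raw
```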
Configuration for the Image service’s API middleware pipeline is found in the
glance-api-paste.ini
file.
You should not need to modify this file.
# Use this pipeline for no auth or image caching - DEFAULT
[pipeline:glance-api]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context rootapp
# Use this pipeline for image caching and no auth
[pipeline:glance-api-caching]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context cache rootapp
# Use this pipeline for caching w/ management interface but no auth
[pipeline:glance-api-cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler unauthenticated-context cache cachemanage rootapp
# Use this pipeline for keystone auth
[pipeline:glance-api-keystone]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context rootapp
# Use this pipeline for keystone auth with image caching
[pipeline:glance-api-keystone+caching]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context cache rootapp
# Use this pipeline for keystone auth with caching and cache management
[pipeline:glance-api-keystone+cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context cache cachemanage rootapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-api-trusted-auth]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler context rootapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user and uses cache management
[pipeline:glance-api-trusted-auth+cachemanagement]
pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler context cache cachemanage rootapp
[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v1: apiv1app
/v2: apiv2app
[app:apiversions]
paste.app_factory = glance.api.versions:create_resource
[app:apiv1app]
paste.app_factory = glance.api.v1.router:API.factory
[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory
[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = /etc/glance/healthcheck_disable
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory
[filter:cachemanage]
paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
delay_auth_decision = true
[filter:gzip]
paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory
[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
hmac_keys = SECRET_KEY #DEPRECATED
enabled = yes #DEPRECATED
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = glance
oslo_config_program = glance-api
[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware:HTTPProxyToWSGI.factory
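Which of these pipelines is actually used is chosen by the ``flavor`` option in ``glance-api.conf``; for example, the following selects the keystone pipeline with caching and cache management:

```ini
# In glance-api.conf: selects [pipeline:glance-api-keystone+cachemanagement]
# from glance-api-paste.ini.
[paste_deploy]
flavor = keystone+cachemanagement
```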
The configuration options for an optional local image cache
are found in the glance-cache.conf
file.
[DEFAULT]
#
# From glance.cache
#
#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * image_property_quota
#
# (boolean value)
#allow_additional_image_properties = true
#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any negative
# value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_member_quota = 128
#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
# * ``allow_additional_image_properties``
#
# (integer value)
#image_property_quota = 128
#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_tag_quota = 128
#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_location_quota = 10
#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
# * glance.db.sqlalchemy.api
# * glance.db.registry.api
# * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
# * enable_v2_api
# * enable_v2_registry
#
# (string value)
#data_api = glance.db.sqlalchemy.api
#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
# * The value of this configuration option may not be greater than
# the value specified by ``api_limit_max``.
# * Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * api_limit_max
#
# (integer value)
# Minimum value: 1
#limit_param_default = 25
#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned are governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value in either case can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * limit_param_default
#
# (integer value)
# Minimum value: 1
#api_limit_max = 1000
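For example, to return 50 results per page by default while capping any explicit ``limit`` parameter at 500 (keeping the default below the cap, as required above):

```ini
[DEFAULT]
# Default page size and the hard upper bound clients may request.
limit_param_default = 50
api_limit_max = 500
```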
#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_multiple_locations
# * location_strategy
#
# (boolean value)
#show_image_direct_url = false
# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration option
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_image_direct_url
# * location_strategy
#
# (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false
#
# Maximum size of image a user can upload in bytes.
#
# Uploading an image larger than the size specified here results in an
# image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
# * This value should only be increased after careful
# consideration and must be set to a value less than or equal to
# 8 EiB (9223372036854775808).
# * This value must be set with careful consideration of the
# backend storage capacity. Setting this to a very low value
# may result in a large number of image failures. And, setting
# this to a very large value may result in faster consumption
# of storage. Hence, this must be set according to the nature of
# images created and storage capacity available.
#
# Possible values:
# * Any positive number less than or equal to 9223372036854775808
#
# (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776
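#
# For example, to cap uploads at 8 GiB (8 * 1024^3 bytes; an
# illustrative value, adjust to your storage capacity):
#
#image_size_cap = 8589934592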
#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
# * A string that is a valid concatenation of a non-negative integer
# representing the storage value and an optional string literal
# representing storage units as mentioned above.
#
# Related options:
# * None
#
# (string value)
#user_storage_quota = 0
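#
# For example, to limit each tenant to 50 GigaBytes of image storage
# using the unit literals described above (an illustrative value):
#
#user_storage_quota = 50GB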
#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
# * If this option is enabled, then ``enable_v1_registry`` must
# also be set to ``True`` to enable mandatory usage of Registry
# service with v1 API.
#
# * If this option is disabled, then the ``enable_v1_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v2_api``; both v1 and v2
# OpenStack Images APIs can be deployed independently of each
# other.
#
# * If deploying only the v2 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_registry
# * enable_v2_api
#
# (boolean value)
#enable_v1_api = true
#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
# * If this option is disabled, then the ``enable_v2_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v1_api``; both v1 and v2
# OpenStack Images APIs can be deployed independently of each
# other.
#
# * If deploying only the v1 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_registry
# * enable_v1_api
#
# (boolean value)
#enable_v2_api = true
#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
# * Use of Registry is mandatory in v1 API, so this option must
# be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
# * If deploying only the v2 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_api
#
# (boolean value)
#enable_v1_registry = true
#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
# * Use of Registry is optional in v2 API, so this option
# must only be enabled if both ``enable_v2_api`` is set to
# ``True`` and the ``data_api`` option is set to
# ``glance.db.registry.api``.
#
# * If deploying only the v1 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_api
# * data_api
#
# (boolean value)
#enable_v2_registry = true
#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
# * Valid hostname
# * Valid IP address
#
# Related options:
# * None
#
# (string value)
#pydev_worker_debug_host = localhost
#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678
#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES cipher to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
# * String value representing a valid AES key
#
# Related options:
# * None
#
# (string value)
#metadata_encryption_key = <None>
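#
# For example, a valid 32-character key can be generated by running
# the command ``openssl rand -hex 16``; this command is one
# illustrative way to produce such a key, and any random string of
# length 16, 24 or 32 works equally well.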
#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
# * An OpenSSL message digest algorithm identifier
#
# Related options:
# * None
#
# (string value)
#digest_algorithm = sha256
#
# The relative path to sqlite file database that will be used for image cache
# management.
#
# This is a relative path to the sqlite file database that tracks the age and
# usage statistics of image cache. The path is relative to image cache base
# directory, specified by the configuration option ``image_cache_dir``.
#
# This is a lightweight database with just one table.
#
# Possible values:
# * A valid relative path to sqlite file database
#
# Related options:
# * ``image_cache_dir``
#
# (string value)
#image_cache_sqlite_db = cache.db
#
# The driver to use for image cache management.
#
# This configuration option provides the flexibility to choose between the
# different image-cache drivers available. An image-cache driver is responsible
# for providing the essential functions of the image cache, such as writing
# images to and reading them from the cache, tracking the age and usage of
# cached images, listing cached images, fetching the size of the cache, queuing
# images for caching, and cleaning up the cache.
#
# The essential functions of a driver are defined in the base class
# ``glance.image_cache.drivers.base.Driver``. All image-cache drivers (existing
# and prospective) must implement this interface. Currently available drivers
# are ``sqlite`` and ``xattr``. These drivers primarily differ in the way they
# store the information about cached images:
# * The ``sqlite`` driver uses a sqlite database (which sits on every glance
# node locally) to track the usage of cached images.
# * The ``xattr`` driver uses the extended attributes of files to store this
# information. It also requires a filesystem that sets ``atime`` on the
# files when accessed.
#
# Possible values:
# * sqlite
# * xattr
#
# Related options:
# * None
#
# (string value)
# Allowed values: sqlite, xattr
#image_cache_driver = sqlite
#
# The upper limit on cache size, in bytes, after which the cache-pruner cleans
# up the image cache.
#
# NOTE: This is just a threshold for cache-pruner to act upon. It is NOT a
# hard limit beyond which the image cache would never grow. In fact, depending
# on how often the cache-pruner runs and how quickly the cache fills, the image
# cache can far exceed the size specified here very easily. Hence, care must be
# taken to appropriately schedule the cache-pruner and in setting this limit.
#
# Glance caches an image when it is downloaded. Consequently, the size of the
# image cache grows over time as the number of downloads increases. To keep the
# cache size from becoming unmanageable, it is recommended to run the
# cache-pruner as a periodic task. When the cache pruner is kicked off, it
# compares the current size of image cache and triggers a cleanup if the image
# cache grew beyond the size specified here. After the cleanup, the size of
# cache is less than or equal to size specified here.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#image_cache_max_size = 10737418240
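#
# For example, the cache-pruner could be scheduled via cron to run
# every half hour (an illustrative schedule and path):
#
#   */30 * * * * /usr/bin/glance-cache-pruner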
#
# The amount of time, in seconds, an incomplete image remains in the cache.
#
# Incomplete images are images for which download is in progress. Please see the
# description of configuration option ``image_cache_dir`` for more detail.
# Sometimes, due to various reasons, it is possible the download may hang and
# the incompletely downloaded image remains in the ``incomplete`` directory.
# This configuration option sets a time limit on how long the incomplete images
# should remain in the ``incomplete`` directory before they are cleaned up.
# Once an incomplete image spends more time than is specified here, it'll be
# removed by cache-cleaner on its next run.
#
# It is recommended to run cache-cleaner as a periodic task on the Glance API
# nodes to keep the incomplete images from occupying disk space.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#image_cache_stall_time = 86400
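#
# For example, the cache-cleaner could be scheduled via cron to run
# once a day (an illustrative schedule and path):
#
#   0 1 * * * /usr/bin/glance-cache-cleaner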
#
# Base directory for image cache.
#
# This is the location where image data is cached and served out of. All cached
# images are stored directly under this directory. This directory also contains
# three subdirectories, namely, ``incomplete``, ``invalid`` and ``queue``.
#
# The ``incomplete`` subdirectory is the staging area for downloading images. An
# image is first downloaded to this directory. When the image download is
# successful it is moved to the base directory. However, if the download fails,
# the partially downloaded image file is moved to the ``invalid`` subdirectory.
#
# The ``queue`` subdirectory is used for queuing images for download. This is
# used primarily by the cache-prefetcher, which can be scheduled as a periodic
# task like cache-pruner and cache-cleaner, to cache images ahead of their
# usage.
# Upon receiving the request to cache an image, Glance touches a file in the
# ``queue`` directory with the image id as the file name. The cache-prefetcher,
# when running, polls for the files in ``queue`` directory and starts
# downloading them in the order they were created. When the download is
# successful, the zero-sized file is deleted from the ``queue`` directory.
# If the download fails, the zero-sized file remains and it'll be retried the
# next time cache-prefetcher runs.
#
# Possible values:
# * A valid path
#
# Related options:
# * ``image_cache_sqlite_db``
#
# (string value)
#image_cache_dir = <None>
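#
# For example, with ``image_cache_dir`` set to
# ``/var/lib/glance/image-cache`` (an illustrative path), an image can
# be queued for the cache-prefetcher by touching an empty file named
# after the image ID:
#
#   $ touch /var/lib/glance/image-cache/queue/<image-id>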
#
# Address the registry server is hosted on.
#
# Possible values:
# * A valid IP or hostname
#
# Related options:
# * None
#
# (string value)
#registry_host = 0.0.0.0
#
# Port the registry server is listening on.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#registry_port = 9191
#
# Protocol to use for communication with the registry server.
#
# Provide a string value representing the protocol to use for
# communication with the registry server. By default, this option is
# set to ``http`` and the connection is not secure.
#
# This option can be set to ``https`` to establish a secure connection
# to the registry server. In this case, provide a key to use for the
# SSL connection using the ``registry_client_key_file`` option. Also
# include the CA file and cert file using the options
# ``registry_client_ca_file`` and ``registry_client_cert_file``
# respectively.
#
# Possible values:
# * http
# * https
#
# Related options:
# * registry_client_key_file
# * registry_client_cert_file
# * registry_client_ca_file
#
# (string value)
# Allowed values: http, https
#registry_client_protocol = http
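#
# For example, a secure connection to the registry server might be
# configured as follows (all file paths are illustrative):
#
#registry_client_protocol = https
#registry_client_key_file = /etc/glance/ssl/registry-key.pem
#registry_client_cert_file = /etc/glance/ssl/registry-cert.pem
#registry_client_ca_file = /etc/glance/ssl/registry-ca.pem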
#
# Absolute path to the private key file.
#
# Provide a string value representing a valid absolute path to the
# private key file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
# environment variable may be set to a filepath of the key file.
#
# Possible values:
# * String value representing a valid absolute path to the key
# file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
#registry_client_key_file = /etc/ssl/key/key-file.pem
#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
# environment variable may be set to a filepath of the certificate
# file.
#
# Possible values:
# * String value representing a valid absolute path to the
# certificate file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
#registry_client_cert_file = /etc/ssl/certs/file.crt
#
# Absolute path to the Certificate Authority file.
#
# Provide a string value representing a valid absolute path to the
# certificate authority file to use for establishing a secure
# connection to the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
# environment variable may be set to a filepath of the CA file.
# This option is ignored if the ``registry_client_insecure`` option
# is set to ``True``.
#
# Possible values:
# * String value representing a valid absolute path to the CA
# file.
#
# Related options:
# * registry_client_protocol
# * registry_client_insecure
#
# (string value)
#registry_client_ca_file = /etc/ssl/cafile/file.ca
#
# Set verification of the registry server certificate.
#
# Provide a boolean value to determine whether or not to validate
# SSL connections to the registry server. By default, this option
# is set to ``False`` and the SSL connections are validated.
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying ``--insecure`` on the command
# line when using glanceclient to talk to the API.
#
# Possible values:
# * True
# * False
#
# Related options:
# * registry_client_protocol
# * registry_client_ca_file
#
# (boolean value)
#registry_client_insecure = false
#
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never time out.
#
# Possible values:
# * Zero
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#registry_client_timeout = 600
# DEPRECATED: Whether to pass through the user token when making requests to the
# registry. To prevent failures with token expiration during large file uploads,
# it is recommended to set this parameter to False. If "use_user_token" is not
# in effect, then admin credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#use_user_token = true
# DEPRECATED: The administrator's user name. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_user = <None>
# DEPRECATED: The administrator's password. If "use_user_token" is not in effect,
# then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_password = <None>
# DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
# not in effect, then admin tenant name can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_tenant_name = <None>
# DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
# effect and using keystone auth, then URL of keystone can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_url = <None>
# DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
# in effect, then auth strategy can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_strategy = noauth
# DEPRECATED: The region for the authentication service. If "use_user_token" is
# not in effect and using keystone auth, then region name can be specified.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_region = <None>
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and Linux
# platform is used. This option is ignored if log_config_append is set. (boolean
# value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[glance_store]
#
# From glance.store
#
#
# List of enabled Glance stores.
#
# Register the storage backends to use for storing disk images
# as a comma separated list. The default stores enabled for
# storing disk images with Glance are ``file`` and ``http``.
#
# Possible values:
# * A comma separated list that could include:
# * file
# * http
# * swift
# * rbd
# * sheepdog
# * cinder
# * vmware
#
# Related Options:
# * default_store
#
# (list value)
#stores = file,http
#
# The default scheme to use for storing images.
#
# Provide a string value representing the default scheme to use for
# storing images. If not set, Glance uses ``file`` as the default
# scheme to store images with the ``file`` store.
#
# NOTE: The value given for this configuration option must be a valid
# scheme for a store registered with the ``stores`` configuration
# option.
#
# Possible values:
# * file
# * filesystem
# * http
# * https
# * swift
# * swift+http
# * swift+https
# * swift+config
# * rbd
# * sheepdog
# * cinder
# * vsphere
#
# Related Options:
# * stores
#
# (string value)
# Allowed values: file, filesystem, http, https, swift, swift+http, swift+https, swift+config, rbd, sheepdog, cinder, vsphere
#default_store = file
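#
# For example, to register the Ceph RBD backend in addition to the
# defaults and serve new images from it (an illustrative configuration;
# the rbd store also requires backend-specific options not shown here):
#
#stores = file,http,rbd
#default_store = rbd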
#
# Minimum interval in seconds to execute updating dynamic storage
# capabilities based on current backend status.
#
# Provide an integer value representing time in seconds to set the
# minimum interval before an update of dynamic storage capabilities
# for a storage backend can be attempted. Setting
# ``store_capabilities_update_min_interval`` does not mean updates
# occur periodically based on the set interval. Rather, an update is
# attempted only after this interval has elapsed, and only when an
# operation of the store is triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit: https://specs.openstack.org/openstack/glance-specs/specs/kilo/store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
# * Zero
# * Positive integer
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0
#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to look up the cinder endpoint from the service
# catalog in the current context. ``cinder_os_region_name``, if set, is taken
# into consideration to fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
# * A string of the following form:
# ``<service_type>:<service_name>:<endpoint_type>``
# At least ``service_type`` and ``endpoint_type`` should be specified.
# ``service_name`` can be omitted.
#
# Related options:
# * cinder_os_region_name
# * cinder_endpoint_template
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
#
# (string value)
#cinder_catalog_info = volumev2::publicURL
#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate cinder endpoint,
# instead of looking up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will be ignored.
#
# Possible values:
# * URL template string for cinder endpoint, where ``%%(tenant)s`` is
# replaced with the current tenant (project) name.
# For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
# * cinder_catalog_info
#
# (string value)
#cinder_endpoint_template = <None>
#
# Region name to lookup cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for determining the
# endpoint. If set, the lookup for cinder endpoint by this node is filtered to
# the specified region. It is useful when multiple regions are listed in the
# catalog. If this is not set, the endpoint is looked up from every region.
#
# Possible values:
# * A string that is a valid region name.
#
# Related options:
# * cinder_catalog_info
#
# (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name = <None>
#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via HTTPS endpoint. If the endpoint is HTTP, this value is
# ignored.
# ``cinder_api_insecure`` must be set to ``True`` to enable the verification.
#
# Possible values:
# * Path to a ca certificates file
#
# Related options:
# * cinder_api_insecure
#
# (string value)
#cinder_ca_certificates_file = <None>
#
# Number of cinderclient retries on failed http calls.
#
# When a call fails with an error, cinderclient retries the call up to the
# specified number of times after sleeping for a few seconds.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_http_retries = 3
#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the glance node to
# read/write the volume data, the volume's state is changed. For example, the
# newly created volume status changes from ``creating`` to ``available`` after
# the creation process is completed. This specifies the maximum time to wait for
# the status change. If a timeout occurs while waiting, or the status is changed
# to an unexpected value (e.g. ``error``), the image creation fails.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300
#
# Allow performing insecure SSL requests to cinder.
#
# If this option is set to True, HTTPS endpoint connection is verified using the
# CA certificates file specified by ``cinder_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * cinder_ca_certificates_file
#
# (boolean value)
#cinder_api_insecure = false
#
# The address where the cinder authentication service is listening.
#
# When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If any of these options is not set, the cinder endpoint is looked up
# from the service catalog, and the current context's user and project are used.
#
# Possible values:
# * A valid authentication service address, for example:
# ``http://openstack.example.org/identity/v2.0``
#
# Related options:
# * cinder_store_user_name
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_auth_address = <None>
#
# User name to authenticate against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
# * A valid user name
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_user_name = <None>
#
# Password for the user authenticating against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
# * A valid password for the user specified by ``cinder_store_user_name``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
#
# (string value)
#cinder_store_password = <None>
#
# Project name where the image volume is stored in cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the project of the current context is used.
#
# Possible values:
# * A valid project name
#
# Related options:
# * ``cinder_store_auth_address``
# * ``cinder_store_user_name``
# * ``cinder_store_password``
#
# (string value)
#cinder_store_project_name = <None>
#
# Path to the rootwrap configuration file to use for running commands as root.
#
# The cinder store requires root privileges to operate the image volumes (for
# connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
# The configuration file should allow the required commands by cinder store and
# os-brick library.
#
# Possible values:
# * Path to the rootwrap config file
#
# Related options:
# * None
#
# (string value)
#rootwrap_config = /etc/glance/rootwrap.conf
#
# Directory to which the filesystem backend store writes images.
#
# Upon start up, Glance creates the directory if it doesn't already
# exist and verifies write access for the user under which
# ``glance-api`` runs. If write access isn't available, a
# ``BadStoreConfiguration`` exception is raised and the filesystem
# store may not be available for adding new images.
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * A valid path to a directory
#
# Related options:
# * ``filesystem_store_datadirs``
# * ``filesystem_store_file_perm``
#
# (string value)
#filesystem_store_datadir = /var/lib/glance/images
#
# List of directories and their priorities to which the filesystem
# backend store writes images.
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. Priority is an integer that is concatenated to the
# directory path with a colon where a higher value indicates higher
# priority. When two directories have the same priority, the directory
# with the most free space is used. When no priority is specified, it
# defaults to zero.
#
# More information on configuring filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * List of strings of the following form:
# * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
# * ``filesystem_store_datadir``
# * ``filesystem_store_file_perm``
#
# (multi valued)
#filesystem_store_datadirs =
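#
# For example (hypothetical paths), to store images across a fast and a slow
# directory while preferring the fast one, the option can be repeated with a
# priority appended to each path:
#
#filesystem_store_datadirs = /mnt/ssd/glance/images:200
#filesystem_store_datadirs = /mnt/hdd/glance/images:100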
#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
# * A valid path to the store metadata file
#
# Related options:
# * None
#
# (string value)
#filesystem_store_metadata_file = <None>
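#
# For example, a metadata file with hypothetical values could contain the
# following JSON object:
#
# {"id": "fs-store-1", "mountpoint": "/var/lib/glance/images"}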
#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are
# intended to be given access to could be made a member of the group
# that owns the files created. Assigning a value less than or equal to
# zero for this configuration option signifies that no changes are made
# to the default permissions. This value will be decoded as an octal
# digit.
#
# For more information, please refer to the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
# * A valid file access permission
# * Zero
# * Any negative integer
#
# Related options:
# * None
#
# (integer value)
#filesystem_store_file_perm = 0
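#
# For example, to make image files group-readable (a hypothetical value;
# choose permissions appropriate to your deployment):
#
#filesystem_store_file_perm = 640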
#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * https_insecure
#
# (string value)
#https_ca_certificates_file = <None>
#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set.
# The remote server certificate will then be verified using the file
# specified using the ``https_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * https_ca_certificates_file
#
# (boolean value)
#https_insecure = true
#
# The http/https proxy information to be used to connect to the remote
# server.
#
# This configuration option specifies the http/https proxy information
# that should be used to connect to the remote server. The proxy
# information should be a key value pair of the scheme and proxy, for
# example, http:10.0.0.1:3128. You can also specify proxies for multiple
# schemes by separating the key value pairs with a comma, for example,
# http:10.0.0.1:3128, https:10.0.0.1:1080.
#
# Possible values:
# * A comma separated list of scheme:proxy pairs as described above
#
# Related options:
# * None
#
# (dict value)
#http_proxy_information =
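#
# For example (hypothetical proxy addresses), to set proxies for both schemes:
#
#http_proxy_information = http:10.0.0.1:3128,https:10.0.0.1:1080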
#
# Size, in megabytes, to chunk RADOS images into.
#
# Provide an integer value representing the size in megabytes to chunk
# Glance images into. The default chunk size is 8 megabytes. For optimal
# performance, the value should be a power of two.
#
# When Ceph's RBD object storage system is used as the storage backend
# for storing Glance images, the images are chunked into objects of the
# size set using this option. These chunked objects are then stored
# across the distributed block data store to use for Glance.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#rbd_store_chunk_size = 8
#
# RADOS pool in which images are stored.
#
# When RBD is used as the storage backend for storing Glance images, the
# images are stored by means of logical grouping of the objects (chunks
# of images) into a ``pool``. Each pool is defined with the number of
# placement groups it can contain. The default pool that is used is
# 'images'.
#
# More information on the RBD storage backend can be found here:
# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
#
# Possible Values:
# * A valid pool name
#
# Related options:
# * None
#
# (string value)
#rbd_store_pool = images
#
# RADOS user to authenticate as.
#
# This configuration option takes in the RADOS user to authenticate as.
# This is only needed when RADOS authentication is enabled and is
# applicable only if the user is using Cephx authentication. If the
# value for this option is not set by the user or is set to None, a
# default value will be chosen based on the ``client.`` section in the
# Ceph configuration file specified by ``rbd_store_ceph_conf``.
#
# Possible Values:
# * A valid RADOS user
#
# Related options:
# * rbd_store_ceph_conf
#
# (string value)
#rbd_store_user = <None>
#
# Ceph configuration file path.
#
# This configuration option takes in the path to the Ceph configuration
# file to be used. If the value for this option is not set by the user
# or is set to None, librados will locate the default configuration file
# which is located at /etc/ceph/ceph.conf. If using Cephx
# authentication, this file should include a reference to the right
# keyring in a ``client.<USER>`` section.
#
# Possible Values:
# * A valid path to a configuration file
#
# Related options:
# * rbd_store_user
#
# (string value)
#rbd_store_ceph_conf = /etc/ceph/ceph.conf
#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option takes the timeout value, in seconds, used
# when connecting to the Ceph cluster, i.e. the time to wait for
# glance-api before closing the connection. This prevents glance-api
# hangs during the connection to RBD. If the value for this option
# is set to less than or equal to 0, no timeout is set and the default
# librados value is used.
#
# Possible Values:
# * Any integer value
#
# Related options:
# * None
#
# (integer value)
#rados_connect_timeout = 0
#
# Chunk size for images to be stored in Sheepdog data store.
#
# Provide an integer value representing the size in mebibyte
# (1048576 bytes) to chunk Glance images into. The default
# chunk size is 64 mebibytes.
#
# When using Sheepdog distributed storage system, the images are
# chunked into objects of this size and then stored across the
# distributed data store to use for Glance.
#
# Chunk sizes, if a power of two, help avoid fragmentation and
# enable improved performance.
#
# Possible values:
# * Positive integer value representing size in mebibytes.
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 1
#sheepdog_store_chunk_size = 64
#
# Port number on which the sheep daemon will listen.
#
# Provide an integer value representing a valid port number on
# which you want the Sheepdog daemon to listen on. The default
# port is 7000.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages it receives on
# the port number set using ``sheepdog_store_port`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid port number (0 to 65535)
#
# Related Options:
# * sheepdog_store_address
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000
#
# Address to bind the Sheepdog daemon to.
#
# Provide a string value representing the address to bind the
# Sheepdog daemon to. The default address set for the 'sheep'
# is 127.0.0.1.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages directed to the
# address set using ``sheepdog_store_address`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid IPv4 address
# * A valid IPv6 address
# * A valid hostname
#
# Related Options:
# * sheepdog_store_port
#
# (string value)
#sheepdog_store_address = 127.0.0.1
#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_cacert
#
# (boolean value)
#swift_store_auth_insecure = false
#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * swift_store_auth_insecure
#
# (string value)
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt
#
# The region of Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
#
# When Glance uses Swift as the storage backend to store images
# for a specific tenant that has multiple endpoints, setting a
# Swift region with ``swift_store_region`` allows Glance to connect
# to Swift in the specified region rather than relying on
# single-region connectivity.
#
# This option can be configured for both single-tenant and
# multi-tenant storage.
#
# NOTE: Setting the region with ``swift_store_region`` is
# tenant-specific and is necessary ``only if`` the tenant has
# multiple endpoints across different regions.
#
# Possible values:
# * A string value representing a valid Swift region.
#
# Related Options:
# * None
#
# (string value)
#swift_store_region = RegionTwo
#
# The URL endpoint to use for Swift backend storage.
#
# Provide a string value representing the URL endpoint to use for
# storing Glance images in Swift store. By default, an endpoint
# is not set and the storage URL returned by ``auth`` is used.
# Setting an endpoint with ``swift_store_endpoint`` overrides the
# storage URL and is used for Glance image storage.
#
# NOTE: The URL should include the path up to, but excluding the
# container. The location of an object is obtained by appending
# the container and object to the configured URL.
#
# Possible values:
# * String value representing a valid URL path up to a Swift container
#
# Related Options:
# * None
#
# (string value)
#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name
#
# Endpoint Type of Swift service.
#
# This string value indicates the endpoint type to use to fetch the
# Swift endpoint. The endpoint type determines the actions the user will
# be allowed to perform, for instance, reading and writing to the Store.
# This setting is only used if swift_store_auth_version is greater than
# 1.
#
# Possible values:
# * publicURL
# * adminURL
# * internalURL
#
# Related options:
# * swift_store_endpoint
#
# (string value)
# Allowed values: publicURL, adminURL, internalURL
#swift_store_endpoint_type = publicURL
#
# Type of Swift service to use.
#
# Provide a string value representing the service type to use for
# storing images while using Swift backend storage. The default
# service type is set to ``object-store``.
#
# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
# this configuration option needs to be ``object-store``. If using
# a higher version of Keystone or a different auth scheme, this
# option may be modified.
#
# Possible values:
# * A string representing a valid service type for Swift storage.
#
# Related Options:
# * None
#
# (string value)
#swift_store_service_type = object-store
#
# Name of single container to store images/name prefix for multiple containers
#
# When a single container is being used to store images, this configuration
# option indicates the container within the Glance account to be used for
# storing all images. When multiple containers are used to store images, this
# will be the name prefix for all containers. Usage of single/multiple
# containers can be controlled using the configuration option
# ``swift_store_multiple_containers_seed``.
#
# When using multiple containers, the containers will be named after the value
# set for this configuration option with the first N chars of the image UUID
# as the suffix delimited by an underscore (where N is specified by
# ``swift_store_multiple_containers_seed``).
#
# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
# the container ``glance_fda``. All dashes in the UUID are included when
# creating the container name but do not count toward the character limit, so
# when N=10 the container name would be ``glance_fdae39a1-ba``.
#
# Possible values:
# * If using single container, this configuration option can be any string
# that is a valid swift container name in Glance's Swift account
# * If using multiple containers, this configuration option can be any
# string as long as it satisfies the container naming rules enforced by
# Swift. The value of ``swift_store_multiple_containers_seed`` should be
# taken into account as well.
#
# Related options:
# * ``swift_store_multiple_containers_seed``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (string value)
#swift_store_container = glance
#
# The size threshold, in MB, after which Glance will start segmenting image
# data.
#
# Swift has an upper limit on the size of a single uploaded object. By default,
# this is 5GB. To upload objects bigger than this limit, objects are segmented
# into multiple smaller objects that are tied together with a manifest file.
# For more detail, refer to
# http://docs.openstack.org/developer/swift/overview_large_objects.html
#
# This configuration option specifies the size threshold over which the Swift
# driver will start segmenting image data into multiple smaller files.
# Currently, the Swift driver only supports creating Dynamic Large Objects.
#
# NOTE: This should be set taking into account the large object limit
# enforced by the Swift cluster in use.
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
# enforced by the Swift cluster in use.
#
# Related options:
# * ``swift_store_large_object_chunk_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_size = 5120
#
# The maximum size, in MB, of the segments when image data is segmented.
#
# When image data is segmented to upload images that are larger than the limit
# enforced by the Swift cluster, image data is broken into segments that are no
# bigger than the size specified by this configuration option.
# Refer to ``swift_store_large_object_size`` for more detail.
#
# For example: if ``swift_store_large_object_size`` is 5GB and
# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
# segmented into 7 segments where the first six segments will be 1GB in size and
# the seventh segment will be 0.2GB.
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
# enforced by the Swift cluster in use.
#
# Related options:
# * ``swift_store_large_object_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_chunk_size = 200
#
# Create container, if it doesn't already exist, when uploading image.
#
# At the time of uploading an image, if the corresponding container doesn't
# exist, it will be created provided this configuration option is set to True.
# By default, it won't be created. This behavior is applicable for both single
# and multiple containers mode.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_create_container_on_put = false
#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode, which causes Glance images to be
# stored in tenant-specific Swift accounts. If this is disabled, Glance stores
# all images in its own account. More details about the multi-tenant store can
# be found at
# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_multi_tenant = false
#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more
# containers. When set to 0, all images will be stored in one single container.
# When set to an integer value between 1 and 32, multiple containers will be
# used to store images. This configuration option will determine how many
# containers are created. The total number of containers that will be used is
# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
# will be used to store images.
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
# multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
# * A non-negative integer less than or equal to 32
#
# Related options:
# * ``swift_store_container``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0
#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
# * A comma separated list of strings representing UUIDs of Keystone
# projects/tenants
#
# Related options:
# * None
#
# (list value)
#swift_store_admin_tenants =
#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
# * True
# * False
#
# Related Options:
# * None
#
# (boolean value)
#swift_store_ssl_compression = true
#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download). When set to a positive
# integer value, ``swift_store_retry_get_count`` ensures that the
# download is attempted this many more times upon a download failure
# before sending an error message.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_retry_get_count = 0
#
# Time in seconds defining the size of the window in which a new
# token may be requested before the current token is due to expire.
#
# Typically, the Swift storage driver fetches a new token upon the
# expiration of the current token to ensure continued access to
# Swift. However, some Swift transactions (like uploading image
# segments) may not recover well if the token expires on the fly.
#
# Hence, by fetching a new token before the current token expires,
# we make sure that the token does not expire, or come close to
# expiring, before a transaction is attempted. By default, the Swift
# storage driver requests a new token 60 seconds or less before the
# current token expires.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_expire_soon_interval = 60
#
# Use trusts for multi-tenant Swift store.
#
# This option instructs the Swift store to create a trust for each
# add/get request when the multi-tenant store is in use. Using trusts
# allows the Swift store to avoid problems that can be caused by an
# authentication token expiring during the upload or download of data.
#
# By default, ``swift_store_use_trusts`` is set to ``True`` (use of
# trusts is enabled). If set to ``False``, a user token is used for
# the Swift connection instead, eliminating the overhead of trust
# creation.
#
# NOTE: This option is considered only when
# ``swift_store_multi_tenant`` is set to ``True``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_multi_tenant
#
# (boolean value)
#swift_store_use_trusts = true
#
# Reference to default Swift account/backing store parameters.
#
# Provide a string value representing a reference to the default set
# of parameters required for using a Swift account/backing store for
# image storage. The default reference value for this configuration
# option is 'ref1'. This configuration option dereferences the
# parameters and facilitates image storage in the Swift storage backend
# every time a new image is added.
#
# Possible values:
# * A valid string value
#
# Related options:
# * None
#
# (string value)
#default_swift_reference = ref1
# DEPRECATED: Version of the authentication service to use. Valid versions are 2
# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_version' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_version = 2
# DEPRECATED: The address where the Swift authentication service is listening.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_address' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_address = <None>
# DEPRECATED: The user to authenticate against the Swift authentication service.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'user' in the Swift back-end configuration file is set instead.
#swift_store_user = <None>
# DEPRECATED: Auth key for the user authenticating against the Swift
# authentication service. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'key' in the Swift back-end configuration file is used
# to set the authentication key instead.
#swift_store_key = <None>
#
# Absolute path to the file containing the swift account(s)
# configurations.
#
# Include a string value representing the path to a configuration
# file that has references for each of the configured Swift
# account(s)/backing stores. By default, no file path is specified
# and customized Swift referencing is disabled. Configuring this
# option is highly recommended when using the Swift storage backend
# for image storage, as it avoids storing credentials in the database.
#
# Possible values:
# * String value representing an absolute path on the glance-api
# node
#
# Related options:
# * None
#
# (string value)
#swift_store_config_file = <None>
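#
# For example, with ``default_swift_reference = ref1``, the referenced file
# could contain a section like the following (hypothetical values; the
# 'auth_version', 'auth_address', 'user', and 'key' keys are the ones named
# by the deprecated options above):
#
# [ref1]
# auth_version = 3
# auth_address = http://keystone.example.org/identity/v3
# user = service:glance
# key = GLANCE_SWIFT_PASS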
#
# Address of the ESX/ESXi or vCenter Server target system.
#
# This configuration option sets the address of the ESX/ESXi or vCenter
# Server target system. This option is required when using the VMware
# storage backend. The address can contain an IP address (127.0.0.1) or
# a DNS name (www.my-domain.com).
#
# Possible Values:
# * A valid IPv4 or IPv6 address
# * A valid DNS name
#
# Related options:
# * vmware_server_username
# * vmware_server_password
#
# (string value)
#vmware_server_host = 127.0.0.1
#
# Server username.
#
# This configuration option takes the username for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is the username for a user with appropriate
# privileges
#
# Related options:
# * vmware_server_host
# * vmware_server_password
#
# (string value)
#vmware_server_username = root
#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is a password corresponding to the username
# specified using the "vmware_server_username" option
#
# Related options:
# * vmware_server_host
# * vmware_server_username
#
# (string value)
#vmware_server_password = vmware
#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection-related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10
#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option takes the sleep time, in seconds, for polling an
# ongoing async task as part of the VMware ESX/VC server API call.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5
#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
# * Any string that is a valid path to a directory
#
# Related options:
# * None
#
# (string value)
#vmware_store_image_dir = /openstack_glance
#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will be verified using the
# file specified by the "vmware_ca_file" option.
#
# Possible Values:
# * True
# * False
#
# Related options:
# * vmware_ca_file
#
# (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false
#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
#
# Possible Values:
# * Any string that is a valid absolute path to a CA file
#
# Related options:
# * vmware_insecure
#
# (string value)
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt
#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMware store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the datastore will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
# * Any string of the format:
# <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
# * None
#
# (multi valued)
#vmware_datastores =
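#
# NOTE: The values below are a hypothetical illustration of the
# <datacenter_path>:<datastore_name>:<optional_weight> format; the
# datacenter and datastore names are placeholders. With these
# settings, images are placed on ds1 (weight 100) while it has
# sufficient free space, and on ds2 (weight 75) otherwise:
#
#vmware_datastores = dc1:ds1:100
#vmware_datastores = dc1:ds2:75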
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
Configuration options for the glance database management tool
are found in the glance-manage.conf
file.
Note
Options set in glance-manage.conf
will override options of the same
section and name set in glance-registry.conf
and glance-api.conf
.
Similarly, options in glance-api.conf
will override options set in
glance-registry.conf
.
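For example, assuming these hypothetical settings, the glance database
management tool runs with debug logging enabled even though
glance-api.conf disables it:

glance-api.conf:
[DEFAULT]
debug = false

glance-manage.conf:
[DEFAULT]
debug = true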
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses a logging handler designed to watch the file system. When the log
# file is moved or removed, this handler opens a new log file with the
# specified path instantaneously. This makes sense only if the log_file
# option is specified and the Linux platform is used. This option is
# ignored if log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[database]
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
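# As an illustrative example (the user, password, and host below are
# placeholders for your deployment's values):
#connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance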
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
#
# From oslo.db.concurrency
#
# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false
Configuration for the Image service’s registry, which stores the metadata about
images, is found in the glance-registry.conf
file.
This file must be modified after installation.
[DEFAULT]
#
# From glance.registry
#
#
# Set the image owner to tenant or the authenticated user.
#
# Assign a boolean value to determine the owner of an image. When set to
# True, the owner of the image is the tenant. When set to False, the
# owner of the image will be the authenticated user issuing the request.
# Setting it to False makes the image private to the associated user;
# sharing with other users within the same tenant (or "project")
# requires explicit image sharing via image membership.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#owner_is_tenant = true
#
# Role used to identify an authenticated user as administrator.
#
# Provide a string value representing a Keystone role to identify an
# administrative user. Users with this role will be granted
# administrative privileges. The default value for this option is
# 'admin'.
#
# Possible values:
# * A string value which is a valid Keystone role
#
# Related options:
# * None
#
# (string value)
#admin_role = admin
#
# Allow limited access to unauthenticated users.
#
# Assign a boolean to determine API access for unauthenticated
# users. When set to False, the API cannot be accessed by
# unauthenticated users. When set to True, unauthenticated users can
# access the API with read-only privileges. However, this only applies
# when using ContextMiddleware.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#allow_anonymous_access = false
#
# Limit the request ID length.
#
# Provide an integer value to limit the length of the request ID to
# the specified length. The default value is 64. Users can change this
# to any integer value between 0 and 16384, keeping in mind that a
# larger value may flood the logs.
#
# Possible values:
# * Integer value between 0 and 16384
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#max_request_id_length = 64
#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * image_property_quota
#
# (boolean value)
#allow_additional_image_properties = true
#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any negative
# value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_member_quota = 128
#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
# * ``allow_additional_image_properties``
#
# (integer value)
#image_property_quota = 128
#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_tag_quota = 128
#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_location_quota = 10
#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
# * glance.db.sqlalchemy.api
# * glance.db.registry.api
# * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
# * enable_v2_api
# * enable_v2_registry
#
# (string value)
#data_api = glance.db.sqlalchemy.api
#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
# * The value of this configuration option may not be greater than
# the value specified by ``api_limit_max``.
# * Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * api_limit_max
#
# (integer value)
# Minimum value: 1
#limit_param_default = 25
#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned is governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value in either case can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * limit_param_default
#
# (integer value)
# Minimum value: 1
#api_limit_max = 1000
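#
# NOTE: As a hypothetical illustration of how the two options above
# interact, with the default values
#
#   limit_param_default = 25
#   api_limit_max = 1000
#
# a request without a ``limit`` parameter returns at most 25 results,
# a request with ``limit=500`` returns at most 500, and a request
# with ``limit=5000`` is trimmed down to 1000.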
#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_multiple_locations
# * location_strategy
#
# (boolean value)
#show_image_direct_url = false
# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration opt
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_image_direct_url
# * location_strategy
#
# (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false
#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
# * This value should only be increased after careful
# consideration and must be set less than or equal to
# 8 EiB (9223372036854775808).
# * This value must be set with careful consideration of the
# backend storage capacity. Setting this to a very low value
# may result in a large number of image failures. And, setting
# this to a very large value may result in faster consumption
# of storage. Hence, this must be set according to the nature of
# images created and storage capacity available.
#
# Possible values:
# * Any positive number less than or equal to 9223372036854775808
#
# (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776
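#
# NOTE: As a hypothetical example, the following caps image uploads
# at 10 GiB (10737418240 bytes):
#
#image_size_cap = 10737418240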
#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
# * A string that is a valid concatenation of a non-negative integer
# representing the storage value and an optional string literal
# representing storage units as mentioned above.
#
# Related options:
# * None
#
# (string value)
#user_storage_quota = 0
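#
# NOTE: The values below are illustrative examples only. Each sets a
# 100 GigaByte per-tenant quota; note the absence of a space between
# the value and the unit literal:
#
#user_storage_quota = 100GB
#user_storage_quota = 107374182400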
#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
# * If this option is enabled, then ``enable_v1_registry`` must
# also be set to ``True`` to enable mandatory usage of Registry
# service with v1 API.
#
# * If this option is disabled, then the ``enable_v1_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v2_api``, both v1 and v2
# OpenStack Images API can be deployed independent of each
# other.
#
# * If deploying only the v2 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_registry
# * enable_v2_api
#
# (boolean value)
#enable_v1_api = true
#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
# * If this option is disabled, then the ``enable_v2_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v1_api``, both v1 and v2
# OpenStack Images API can be deployed independent of each
# other.
#
# * If deploying only the v1 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_registry
# * enable_v1_api
#
# (boolean value)
#enable_v2_api = true
#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
# * Use of Registry is mandatory in v1 API, so this option must
# be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
# * If deploying only the v2 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_api
#
# (boolean value)
#enable_v1_registry = true
#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
# * Use of Registry is optional in v2 API, so this option
# must only be enabled if both ``enable_v2_api`` is set to
# ``True`` and the ``data_api`` option is set to
# ``glance.db.registry.api``.
#
# * If deploying only the v1 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_api
# * data_api
#
# (boolean value)
#enable_v2_registry = true
#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
# * Valid hostname
# * Valid IP address
#
# Related options:
# * None
#
# (string value)
#pydev_worker_debug_host = localhost
#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678
#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES key to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
# * String value representing a valid AES key
#
# Related options:
# * None
#
# (string value)
#metadata_encryption_key = <None>
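#
# NOTE: One possible way to generate a suitable random key, shown
# here as an example only, is the OpenSSL command line tool; this
# produces a 32-character hexadecimal string:
#
# $ openssl rand -hex 16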
#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
# * An OpenSSL message digest algorithm identifier
#
# Related options:
# * None
#
# (string value)
#digest_algorithm = sha256
#
# IP address to bind the glance servers to.
#
# Provide an IP address to bind the glance server to. The default
# value is ``0.0.0.0``.
#
# Edit this option to enable the server to listen on one particular
# IP address on the network card. This facilitates selection of a
# particular network interface for the server.
#
# Possible values:
# * A valid IPv4 address
# * A valid IPv6 address
#
# Related options:
# * None
#
# (string value)
#bind_host = 0.0.0.0
#
# Port number on which the server will listen.
#
# Provide a valid port number to bind the server's socket to. The
# server then listens on this port for incoming network messages. The
# default bind_port value for the API server is 9292 and for the
# registry server is 9191.
#
# Possible values:
# * A valid port number (0 to 65535)
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#bind_port = <None>
#
# Set the number of incoming connection requests.
#
# Provide a positive integer value to limit the number of requests in
# the backlog queue. The default queue size is 4096.
#
# An incoming connection to a TCP listener socket is queued before a
# connection can be established with the server. Setting the backlog
# for a TCP socket ensures a limited queue size for incoming traffic.
#
# Possible values:
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#backlog = 4096
#
# Set the wait time before a connection recheck.
#
# Provide a positive integer value representing time in seconds which
# is set as the idle wait time before a TCP keep alive packet can be
# sent to the host. The default value is 600 seconds.
#
# Setting ``tcp_keepidle`` helps verify at regular intervals that a
# connection is intact and prevents frequent TCP connection
# reestablishment.
#
# Possible values:
# * Positive integer value representing time in seconds
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#tcp_keepidle = 600
#
# Absolute path to the CA file.
#
# Provide a string value representing a valid absolute path to
# the Certificate Authority file to use for client authentication.
#
# A CA file typically contains necessary trusted certificates to
# use for the client authentication. This is essential to ensure
# that a secure connection is established to the server via the
# internet.
#
# Possible values:
# * Valid absolute path to the CA file
#
# Related options:
# * None
#
# (string value)
#ca_file = /etc/ssl/cafile
#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file which is required to start the API service
# securely.
#
# A certificate file is typically a public key container that includes
# the server's public key, the server name, other server information,
# and the signature that resulted from the verification process using
# the CA certificate. It is required to establish a secure connection.
#
# Possible values:
# * Valid absolute path to the certificate file
#
# Related options:
# * None
#
# (string value)
#cert_file = /etc/ssl/certs
#
# Absolute path to a private key file.
#
# Provide a string value representing a valid absolute path to a
# private key file which is required to establish the client-server
# connection.
#
# Possible values:
# * Absolute path to the private key file
#
# Related options:
# * None
#
# (string value)
#key_file = /etc/ssl/key/key-file.pem
# DEPRECATED: The HTTP header used to determine the scheme for the original
# request, even if it was removed by an SSL terminating proxy. Typical value is
# "HTTP_X_FORWARDED_PROTO". (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Use the http_proxy_to_wsgi middleware instead.
#secure_proxy_ssl_header = <None>
#
# Number of Glance worker processes to start.
#
# Provide a non-negative integer value to set the number of child
# process workers to service requests. By default, the number of CPUs
# available is set as the value for ``workers``.
#
# Each worker process is made to listen on the port set in the
# configuration file and contains a greenthread pool of size 1000.
#
# NOTE: Setting the number of workers to zero triggers the creation
# of a single API process with a greenthread pool of size 1000.
#
# Possible values:
# * 0
# * Positive integer value (typically equal to the number of CPUs)
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#workers = <None>
#
# Maximum line size of message headers.
#
# Provide an integer value representing a length to limit the size of
# message headers. The default value is 16384.
#
# NOTE: ``max_header_line`` may need to be increased when using large
# tokens (typically those generated by the Keystone v3 API with big
# service catalogs). Keep in mind, however, that larger values for
# ``max_header_line`` may flood the logs.
#
# Setting ``max_header_line`` to 0 sets no limit for the line size of
# message headers.
#
# Possible values:
# * 0
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#max_header_line = 16384
#
# Set keep alive option for HTTP over TCP.
#
# Provide a boolean value to determine sending of keep alive packets.
# If set to ``False``, the server returns the header
# "Connection: close". If set to ``True``, the server returns a
# "Connection: Keep-Alive" in its responses. This enables retention of
# the same TCP connection for HTTP conversations instead of opening a
# new one with each new request.
#
# This option must be set to ``False`` if the client socket connection
# needs to be closed explicitly after the response is received and
# read successfully by the client.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#http_keepalive = true
#
# Timeout for client connections' socket operations.
#
# Provide an integer value representing the time in seconds to wait
# before an idle incoming connection is closed. The default value is
# 900 seconds.
#
# The value zero implies wait forever.
#
# Possible values:
# * Zero
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#client_socket_timeout = 900
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %(asctime)s in log records. The default
# is shown below. This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses a logging handler designed to watch the file system. When the log
# file is moved or removed, this handler opens a new log file at the
# specified path instantaneously. It makes sense only if the log_file
# option is specified and the platform is Linux. This option is ignored
# if log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
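#
# Note that this list value replaces the default list rather than
# extending it. For example, to quiet amqp further while raising
# sqlalchemy to INFO (illustrative values):
#
# default_log_levels = amqp=ERROR,sqlalchemy=INFO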
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
#
# From oslo.messaging
#
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2
# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200
# ZeroMQ bind address. Should be a wildcard (*), an Ethernet interface, or
# an IP address. The "host" option should point or resolve to this
# address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
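#
# For example, a RabbitMQ transport URL with an illustrative host, user,
# and password:
#
# transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/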
# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit
# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack
[database]
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
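#
# For example, a connection pool sized for a small deployment
# (illustrative values):
#
# max_pool_size = 10
# max_overflow = 20
# pool_timeout = 30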
# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
#
# From oslo.db.concurrency
#
# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users. Unauthenticated
# clients are redirected to this endpoint to authenticate. Although this
# endpoint should ideally be unversioned, client support in the wild varies.
# If you're using a versioned v2 endpoint here, then this should *not* be the
# same endpoint the service user utilizes for validating tokens, because normal
# end users may not be able to reach that endpoint. (string value)
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# Number of times to retry the request when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3
# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if identity server requires client certificate (string value)
#certfile = <None>
# Required if identity server requires client certificate (string value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set to
# -1 to disable caching completely. (integer value)
#token_cache_time = 300
# Determines the frequency at which the list of revoked tokens is retrieved from
# the Identity service (in seconds). A high number of revocation events combined
# with a low cache duration may significantly reduce performance. Only valid for
# PKI tokens. (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached server.
# (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to:
# "disabled" to not check token binding; "permissive" (default) to validate
# binding information if the bind type is of a form known to the server and
# to ignore it if not; "strict", like "permissive" except that the token is
# rejected if the bind type is unknown; "required", where any form of token
# binding is needed to be allowed; or the name of a binding method that
# must be present in tokens. (string value)
#enforce_token_bind = permissive
# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false
# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
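#
# For example, a minimal section using memcached caching and the password
# auth plugin (illustrative endpoint and server names):
#
# auth_uri = http://controller:5000
# auth_type = password
# memcached_servers = controller:11211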
[matchmaker_redis]
#
# From oslo.messaging
#
# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1
# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379
# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =
# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1
# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2
# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30
# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10
# The deadline for an rpc reply message delivery. Only used when caller does not
# provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30
# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30
# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30
# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy' - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic' - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic
# Address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# Address prefix used when sending to any server in a group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc
# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify
# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast
# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast
# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-robin
# fashion across consumers. (string value)
#anycast_address = anycast
# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>
# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200
# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100
# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
[oslo_messaging_notifications]
#
# From oslo.messaging
#
# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop. (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
# be used. This option may not be available in future versions. (string value)
#kombu_compression = <None>
# How long to wait for a missing client before abandoning the attempt to
# send it its replies. This value should not be longer than
# rpc_response_timeout. (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than one
# RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost
# DEPRECATED: The RabbitMQ broker port where a single node is used. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672
# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port
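#
# Instead of the deprecated rabbit_host, rabbit_port, rabbit_userid, and
# rabbit_password options, a single transport URL in [DEFAULT] carries the
# same information (illustrative values):
#
# transport_url = rabbit://guest:guest@localhost:5672/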
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest
# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# How long to back off between retries when connecting to RabbitMQ.
# (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30
# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0
# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue. If
# you just want to make sure that all queues (except those with auto-generated
# names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
# '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically deleted.
# The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800
# Specifies the number of messages to prefetch. Setting to zero allows unlimited
# messages. (integer value)
#rabbit_qos_prefetch_count = 0
# Number of seconds after which the RabbitMQ broker is considered down if the
# heartbeat keep-alive fails (0 disables the heartbeat). EXPERIMENTAL (integer
# value)
#heartbeat_timeout_threshold = 60
# How many times during the heartbeat_timeout_threshold to check the heartbeat.
# (integer value)
#heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
# Maximum number of channels to allow (integer value)
#channel_max = <None>
# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>
# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3
# Enable SSL (boolean value)
#ssl = <None>
# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>
# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25
# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value)
#tcp_user_timeout = 0.25
# Set the delay for reconnecting to a host that has a connection error.
# (floating point value)
#host_connection_reconnect_delay = 0.25
# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single
# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30
# Maximum number of connections to create above `pool_max_size`. (integer value)
#pool_max_overflow = 0
# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30
# Lifetime of a connection (since creation) in seconds or None for no recycling.
# Expired connections are closed on acquire. (integer value)
#pool_recycle = 600
# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60
# Persist notification messages. (boolean value)
#notification_persistence = false
# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification
# Maximum number of unacknowledged messages that RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100
# Reconnecting retry count in case of a connectivity problem while sending a
# notification; -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1
# Reconnecting retry delay in case of a connectivity problem while sending a
# notification message. (floating point value)
#notification_retry_delay = 0.25
# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60
# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc
# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100
# Reconnecting retry count in case of a connectivity problem while sending a
# reply; -1 means infinite retry during rpc_timeout. (integer value)
#rpc_reply_retry_attempts = -1
# Reconnecting retry delay in case of a connectivity problem while sending a
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25
# Reconnecting retry count in case of a connectivity problem while sending an
# RPC message; -1 means infinite retry. If the actual retry attempt count is not
# 0, the RPC request could be processed more than once. (integer value)
#default_rpc_retry_attempts = -1
# Reconnecting retry delay in case of a connectivity problem while sending an
# RPC message. (floating point value)
#rpc_retry_delay = 0.25
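As an illustration, a deployment that enables mirrored queues and tunes heartbeats might combine several of the options above. The values shown are placeholders, not recommendations:

```ini
[oslo_messaging_rabbit]
# Mirror queues across RabbitMQ nodes; changing this on an existing
# deployment requires wiping the RabbitMQ database.
rabbit_ha_queues = true
# Consider the broker down after 60 seconds without a heartbeat reply.
heartbeat_timeout_threshold = 60
heartbeat_rate = 2
# Delete unused reply and fanout queues after 30 minutes.
rabbit_transient_queues_ttl = 1800
```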
[oslo_messaging_zmq]
#
# From oslo.messaging
#
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1 specifies
# an infinite linger period. The value of 0 specifies no linger period. Pending
# messages shall be discarded immediately when the socket is closed. Only
# supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about an existing
# target (< 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue and
# messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
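For example, a ZeroMQ deployment using the Redis matchmaker might set the following. The address and hostname are placeholders:

```ini
[oslo_messaging_zmq]
rpc_zmq_matchmaker = redis
# Bind to a specific interface; the "host" option should resolve to this
# address.
rpc_zmq_bind_address = 10.0.0.10
rpc_zmq_host = controller
```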
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
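A minimal `[oslo_policy]` override might look like this. The file and directory names are examples:

```ini
[oslo_policy]
policy_file = policy.json
# Additional policy snippets, relative to the configuration directory.
policy_dirs = policy.d
```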
[paste_deploy]
#
# From glance.registry
#
#
# Deployment flavor to use in the server application pipeline.
#
# Provide a string value representing the appropriate deployment
# flavor used in the server application pipeline. This is typically
# the partial name of a pipeline in the paste configuration file with
# the service name removed.
#
# For example, if your paste section name in the paste configuration
# file is [pipeline:glance-api-keystone], set ``flavor`` to
# ``keystone``.
#
# Possible values:
# * String value representing a partial pipeline name.
#
# Related Options:
# * config_file
#
# (string value)
#flavor = keystone
#
# Name of the paste configuration file.
#
# Provide a string value representing the name of the paste
# configuration file to use for configuring pipelines for
# server application deployments.
#
# NOTES:
# * Provide the name or the path relative to the glance directory
# for the paste configuration file and not the absolute path.
# * The sample paste configuration file shipped with Glance need
# not be edited in most cases as it comes with ready-made
# pipelines for all common deployment flavors.
#
# If no value is specified for this option, the ``paste.ini`` file
# with the prefix of the corresponding Glance service's configuration
# file name will be searched for in the known configuration
# directories. (For example, if this option is missing from or has no
# value set in ``glance-api.conf``, the service will look for a file
# named ``glance-api-paste.ini``.) If the paste configuration file is
# not found, the service will not start.
#
# Possible values:
# * A string value representing the name of the paste configuration
# file.
#
# Related Options:
# * flavor
#
# (string value)
#config_file = glance-api-paste.ini
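For example, to select the keystone pipeline from a paste configuration file that contains a `[pipeline:glance-registry-keystone]` section:

```ini
[paste_deploy]
flavor = keystone
config_file = glance-registry-paste.ini
```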
[profiler]
#
# From glance.registry
#
#
# Enables the profiling for all services on this node. Default value is False
# (fully disable the profiling feature).
#
# Possible values:
#
# * True: Enables the feature
# * False: Disables the feature. The profiling cannot be started via this
#   project's operations. If profiling is triggered by another project, this
#   project's part of the trace will be empty.
# (boolean value)
# Deprecated group/name - [profiler]/profiler_enabled
#enabled = false
#
# Enables SQL requests profiling in services. Default value is False (SQL
# requests won't be traced).
#
# Possible values:
#
# * True: Enables SQL requests profiling. Each SQL query will be part of the
#   trace and can then be analyzed for how much time was spent on it.
# * False: Disables SQL requests profiling. The spent time is only shown on a
# higher level of operations. Single SQL queries cannot be analyzed this
# way.
# (boolean value)
#trace_sqlalchemy = false
#
# Secret key(s) to use for encrypting context data for performance profiling.
# This string value should have the following format: <key1>[,<key2>,...<keyn>],
# where each key is some random string. A user who triggers the profiling via
# the REST API has to set one of these keys in the headers of the REST API call
# to include profiling results of this node for this particular project.
#
# Both "enabled" flag and "hmac_keys" config options should be set to enable
# profiling. Also, to generate correct profiling information across all services
# at least one key needs to be consistent between OpenStack projects. This
# ensures it can be used from client side to generate the trace, containing
# information from all possible resources. (string value)
#hmac_keys = SECRET_KEY
#
# Connection string for a notifier backend. Default value is messaging:// which
# sets the notifier to oslo_messaging.
#
# Examples of possible values:
#
# * messaging://: use oslo_messaging driver for sending notifications.
# (string value)
#connection_string = messaging://
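Putting the profiler options together, a node with profiling fully enabled might use the following. `SECRET_KEY` is a placeholder and must be consistent across services for cross-project traces:

```ini
[profiler]
enabled = true
trace_sqlalchemy = true
hmac_keys = SECRET_KEY
connection_string = messaging://
```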
The Image service’s middleware pipeline for its registry is found in the
glance-registry-paste.ini
file.
# Use this pipeline for no auth - DEFAULT
[pipeline:glance-registry]
pipeline = healthcheck osprofiler unauthenticated-context registryapp
# Use this pipeline for keystone auth
[pipeline:glance-registry-keystone]
pipeline = healthcheck osprofiler authtoken context registryapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-registry-trusted-auth]
pipeline = healthcheck osprofiler context registryapp
[app:registryapp]
paste.app_factory = glance.registry.api:API.factory
[filter:healthcheck]
paste.filter_factory = oslo_middleware:Healthcheck.factory
backends = disable_by_file
disable_by_file_path = /etc/glance/healthcheck_disable
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
hmac_keys = SECRET_KEY #DEPRECATED
enabled = yes #DEPRECATED
glance-scrubber
is a utility for the Image service that cleans up images that have been
deleted. Its configuration is stored in the glance-scrubber.conf
file.
[DEFAULT]
#
# From glance.scrubber
#
#
# Allow users to add additional/custom properties to images.
#
# Glance defines a standard set of properties (in its schema) that
# appear on every image. These properties are also known as
# ``base properties``. In addition to these properties, Glance
# allows users to add custom properties to images. These are known
# as ``additional properties``.
#
# By default, this configuration option is set to ``True`` and users
# are allowed to add additional properties. The number of additional
# properties that can be added to an image can be controlled via
# ``image_property_quota`` configuration option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * image_property_quota
#
# (boolean value)
#allow_additional_image_properties = true
#
# Maximum number of image members per image.
#
# This limits the maximum number of users an image can be shared with. Any
# negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_member_quota = 128
#
# Maximum number of properties allowed on an image.
#
# This enforces an upper limit on the number of additional properties an image
# can have. Any negative value is interpreted as unlimited.
#
# NOTE: This won't have any impact if additional properties are disabled. Please
# refer to ``allow_additional_image_properties``.
#
# Related options:
# * ``allow_additional_image_properties``
#
# (integer value)
#image_property_quota = 128
#
# Maximum number of tags allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_tag_quota = 128
#
# Maximum number of locations allowed on an image.
#
# Any negative value is interpreted as unlimited.
#
# Related options:
# * None
#
# (integer value)
#image_location_quota = 10
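Taken together, the quota options above might be set as follows. The values are illustrative; negative values mean unlimited:

```ini
[DEFAULT]
allow_additional_image_properties = true
image_member_quota = 128
image_property_quota = 256
image_tag_quota = 128
image_location_quota = 10
```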
#
# Python module path of data access API.
#
# Specifies the path to the API to use for accessing the data model.
# This option determines how the image catalog data will be accessed.
#
# Possible values:
# * glance.db.sqlalchemy.api
# * glance.db.registry.api
# * glance.db.simple.api
#
# If this option is set to ``glance.db.sqlalchemy.api`` then the image
# catalog data is stored in and read from the database via the
# SQLAlchemy Core and ORM APIs.
#
# Setting this option to ``glance.db.registry.api`` will force all
# database access requests to be routed through the Registry service.
# This avoids data access from the Glance API nodes for an added layer
# of security, scalability and manageability.
#
# NOTE: In v2 OpenStack Images API, the registry service is optional.
# In order to use the Registry API in v2, the option
# ``enable_v2_registry`` must be set to ``True``.
#
# Finally, when this configuration option is set to
# ``glance.db.simple.api``, image catalog data is stored in and read
# from an in-memory data structure. This is primarily used for testing.
#
# Related options:
# * enable_v2_api
# * enable_v2_registry
#
# (string value)
#data_api = glance.db.sqlalchemy.api
#
# The default number of results to return for a request.
#
# Responses to certain API requests, like list images, may return
# multiple items. The number of results returned can be explicitly
# controlled by specifying the ``limit`` parameter in the API request.
# However, if a ``limit`` parameter is not specified, this
# configuration value will be used as the default number of results to
# be returned for any API request.
#
# NOTES:
# * The value of this configuration option may not be greater than
# the value specified by ``api_limit_max``.
# * Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * api_limit_max
#
# (integer value)
# Minimum value: 1
#limit_param_default = 25
#
# Maximum number of results that could be returned by a request.
#
# As described in the help text of ``limit_param_default``, some
# requests may return multiple results. The number of results to be
# returned are governed either by the ``limit`` parameter in the
# request or the ``limit_param_default`` configuration option.
# The value, in either case, can't be greater than the absolute maximum
# defined by this configuration option. Anything greater than this
# value is trimmed down to the maximum value defined here.
#
# NOTE: Setting this to a very large value may slow down database
# queries and increase response times. Setting this to a
# very low value may result in poor user experience.
#
# Possible values:
# * Any positive integer
#
# Related options:
# * limit_param_default
#
# (integer value)
# Minimum value: 1
#api_limit_max = 1000
#
# Show direct image location when returning an image.
#
# This configuration option indicates whether to show the direct image
# location when returning image details to the user. The direct image
# location is where the image data is stored in backend storage. This
# image location is shown under the image property ``direct_url``.
#
# When multiple image locations exist for an image, the best location
# is displayed based on the location strategy indicated by the
# configuration option ``location_strategy``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_multiple_locations`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_multiple_locations
# * location_strategy
#
# (boolean value)
#show_image_direct_url = false
# DEPRECATED:
# Show all image locations when returning an image.
#
# This configuration option indicates whether to show all the image
# locations when returning image details to the user. When multiple
# image locations exist for an image, the locations are ordered based
# on the location strategy indicated by the configuration opt
# ``location_strategy``. The image locations are shown under the
# image property ``locations``.
#
# NOTES:
# * Revealing image locations can present a GRAVE SECURITY RISK as
# image locations can sometimes include credentials. Hence, this
# is set to ``False`` by default. Set this to ``True`` with
# EXTREME CAUTION and ONLY IF you know what you are doing!
# * If an operator wishes to avoid showing any image location(s)
# to the user, then both this option and
# ``show_image_direct_url`` MUST be set to ``False``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * show_image_direct_url
# * location_strategy
#
# (boolean value)
# This option is deprecated for removal since Newton.
# Its value may be silently ignored in the future.
# Reason: This option will be removed in the Ocata release because the same
# functionality can be achieved with greater granularity by using policies.
# Please see the Newton release notes for more information.
#show_multiple_locations = false
#
# Maximum size of image a user can upload in bytes.
#
# An image upload greater than the size mentioned here would result
# in an image creation failure. This configuration option defaults to
# 1099511627776 bytes (1 TiB).
#
# NOTES:
# * This value should only be increased after careful
# consideration and must be set less than or equal to
# 8 EiB (9223372036854775808).
# * This value must be set with careful consideration of the
# backend storage capacity. Setting this to a very low value
# may result in a large number of image failures. And, setting
# this to a very large value may result in faster consumption
# of storage. Hence, this must be set according to the nature of
# images created and storage capacity available.
#
# Possible values:
# * Any positive number less than or equal to 9223372036854775808
#
# (integer value)
# Minimum value: 1
# Maximum value: 9223372036854775808
#image_size_cap = 1099511627776
#
# Maximum amount of image storage per tenant.
#
# This enforces an upper limit on the cumulative storage consumed by all images
# of a tenant across all stores. This is a per-tenant limit.
#
# The default unit for this configuration option is Bytes. However, storage
# units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
# ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
# TeraBytes respectively. Note that there should not be any space between the
# value and unit. Value ``0`` signifies no quota enforcement. Negative values
# are invalid and result in errors.
#
# Possible values:
# * A string that is a valid concatenation of a non-negative integer
# representing the storage value and an optional string literal
# representing storage units as mentioned above.
#
# Related options:
# * None
#
# (string value)
#user_storage_quota = 0
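For example, to cap each tenant at 10 GB of cumulative image storage, note the case-sensitive unit literal and the absence of a space between value and unit:

```ini
[DEFAULT]
user_storage_quota = 10GB
```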
#
# Deploy the v1 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond to
# requests on registered endpoints conforming to the v1 OpenStack
# Images API.
#
# NOTES:
# * If this option is enabled, then ``enable_v1_registry`` must
# also be set to ``True`` to enable mandatory usage of Registry
# service with v1 API.
#
# * If this option is disabled, then the ``enable_v1_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v2_api``, both v1 and v2
# OpenStack Images API can be deployed independent of each
# other.
#
# * If deploying only the v2 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_registry
# * enable_v2_api
#
# (boolean value)
#enable_v1_api = true
#
# Deploy the v2 OpenStack Images API.
#
# When this option is set to ``True``, Glance service will respond
# to requests on registered endpoints conforming to the v2 OpenStack
# Images API.
#
# NOTES:
# * If this option is disabled, then the ``enable_v2_registry``
# option, which is enabled by default, is also recommended
# to be disabled.
#
# * This option is separate from ``enable_v1_api``, both v1 and v2
# OpenStack Images API can be deployed independent of each
# other.
#
# * If deploying only the v1 Images API, this option, which is
# enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_registry
# * enable_v1_api
#
# (boolean value)
#enable_v2_api = true
#
# Deploy the v1 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v1 API requests.
#
# NOTES:
# * Use of Registry is mandatory in v1 API, so this option must
# be set to ``True`` if the ``enable_v1_api`` option is enabled.
#
# * If deploying only the v2 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v1_api
#
# (boolean value)
#enable_v1_registry = true
#
# Deploy the v2 API Registry service.
#
# When this option is set to ``True``, the Registry service
# will be enabled in Glance for v2 API requests.
#
# NOTES:
# * Use of Registry is optional in v2 API, so this option
# must only be enabled if both ``enable_v2_api`` is set to
# ``True`` and the ``data_api`` option is set to
# ``glance.db.registry.api``.
#
# * If deploying only the v1 OpenStack Images API, this option,
# which is enabled by default, should be disabled.
#
# Possible values:
# * True
# * False
#
# Related options:
# * enable_v2_api
# * data_api
#
# (boolean value)
#enable_v2_registry = true
#
# Host address of the pydev server.
#
# Provide a string value representing the hostname or IP of the
# pydev server to use for debugging. The pydev server listens for
# debug connections on this address, facilitating remote debugging
# in Glance.
#
# Possible values:
# * Valid hostname
# * Valid IP address
#
# Related options:
# * None
#
# (string value)
#pydev_worker_debug_host = localhost
#
# Port number that the pydev server will listen on.
#
# Provide a port number to bind the pydev server to. The pydev
# process accepts debug connections on this port and facilitates
# remote debugging in Glance.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#pydev_worker_debug_port = 5678
#
# AES key for encrypting store location metadata.
#
# Provide a string value representing the AES key to use for
# encrypting Glance store metadata.
#
# NOTE: The AES key to use must be set to a random string of length
# 16, 24 or 32 bytes.
#
# Possible values:
# * String value representing a valid AES key
#
# Related options:
# * None
#
# (string value)
#metadata_encryption_key = <None>
#
# Digest algorithm to use for digital signature.
#
# Provide a string value representing the digest algorithm to
# use for generating digital signatures. By default, ``sha256``
# is used.
#
# To get a list of the available algorithms supported by the version
# of OpenSSL on your platform, run the command:
# ``openssl list-message-digest-algorithms``.
# Examples are 'sha1', 'sha256', and 'sha512'.
#
# NOTE: ``digest_algorithm`` is not related to Glance's image signing
# and verification. It is only used to sign the universally unique
# identifier (UUID) as a part of the certificate file and key file
# validation.
#
# Possible values:
# * An OpenSSL message digest algorithm identifier
#
# Related options:
# * None
#
# (string value)
#digest_algorithm = sha256
#
# The amount of time, in seconds, to delay image scrubbing.
#
# When delayed delete is turned on, an image is put into ``pending_delete``
# state upon deletion until the scrubber deletes its image data. Typically, soon
# after the image is put into ``pending_delete`` state, it is available for
# scrubbing. However, scrubbing can be delayed until a later point using this
# configuration option. This option denotes the time period an image spends in
# ``pending_delete`` state before it is available for scrubbing.
#
# It is important to realize that this has storage implications. The larger the
# ``scrub_time``, the longer the time to reclaim backend storage from deleted
# images.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * ``delayed_delete``
#
# (integer value)
# Minimum value: 0
#scrub_time = 0
#
# The size of thread pool to be used for scrubbing images.
#
# When there are a large number of images to scrub, it is beneficial to scrub
# images in parallel so that the scrub queue stays under control and the backend
# storage is reclaimed in a timely fashion. This configuration option denotes
# the maximum number of images to be scrubbed in parallel. The default value is
# one, which signifies serial scrubbing. Any value above one indicates parallel
# scrubbing.
#
# Possible values:
# * Any non-zero positive integer
#
# Related options:
# * ``delayed_delete``
#
# (integer value)
# Minimum value: 1
#scrub_pool_size = 1
#
# Turn on/off delayed delete.
#
# Typically when an image is deleted, the ``glance-api`` service puts the image
# into ``deleted`` state and deletes its data at the same time. Delayed delete
# is a feature in Glance that delays the actual deletion of image data until a
# later point in time (as determined by the configuration option
# ``scrub_time``).
# When delayed delete is turned on, the ``glance-api`` service puts the image
# into ``pending_delete`` state upon deletion and leaves the image data in the
# storage backend for the image scrubber to delete at a later time. The image
# scrubber will move the image into ``deleted`` state upon successful deletion
# of image data.
#
# NOTE: When delayed delete is turned on, image scrubber MUST be running as a
# periodic task to prevent the backend storage from filling up with undesired
# usage.
#
# Possible values:
# * True
# * False
#
# Related options:
# * ``scrub_time``
# * ``wakeup_time``
# * ``scrub_pool_size``
#
# (boolean value)
#delayed_delete = false
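A deployment using delayed delete might combine it with a scrub delay, for example (values are illustrative):

```ini
[DEFAULT]
delayed_delete = true
# Keep deleted image data for one hour before it becomes scrubbable.
scrub_time = 3600
scrub_pool_size = 4
```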
#
# Role used to identify an authenticated user as administrator.
#
# Provide a string value representing a Keystone role to identify an
# administrative user. Users with this role will be granted
# administrative privileges. The default value for this option is
# 'admin'.
#
# Possible values:
# * A string value which is a valid Keystone role
#
# Related options:
# * None
#
# (string value)
#admin_role = admin
#
# Send headers received from identity when making requests to
# registry.
#
# Typically, Glance registry can be deployed in multiple flavors,
# which may or may not include authentication. For example,
# ``trusted-auth`` is a flavor that does not require the registry
# service to authenticate the requests it receives. However, the
# registry service may still need a user context to be populated to
# serve the requests. This can be achieved by the caller
# (the Glance API usually) passing through the headers it received
# from authenticating with identity for the same request. The typical
# headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,
# ``X-Identity-Status`` and ``X-Service-Catalog``.
#
# Provide a boolean value to determine whether to send the identity
# headers to provide tenant and user information along with the
# requests to registry service. By default, this option is set to
# ``False``, which means that user and tenant information is not
# available readily. It must be obtained by authenticating. Hence, if
# this is set to ``False``, ``flavor`` must be set to a value that
# either includes authentication or an authenticated user context.
#
# Possible values:
# * True
# * False
#
# Related options:
# * flavor
#
# (boolean value)
#send_identity_headers = false
#
# Time interval, in seconds, between scrubber runs in daemon mode.
#
# Scrubber can be run either as a cron job or daemon. When run as a daemon, this
# configuration time specifies the time period between two runs. When the
# scrubber wakes up, it fetches and scrubs all ``pending_delete`` images that
# are available for scrubbing after taking ``scrub_time`` into consideration.
#
# If the wakeup time is set to a large number, there may be a large number of
# images to be scrubbed for each run. This also delays reclamation of the
# backend storage.
#
# Possible values:
# * Any non-negative integer
#
# Related options:
# * ``daemon``
# * ``delayed_delete``
#
# (integer value)
# Minimum value: 0
#wakeup_time = 300
#
# Run scrubber as a daemon.
#
# This boolean configuration option indicates whether scrubber should
# run as a long-running process that wakes up at regular intervals to
# scrub images. The wake up interval can be specified using the
# configuration option ``wakeup_time``.
#
# If this configuration option is set to ``False``, which is the
# default value, scrubber runs once to scrub images and exits. In this
# case, if the operator wishes to implement continuous scrubbing of
# images, scrubber needs to be scheduled as a cron job.
#
# Possible values:
# * True
# * False
#
# Related options:
# * ``wakeup_time``
#
# (boolean value)
#daemon = false
#
# Protocol to use for communication with the registry server.
#
# Provide a string value representing the protocol to use for
# communication with the registry server. By default, this option is
# set to ``http`` and the connection is not secure.
#
# This option can be set to ``https`` to establish a secure connection
# to the registry server. In this case, provide a key to use for the
# SSL connection using the ``registry_client_key_file`` option. Also
# include the CA file and cert file using the options
# ``registry_client_ca_file`` and ``registry_client_cert_file``
# respectively.
#
# Possible values:
# * http
# * https
#
# Related options:
# * registry_client_key_file
# * registry_client_cert_file
# * registry_client_ca_file
#
# (string value)
# Allowed values: http, https
#registry_client_protocol = http
#
# Absolute path to the private key file.
#
# Provide a string value representing a valid absolute path to the
# private key file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
# environment variable may be set to a filepath of the key file.
#
# Possible values:
# * String value representing a valid absolute path to the key
# file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
#registry_client_key_file = /etc/ssl/key/key-file.pem
#
# Absolute path to the certificate file.
#
# Provide a string value representing a valid absolute path to the
# certificate file to use for establishing a secure connection to
# the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
# environment variable may be set to a filepath of the certificate
# file.
#
# Possible values:
# * String value representing a valid absolute path to the
# certificate file.
#
# Related options:
# * registry_client_protocol
#
# (string value)
#registry_client_cert_file = /etc/ssl/certs/file.crt
#
# Absolute path to the Certificate Authority file.
#
# Provide a string value representing a valid absolute path to the
# certificate authority file to use for establishing a secure
# connection to the registry server.
#
# NOTE: This option must be set if ``registry_client_protocol`` is
# set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
# environment variable may be set to a filepath of the CA file.
# This option is ignored if the ``registry_client_insecure`` option
# is set to ``True``.
#
# Possible values:
# * String value representing a valid absolute path to the CA
# file.
#
# Related options:
# * registry_client_protocol
# * registry_client_insecure
#
# (string value)
#registry_client_ca_file = /etc/ssl/cafile/file.ca
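#
# Example (illustrative paths): enable TLS for registry client
# connections. The file locations below are placeholders; use the paths
# where your deployment keeps its TLS material.
#
# registry_client_protocol = https
# registry_client_key_file = /etc/glance/ssl/registry-key.pem
# registry_client_cert_file = /etc/glance/ssl/registry-cert.pem
# registry_client_ca_file = /etc/glance/ssl/ca.pem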
#
# Set verification of the registry server certificate.
#
# Provide a boolean value to determine whether or not to validate
# SSL connections to the registry server. By default, this option
# is set to ``False`` and the SSL connections are validated.
#
# If set to ``True``, the connection to the registry server is not
# validated via a certifying authority and the
# ``registry_client_ca_file`` option is ignored. This is the
# registry's equivalent of specifying --insecure on the command line
# using glanceclient for the API.
#
# Possible values:
# * True
# * False
#
# Related options:
# * registry_client_protocol
# * registry_client_ca_file
#
# (boolean value)
#registry_client_insecure = false
#
# Timeout value for registry requests.
#
# Provide an integer value representing the period of time in seconds
# that the API server will wait for a registry request to complete.
# The default value is 600 seconds.
#
# A value of 0 implies that a request will never timeout.
#
# Possible values:
# * Zero
# * Positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#registry_client_timeout = 600
# DEPRECATED: Whether to pass through the user token when making requests to the
# registry. To prevent failures with token expiration during big files upload,
# it is recommended to set this parameter to False. If "use_user_token" is not
# in effect, then admin credentials can be specified. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#use_user_token = true
# DEPRECATED: The administrator's user name. If "use_user_token" is not in
# effect, then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_user = <None>
# DEPRECATED: The administrator's password. If "use_user_token" is not in effect,
# then admin credentials can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_password = <None>
# DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
# not in effect, then admin tenant name can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#admin_tenant_name = <None>
# DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
# effect and using keystone auth, then URL of keystone can be specified. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_url = <None>
# DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
# in effect, then auth strategy can be specified. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_strategy = noauth
# DEPRECATED: The region for the authentication service. If "use_user_token" is
# not in effect and using keystone auth, then region name can be specified.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: This option was considered harmful and has been deprecated in M
# release. It will be removed in O release. For more information read OSSN-0060.
# Related functionality with uploading big images has been implemented with
# Keystone trusts support.
#auth_region = <None>
#
# Address the registry server is hosted on.
#
# Possible values:
# * A valid IP or hostname
#
# Related options:
# * None
#
# (string value)
#registry_host = 0.0.0.0
#
# Port the registry server is listening on.
#
# Possible values:
# * A valid port number
#
# Related options:
# * None
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#registry_port = 9191
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses a logging handler designed to watch the file system. When the log file
# is moved or removed, this handler will open a new log file at the specified
# path instantaneously. This only makes sense if the log_file option is
# specified and the platform is Linux. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append is
# set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
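#
# Example (illustrative values): write debug-level logs to a file
# instead of standard error. The directory and file name are
# placeholders for your deployment.
#
# debug = true
# log_dir = /var/log/glance
# log_file = api.log
# use_stderr = false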
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is
# DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[database]
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect the
# database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
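#
# Example (illustrative credentials): connect to a MySQL database. The
# host name, account name, and password are placeholders.
#
# connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance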
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set by
# the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of 0
# indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost. (boolean
# value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
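#
# Example (illustrative values): with the settings below, a failed
# database operation is retried with an increasing interval, starting
# at 1 second and capped at 10 seconds, for at most 20 attempts:
#
# use_db_reconnect = true
# db_retry_interval = 1
# db_inc_retry_interval = true
# db_max_retry_interval = 10
# db_max_retries = 20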
#
# From oslo.db.concurrency
#
# Enable the experimental use of thread pooling for all DB API calls (boolean
# value)
# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
#use_tpool = false
[glance_store]
#
# From glance.store
#
#
# List of enabled Glance stores.
#
# Register the storage backends to use for storing disk images
# as a comma separated list. The default stores enabled for
# storing disk images with Glance are ``file`` and ``http``.
#
# Possible values:
# * A comma separated list that could include:
# * file
# * http
# * swift
# * rbd
# * sheepdog
# * cinder
# * vmware
#
# Related Options:
# * default_store
#
# (list value)
#stores = file,http
#
# The default scheme to use for storing images.
#
# Provide a string value representing the default scheme to use for
# storing images. If not set, Glance uses ``file`` as the default
# scheme to store images with the ``file`` store.
#
# NOTE: The value given for this configuration option must be a valid
# scheme for a store registered with the ``stores`` configuration
# option.
#
# Possible values:
# * file
# * filesystem
# * http
# * https
# * swift
# * swift+http
# * swift+https
# * swift+config
# * rbd
# * sheepdog
# * cinder
# * vsphere
#
# Related Options:
# * stores
#
# (string value)
# Allowed values: file, filesystem, http, https, swift, swift+http, swift+https, swift+config, rbd, sheepdog, cinder, vsphere
#default_store = file
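#
# Example: register the Ceph RBD backend alongside the default stores
# and make it the default scheme for storing new images:
#
# stores = file,http,rbd
# default_store = rbd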
#
# Minimum interval in seconds to execute updating dynamic storage
# capabilities based on current backend status.
#
# Provide an integer value representing time in seconds to set the
# minimum interval before an update of dynamic storage capabilities
# for a storage backend can be attempted. Setting
# ``store_capabilities_update_min_interval`` does not mean that updates
# occur periodically based on the set interval. Rather, an update is
# performed only when this interval has elapsed and an operation of
# the store is triggered.
#
# By default, this option is set to zero and is disabled. Provide an
# integer value greater than zero to enable this option.
#
# NOTE: For more information on store capabilities and their updates,
# please visit:
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/store-capabilities.html
#
# For more information on setting up a particular store in your
# deployment and help with the usage of this feature, please contact
# the storage driver maintainers listed here:
# http://docs.openstack.org/developer/glance_store/drivers/index.html
#
# Possible values:
# * Zero
# * Positive integer
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#store_capabilities_update_min_interval = 0
#
# Information to match when looking for cinder in the service catalog.
#
# When the ``cinder_endpoint_template`` is not set and any of
# ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, ``cinder_store_password`` is not set,
# cinder store uses this information to lookup cinder endpoint from the service
# catalog in the current context. ``cinder_os_region_name``, if set, is taken
# into consideration to fetch the appropriate endpoint.
#
# The service catalog can be listed by the ``openstack catalog list`` command.
#
# Possible values:
# * A string of the following form:
# ``<service_type>:<service_name>:<endpoint_type>``
# At least ``service_type`` and ``endpoint_type`` should be specified.
# ``service_name`` can be omitted.
#
# Related options:
# * cinder_os_region_name
# * cinder_endpoint_template
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
#
# (string value)
#cinder_catalog_info = volumev2::publicURL
#
# Override service catalog lookup with template for cinder endpoint.
#
# When this option is set, this value is used to generate the cinder endpoint,
# instead of looking it up from the service catalog.
# This value is ignored if ``cinder_store_auth_address``,
# ``cinder_store_user_name``, ``cinder_store_project_name``, and
# ``cinder_store_password`` are specified.
#
# If this configuration option is set, ``cinder_catalog_info`` will be ignored.
#
# Possible values:
# * URL template string for cinder endpoint, where ``%%(tenant)s`` is
# replaced with the current tenant (project) name.
# For example: ``http://cinder.openstack.example.org/v2/%%(tenant)s``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
# * cinder_store_password
# * cinder_catalog_info
#
# (string value)
#cinder_endpoint_template = <None>
#
# Region name to lookup cinder service from the service catalog.
#
# This is used only when ``cinder_catalog_info`` is used for determining the
# endpoint. If set, the lookup for cinder endpoint by this node is filtered to
# the specified region. It is useful when multiple regions are listed in the
# catalog. If this is not set, the endpoint is looked up from every region.
#
# Possible values:
# * A string that is a valid region name.
#
# Related options:
# * cinder_catalog_info
#
# (string value)
# Deprecated group/name - [glance_store]/os_region_name
#cinder_os_region_name = <None>
#
# Location of a CA certificates file used for cinder client requests.
#
# The specified CA certificates file, if set, is used to verify cinder
# connections via HTTPS endpoint. If the endpoint is HTTP, this value is
# ignored.
# ``cinder_api_insecure`` must be set to ``True`` to enable the verification.
#
# Possible values:
# * Path to a ca certificates file
#
# Related options:
# * cinder_api_insecure
#
# (string value)
#cinder_ca_certificates_file = <None>
#
# Number of cinderclient retries on failed http calls.
#
# When a call fails with any error, cinderclient will retry the call up to the
# specified number of times after sleeping a few seconds.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_http_retries = 3
#
# Time period, in seconds, to wait for a cinder volume transition to
# complete.
#
# When the cinder volume is created, deleted, or attached to the glance node to
# read/write the volume data, the volume's state is changed. For example, the
# newly created volume status changes from ``creating`` to ``available`` after
# the creation process is completed. This specifies the maximum time to wait for
# the status change. If a timeout occurs while waiting, or the status is changed
# to an unexpected value (e.g. ``error``), the image creation fails.
#
# Possible values:
# * A positive integer
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 0
#cinder_state_transition_timeout = 300
#
# Allow performing insecure SSL requests to cinder.
#
# If this option is set to True, HTTPS endpoint connection is verified using the
# CA certificates file specified by ``cinder_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * cinder_ca_certificates_file
#
# (boolean value)
#cinder_api_insecure = false
#
# The address where the cinder authentication service is listening.
#
# When all of ``cinder_store_auth_address``, ``cinder_store_user_name``,
# ``cinder_store_project_name``, and ``cinder_store_password`` options are
# specified, the specified values are always used for the authentication.
# This is useful to hide the image volumes from users by storing them in a
# project/tenant specific to the image service. It also enables users to share
# the image volume among other projects under the control of glance's ACL.
#
# If any of these options is not set, the cinder endpoint is looked up
# from the service catalog, and the current context's user and project are used.
#
# Possible values:
# * A valid authentication service address, for example:
# ``http://openstack.example.org/identity/v2.0``
#
# Related options:
# * cinder_store_user_name
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_auth_address = <None>
#
# User name to authenticate against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
# * A valid user name
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_password
# * cinder_store_project_name
#
# (string value)
#cinder_store_user_name = <None>
#
# Password for the user authenticating against cinder.
#
# This must be used with all the following related options. If any of these are
# not specified, the user of the current context is used.
#
# Possible values:
# * A valid password for the user specified by ``cinder_store_user_name``
#
# Related options:
# * cinder_store_auth_address
# * cinder_store_user_name
# * cinder_store_project_name
#
# (string value)
#cinder_store_password = <None>
#
# Project name where the image volume is stored in cinder.
#
# This must be used with all the following related options. If any of
# these are not specified, the project of the current context is used.
#
# Possible values:
# * A valid project name
#
# Related options:
# * ``cinder_store_auth_address``
# * ``cinder_store_user_name``
# * ``cinder_store_password``
#
# (string value)
#cinder_store_project_name = <None>
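#
# Example (illustrative values): store image volumes in a dedicated
# service project instead of the caller's project. The auth address,
# user name, password, and project name are placeholders.
#
# cinder_store_auth_address = http://controller/identity/v2.0
# cinder_store_user_name = glance
# cinder_store_password = CINDER_STORE_PASS
# cinder_store_project_name = service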
#
# Path to the rootwrap configuration file to use for running commands as root.
#
# The cinder store requires root privileges to operate the image volumes (for
# connecting to iSCSI/FC volumes and reading/writing the volume data, etc.).
# The configuration file should allow the required commands by cinder store and
# os-brick library.
#
# Possible values:
# * Path to the rootwrap config file
#
# Related options:
# * None
#
# (string value)
#rootwrap_config = /etc/glance/rootwrap.conf
#
# Directory to which the filesystem backend store writes images.
#
# Upon start up, Glance creates the directory if it doesn't already
# exist and verifies write access for the user under which
# ``glance-api`` runs. If write access isn't available, a
# ``BadStoreConfiguration`` exception is raised and the filesystem
# store may not be available for adding new images.
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * A valid path to a directory
#
# Related options:
# * ``filesystem_store_datadirs``
# * ``filesystem_store_file_perm``
#
# (string value)
#filesystem_store_datadir = /var/lib/glance/images
#
# List of directories and their priorities to which the filesystem
# backend store writes images.
#
# The filesystem store can be configured to store images in multiple
# directories as opposed to using a single directory specified by the
# ``filesystem_store_datadir`` configuration option. When using
# multiple directories, each directory can be given an optional
# priority to specify the preference order in which they should
# be used. The priority is an integer that is appended to the
# directory path, separated by a colon, where a higher value indicates
# higher priority. When two directories have the same priority, the
# directory with the most free space is used. When no priority is
# specified, it defaults to zero.
#
# More information on configuring filesystem store with multiple store
# directories can be found at
# http://docs.openstack.org/developer/glance/configuring.html
#
# NOTE: This directory is used only when filesystem store is used as a
# storage backend. Either ``filesystem_store_datadir`` or
# ``filesystem_store_datadirs`` option must be specified in
# ``glance-api.conf``. If both options are specified, a
# ``BadStoreConfiguration`` will be raised and the filesystem store
# may not be available for adding new images.
#
# Possible values:
# * List of strings of the following form:
# * ``<a valid directory path>:<optional integer priority>``
#
# Related options:
# * ``filesystem_store_datadir``
# * ``filesystem_store_file_perm``
#
# (multi valued)
#filesystem_store_datadirs =
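#
# Example (illustrative paths): spread images across two directories.
# The directory with the higher priority (200) is preferred as long as
# it has sufficient free space:
#
# filesystem_store_datadirs = /mnt/ssd/glance:200
# filesystem_store_datadirs = /mnt/hdd/glance:100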
#
# Filesystem store metadata file.
#
# The path to a file which contains the metadata to be returned with
# any location associated with the filesystem store. The file must
# contain a valid JSON object. The object should contain the keys
# ``id`` and ``mountpoint``. The value for both keys should be a
# string.
#
# Possible values:
# * A valid path to the store metadata file
#
# Related options:
# * None
#
# (string value)
#filesystem_store_metadata_file = <None>
#
# File access permissions for the image files.
#
# Set the intended file access permissions for image data. This provides
# a way to enable other services, e.g. Nova, to consume images directly
# from the filesystem store. The users running the services that are
# intended to be given access can be made members of the group that
# owns the created files. Assigning a value less than or equal to
# zero for this configuration option signifies that no changes are made
# to the default permissions. This value will be decoded as an octal
# digit.
#
# For more information, please refer to the documentation at
# http://docs.openstack.org/developer/glance/configuring.html
#
# Possible values:
# * A valid file access permission
# * Zero
# * Any negative integer
#
# Related options:
# * None
#
# (integer value)
#filesystem_store_file_perm = 0
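#
# Example: the value below is decoded as the octal mode 0640
# (rw-r-----), allowing members of the group that owns the image files
# (for example, a group shared with the Nova service user) to read them
# directly:
#
# filesystem_store_file_perm = 640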
#
# Path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the remote server certificate. If
# this option is set, the ``https_insecure`` option will be ignored and
# the CA file specified will be used to authenticate the server
# certificate and establish a secure connection to the server.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * https_insecure
#
# (string value)
#https_ca_certificates_file = <None>
#
# Set verification of the remote server certificate.
#
# This configuration option takes in a boolean value to determine
# whether or not to verify the remote server certificate. If set to
# True, the remote server certificate is not verified. If the option is
# set to False, then the default CA truststore is used for verification.
#
# This option is ignored if ``https_ca_certificates_file`` is set.
# The remote server certificate will then be verified using the file
# specified using the ``https_ca_certificates_file`` option.
#
# Possible values:
# * True
# * False
#
# Related options:
# * https_ca_certificates_file
#
# (boolean value)
#https_insecure = true
#
# The http/https proxy information to be used to connect to the remote
# server.
#
# This configuration option specifies the http/https proxy information
# that should be used to connect to the remote server. The proxy
# information should be a key value pair of the scheme and proxy, for
# example, http:10.0.0.1:3128. You can also specify proxies for multiple
# schemes by separating the key value pairs with a comma, for example,
# http:10.0.0.1:3128, https:10.0.0.1:1080.
#
# Possible values:
# * A comma separated list of scheme:proxy pairs as described above
#
# Related options:
# * None
#
# (dict value)
#http_proxy_information =
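oslo.config performs the actual parsing of this dict value; the sketch below only illustrates how the comma separated scheme:proxy pairs from the example above break down, using a hypothetical helper.

```python
def parse_proxy_information(raw):
    """Parse comma separated 'scheme:proxy' pairs into a dict.

    Each pair is split at the first colon only, so proxy values such
    as '10.0.0.1:3128' keep their port.
    """
    proxies = {}
    for pair in raw.split(","):
        scheme, _, proxy = pair.strip().partition(":")
        proxies[scheme] = proxy
    return proxies

info = parse_proxy_information("http:10.0.0.1:3128, https:10.0.0.1:1080")
print(info)  # {'http': '10.0.0.1:3128', 'https': '10.0.0.1:1080'}
```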
#
# Size, in megabytes, to chunk RADOS images into.
#
# Provide an integer value representing the size in megabytes to chunk
# Glance images into. The default chunk size is 8 megabytes. For optimal
# performance, the value should be a power of two.
#
# When Ceph's RBD object storage system is used as the storage backend
# for storing Glance images, the images are chunked into objects of the
# size set using this option. These chunked objects are then stored
# across the distributed block data store to use for Glance.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#rbd_store_chunk_size = 8
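A value is a power of two exactly when it has a single bit set, which the usual bit trick checks. A quick sketch for validating candidate chunk sizes (helper name illustrative):

```python
def is_power_of_two(n):
    """True for positive powers of two (1, 2, 4, 8, ...)."""
    return n > 0 and (n & (n - 1)) == 0

# Candidate chunk sizes in megabytes; only powers of two are optimal.
print([size for size in (1, 4, 7, 8, 12, 16) if is_power_of_two(size)])  # [1, 4, 8, 16]
```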
#
# RADOS pool in which images are stored.
#
# When RBD is used as the storage backend for storing Glance images, the
# images are stored by means of logical grouping of the objects (chunks
# of images) into a ``pool``. Each pool is defined with the number of
# placement groups it can contain. The default pool that is used is
# 'images'.
#
# More information on the RBD storage backend can be found here:
# http://ceph.com/planet/how-data-is-stored-in-ceph-cluster/
#
# Possible Values:
# * A valid pool name
#
# Related options:
# * None
#
# (string value)
#rbd_store_pool = images
#
# RADOS user to authenticate as.
#
# This configuration option takes in the RADOS user to authenticate as.
# This is only needed when RADOS authentication is enabled and is
# applicable only if the user is using Cephx authentication. If the
# value for this option is not set by the user or is set to None, a
# default value will be chosen, which will be based on the ``client.``
# section in ``rbd_store_ceph_conf``.
#
# Possible Values:
# * A valid RADOS user
#
# Related options:
# * rbd_store_ceph_conf
#
# (string value)
#rbd_store_user = <None>
#
# Ceph configuration file path.
#
# This configuration option takes in the path to the Ceph configuration
# file to be used. If the value for this option is not set by the user
# or is set to None, librados will locate the default configuration file
# which is located at /etc/ceph/ceph.conf. If using Cephx
# authentication, this file should include a reference to the right
# keyring in a ``client.<USER>`` section.
#
# Possible Values:
# * A valid path to a configuration file
#
# Related options:
# * rbd_store_user
#
# (string value)
#rbd_store_ceph_conf = /etc/ceph/ceph.conf
#
# Timeout value for connecting to Ceph cluster.
#
# This configuration option takes in the timeout value in seconds used
# when connecting to the Ceph cluster, i.e. the time glance-api waits
# before closing the connection. This prevents glance-api
# hangups during the connection to RBD. If the value for this option
# is set to less than or equal to 0, no timeout is set and the default
# librados value is used.
#
# Possible Values:
# * Any integer value
#
# Related options:
# * None
#
# (integer value)
#rados_connect_timeout = 0
#
# Chunk size for images to be stored in Sheepdog data store.
#
# Provide an integer value representing the size in mebibyte
# (1048576 bytes) to chunk Glance images into. The default
# chunk size is 64 mebibytes.
#
# When using Sheepdog distributed storage system, the images are
# chunked into objects of this size and then stored across the
# distributed data store to use for Glance.
#
# Chunk sizes, if a power of two, help avoid fragmentation and
# enable improved performance.
#
# Possible values:
# * Positive integer value representing size in mebibytes.
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 1
#sheepdog_store_chunk_size = 64
#
# Port number on which the sheep daemon will listen.
#
# Provide an integer value representing a valid port number on
# which you want the Sheepdog daemon to listen on. The default
# port is 7000.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages it receives on
# the port number set using ``sheepdog_store_port`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid port number (0 to 65535)
#
# Related Options:
# * sheepdog_store_address
#
# (port value)
# Minimum value: 0
# Maximum value: 65535
#sheepdog_store_port = 7000
#
# Address to bind the Sheepdog daemon to.
#
# Provide a string value representing the address to bind the
# Sheepdog daemon to. The default address set for the 'sheep'
# is 127.0.0.1.
#
# The Sheepdog daemon, also called 'sheep', manages the storage
# in the distributed cluster by writing objects across the storage
# network. It identifies and acts on the messages directed to the
# address set using ``sheepdog_store_address`` option to store
# chunks of Glance images.
#
# Possible values:
# * A valid IPv4 address
# * A valid IPv6 address
# * A valid hostname
#
# Related Options:
# * sheepdog_store_port
#
# (string value)
#sheepdog_store_address = 127.0.0.1
#
# Set verification of the server certificate.
#
# This boolean determines whether or not to verify the server
# certificate. If this option is set to True, swiftclient won't check
# for a valid SSL certificate when authenticating. If the option is set
# to False, then the default CA truststore is used for verification.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_cacert
#
# (boolean value)
#swift_store_auth_insecure = false
#
# Path to the CA bundle file.
#
# This configuration option enables the operator to specify the path to
# a custom Certificate Authority file for SSL verification when
# connecting to Swift.
#
# Possible values:
# * A valid path to a CA file
#
# Related options:
# * swift_store_auth_insecure
#
# (string value)
#swift_store_cacert = /etc/ssl/certs/ca-certificates.crt
#
# The region of Swift endpoint to use by Glance.
#
# Provide a string value representing a Swift region where Glance
# can connect to for image storage. By default, there is no region
# set.
#
# When Glance uses Swift as the storage backend to store images
# for a specific tenant that has multiple endpoints, setting of a
# Swift region with ``swift_store_region`` allows Glance to connect
# to Swift in the specified region as opposed to a single region
# connectivity.
#
# This option can be configured for both single-tenant and
# multi-tenant storage.
#
# NOTE: Setting the region with ``swift_store_region`` is
# tenant-specific and is necessary ``only if`` the tenant has
# multiple endpoints across different regions.
#
# Possible values:
# * A string value representing a valid Swift region.
#
# Related Options:
# * None
#
# (string value)
#swift_store_region = RegionTwo
#
# The URL endpoint to use for Swift backend storage.
#
# Provide a string value representing the URL endpoint to use for
# storing Glance images in Swift store. By default, an endpoint
# is not set and the storage URL returned by ``auth`` is used.
# Setting an endpoint with ``swift_store_endpoint`` overrides the
# storage URL and is used for Glance image storage.
#
# NOTE: The URL should include the path up to, but excluding the
# container. The location of an object is obtained by appending
# the container and object to the configured URL.
#
# Possible values:
# * String value representing a valid URL path up to a Swift container
#
# Related Options:
# * None
#
# (string value)
#swift_store_endpoint = https://swift.openstack.example.org/v1/path_not_including_container_name
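Per the NOTE above, an object's location is the configured endpoint with the container and object name appended. A minimal sketch, where the account path AUTH_test is a made-up placeholder:

```python
def object_location(endpoint, container, obj):
    """Append container and object name to the configured endpoint URL."""
    return "/".join([endpoint.rstrip("/"), container, obj])

print(object_location("https://swift.openstack.example.org/v1/AUTH_test",
                      "glance", "fdae39a1-bac5-4238-aba4-69bcc726e848"))
# https://swift.openstack.example.org/v1/AUTH_test/glance/fdae39a1-bac5-4238-aba4-69bcc726e848
```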
#
# Endpoint Type of Swift service.
#
# This string value indicates the endpoint type to use to fetch the
# Swift endpoint. The endpoint type determines the actions the user will
# be allowed to perform, for instance, reading and writing to the Store.
# This setting is only used if swift_store_auth_version is greater than
# 1.
#
# Possible values:
# * publicURL
# * adminURL
# * internalURL
#
# Related options:
# * swift_store_endpoint
#
# (string value)
# Allowed values: publicURL, adminURL, internalURL
#swift_store_endpoint_type = publicURL
#
# Type of Swift service to use.
#
# Provide a string value representing the service type to use for
# storing images while using Swift backend storage. The default
# service type is set to ``object-store``.
#
# NOTE: If ``swift_store_auth_version`` is set to 2, the value for
# this configuration option needs to be ``object-store``. If using
# a higher version of Keystone or a different auth scheme, this
# option may be modified.
#
# Possible values:
# * A string representing a valid service type for Swift storage.
#
# Related Options:
# * None
#
# (string value)
#swift_store_service_type = object-store
#
# Name of single container to store images/name prefix for multiple containers
#
# When a single container is being used to store images, this configuration
# option indicates the container within the Glance account to be used for
# storing all images. When multiple containers are used to store images, this
# will be the name prefix for all containers. Usage of single/multiple
# containers can be controlled using the configuration option
# ``swift_store_multiple_containers_seed``.
#
# When using multiple containers, the containers will be named after the value
# set for this configuration option with the first N chars of the image UUID
# as the suffix delimited by an underscore (where N is specified by
# ``swift_store_multiple_containers_seed``).
#
# Example: if the seed is set to 3 and swift_store_container = ``glance``, then
# an image with UUID ``fdae39a1-bac5-4238-aba4-69bcc726e848`` would be placed in
# the container ``glance_fda``. All dashes in the UUID are included when
# creating the container name but do not count toward the character limit, so
# when N=10 the container name would be ``glance_fdae39a1-ba``.
#
# Possible values:
# * If using single container, this configuration option can be any string
# that is a valid swift container name in Glance's Swift account
# * If using multiple containers, this configuration option can be any
# string as long as it satisfies the container naming rules enforced by
# Swift. The value of ``swift_store_multiple_containers_seed`` should be
# taken into account as well.
#
# Related options:
# * ``swift_store_multiple_containers_seed``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (string value)
#swift_store_container = glance
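The prefix-plus-UUID naming rule above, including the treatment of dashes, can be sketched as follows (the helper is hypothetical, not Glance code):

```python
def container_for_image(prefix, image_uuid, seed):
    """Return the container name for an image under the naming rule above.

    seed == 0 means a single container named `prefix` is used. Otherwise
    the suffix is the first `seed` characters of the UUID, where dashes
    are kept in the name but do not count toward the limit.
    """
    if seed == 0:
        return prefix
    suffix = []
    counted = 0
    for ch in image_uuid:
        if counted == seed:
            break
        suffix.append(ch)
        if ch != "-":
            counted += 1
    return "%s_%s" % (prefix, "".join(suffix))

uuid = "fdae39a1-bac5-4238-aba4-69bcc726e848"
print(container_for_image("glance", uuid, 3))   # glance_fda
print(container_for_image("glance", uuid, 10))  # glance_fdae39a1-ba
```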
#
# The size threshold, in MB, after which Glance will start segmenting image
# data.
#
# Swift has an upper limit on the size of a single uploaded object. By default,
# this is 5GB. To upload objects bigger than this limit, objects are segmented
# into multiple smaller objects that are tied together with a manifest file.
# For more detail, refer to
# http://docs.openstack.org/developer/swift/overview_large_objects.html
#
# This configuration option specifies the size threshold over which the Swift
# driver will start segmenting image data into multiple smaller files.
# Currently, the Swift driver only supports creating Dynamic Large Objects.
#
# NOTE: This should be set taking into account the large object limit
# enforced by the Swift cluster in use.
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
# enforced by the Swift cluster in consideration.
#
# Related options:
# * ``swift_store_large_object_chunk_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_size = 5120
#
# The maximum size, in MB, of the segments when image data is segmented.
#
# When image data is segmented to upload images that are larger than the limit
# enforced by the Swift cluster, image data is broken into segments that are no
# bigger than the size specified by this configuration option.
# Refer to ``swift_store_large_object_size`` for more detail.
#
# For example: if ``swift_store_large_object_size`` is 5GB and
# ``swift_store_large_object_chunk_size`` is 1GB, an image of size 6.2GB will be
# segmented into 7 segments where the first six segments will be 1GB in size and
# the seventh segment will be 0.2GB.
#
# Possible values:
# * A positive integer that is less than or equal to the large object limit
# enforced by Swift cluster in consideration.
#
# Related options:
# * ``swift_store_large_object_size``
#
# (integer value)
# Minimum value: 1
#swift_store_large_object_chunk_size = 200
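The 6.2 GB example above works out as follows; the sketch uses 1 GB = 1000 MB purely for readability, and the helper name is hypothetical:

```python
def segment_sizes(image_size_mb, chunk_size_mb):
    """Split an image into segments no larger than chunk_size_mb (all MB)."""
    full, remainder = divmod(image_size_mb, chunk_size_mb)
    sizes = [chunk_size_mb] * full
    if remainder:
        sizes.append(remainder)
    return sizes

sizes = segment_sizes(6200, 1000)  # 6.2 GB image, 1 GB chunk size
print(len(sizes))  # 7
print(sizes)       # [1000, 1000, 1000, 1000, 1000, 1000, 200]
```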
#
# Create container, if it doesn't already exist, when uploading image.
#
# At the time of uploading an image, if the corresponding container doesn't
# exist, it will be created provided this configuration option is set to True.
# By default, it won't be created. This behavior is applicable for both single
# and multiple containers mode.
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_create_container_on_put = false
#
# Store images in tenant's Swift account.
#
# This enables multi-tenant storage mode which causes Glance images to be stored
# in tenant specific Swift accounts. If this is disabled, Glance stores all
# images in its own account. More details about the multi-tenant store
# can be found at
# https://wiki.openstack.org/wiki/GlanceSwiftTenantSpecificStorage
#
# Possible values:
# * True
# * False
#
# Related options:
# * None
#
# (boolean value)
#swift_store_multi_tenant = false
#
# Seed indicating the number of containers to use for storing images.
#
# When using a single-tenant store, images can be stored in one or more than one
# containers. When set to 0, all images will be stored in one single container.
# When set to an integer value between 1 and 32, multiple containers will be
# used to store images. This configuration option will determine how many
# containers are created. The total number of containers that will be used is
# equal to 16^N, so if this config option is set to 2, then 16^2=256 containers
# will be used to store images.
#
# Please refer to ``swift_store_container`` for more detail on the naming
# convention. More detail about using multiple containers can be found at
# https://specs.openstack.org/openstack/glance-specs/specs/kilo/swift-store-
# multiple-containers.html
#
# NOTE: This is used only when swift_store_multi_tenant is disabled.
#
# Possible values:
# * A non-negative integer less than or equal to 32
#
# Related options:
# * ``swift_store_container``
# * ``swift_store_multi_tenant``
# * ``swift_store_create_container_on_put``
#
# (integer value)
# Minimum value: 0
# Maximum value: 32
#swift_store_multiple_containers_seed = 0
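The 16^N growth follows from each counted suffix character being one of 16 hexadecimal digits; a quick check:

```python
def total_containers(seed):
    """Number of single-tenant containers for a given seed value."""
    return 16 ** seed  # seed 0 gives 16**0 == 1, i.e. a single container

for seed in (0, 1, 2, 3):
    print(seed, total_containers(seed))
```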
#
# List of tenants that will be granted admin access.
#
# This is a list of tenants that will be granted read/write access on
# all Swift containers created by Glance in multi-tenant mode. The
# default value is an empty list.
#
# Possible values:
# * A comma separated list of strings representing UUIDs of Keystone
# projects/tenants
#
# Related options:
# * None
#
# (list value)
#swift_store_admin_tenants =
#
# SSL layer compression for HTTPS Swift requests.
#
# Provide a boolean value to determine whether or not to compress
# HTTPS Swift requests for images at the SSL layer. By default,
# compression is enabled.
#
# When using Swift as the backend store for Glance image storage,
# SSL layer compression of HTTPS Swift requests can be set using
# this option. If set to False, SSL layer compression of HTTPS
# Swift requests is disabled. Disabling this option may improve
# performance for images which are already in a compressed format,
# for example, qcow2.
#
# Possible values:
# * True
# * False
#
# Related Options:
# * None
#
# (boolean value)
#swift_store_ssl_compression = true
#
# The number of times a Swift download will be retried before the
# request fails.
#
# Provide an integer value representing the number of times an image
# download must be retried before erroring out. The default value is
# zero (no retry on a failed image download). When set to a positive
# integer value, ``swift_store_retry_get_count`` ensures that the
# download is attempted this many more times upon a download failure
# before sending an error message.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_retry_get_count = 0
#
# Time in seconds defining the size of the window in which a new
# token may be requested before the current token is due to expire.
#
# Typically, the Swift storage driver fetches a new token upon the
# expiration of the current token to ensure continued access to
# Swift. However, some Swift transactions (like uploading image
# segments) may not recover well if the token expires on the fly.
#
# Hence, by fetching a new token before the current token expiration,
# we make sure that the token does not expire or is close to expiry
# before a transaction is attempted. By default, the Swift storage
# driver requests a new token 60 seconds or less before the
# current token expiration.
#
# Possible values:
# * Zero
# * Positive integer value
#
# Related Options:
# * None
#
# (integer value)
# Minimum value: 0
#swift_store_expire_soon_interval = 60
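The refresh window described above amounts to a simple comparison against the token's expiry time; a minimal sketch with hypothetical names:

```python
import time

def should_refresh_token(expires_at, window=60, now=None):
    """True when the token is within `window` seconds of expiry."""
    now = time.time() if now is None else now
    return expires_at - now <= window

# With 50 seconds left the driver fetches a new token; with 100 it does not.
print(should_refresh_token(expires_at=1000, now=950))  # True
print(should_refresh_token(expires_at=1000, now=900))  # False
```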
#
# Use trusts for multi-tenant Swift store.
#
# This option instructs the Swift store to create a trust for each
# add/get request when the multi-tenant store is in use. Using trusts
# allows the Swift store to avoid problems that can be caused by an
# authentication token expiring during the upload or download of data.
#
# By default, ``swift_store_use_trusts`` is set to ``True`` (use of
# trusts is enabled). If set to ``False``, a user token is used for
# the Swift connection instead, eliminating the overhead of trust
# creation.
#
# NOTE: This option is considered only when
# ``swift_store_multi_tenant`` is set to ``True``.
#
# Possible values:
# * True
# * False
#
# Related options:
# * swift_store_multi_tenant
#
# (boolean value)
#swift_store_use_trusts = true
#
# Reference to default Swift account/backing store parameters.
#
# Provide a string value representing a reference to the default set
# of parameters required for using swift account/backing store for
# image storage. The default reference value for this configuration
# option is 'ref1'. This configuration option dereferences the
# parameters and facilitates image storage in Swift storage backend
# every time a new image is added.
#
# Possible values:
# * A valid string value
#
# Related options:
# * None
#
# (string value)
#default_swift_reference = ref1
# DEPRECATED: Version of the authentication service to use. Valid versions are 2
# and 3 for keystone and 1 (deprecated) for swauth and rackspace. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_version' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_version = 2
# DEPRECATED: The address where the Swift authentication service is listening.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'auth_address' in the Swift back-end configuration file is
# used instead.
#swift_store_auth_address = <None>
# DEPRECATED: The user to authenticate against the Swift authentication service.
# (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'user' in the Swift back-end configuration file is set instead.
#swift_store_user = <None>
# DEPRECATED: Auth key for the user authenticating against the Swift
# authentication service. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason:
# The option 'key' in the Swift back-end configuration file is used
# to set the authentication key instead.
#swift_store_key = <None>
#
# Absolute path to the file containing the swift account(s)
# configurations.
#
# Include a string value representing the path to a configuration
# file that has references for each of the configured Swift
# account(s)/backing stores. By default, no file path is specified
# and customized Swift referencing is disabled. Configuring this
# option is highly recommended while using Swift storage backend for
# image storage as it avoids storage of credentials in the database.
#
# Possible values:
# * String value representing an absolute path on the glance-api
# node
#
# Related options:
# * None
#
# (string value)
#swift_store_config_file = <None>
#
# Address of the ESX/ESXi or vCenter Server target system.
#
# This configuration option sets the address of the ESX/ESXi or vCenter
# Server target system. This option is required when using the VMware
# storage backend. The address can contain an IP address (127.0.0.1) or
# a DNS name (www.my-domain.com).
#
# Possible Values:
# * A valid IPv4 or IPv6 address
# * A valid DNS name
#
# Related options:
# * vmware_server_username
# * vmware_server_password
#
# (string value)
#vmware_server_host = 127.0.0.1
#
# Server username.
#
# This configuration option takes the username for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is the username for a user with appropriate
# privileges
#
# Related options:
# * vmware_server_host
# * vmware_server_password
#
# (string value)
#vmware_server_username = root
#
# Server password.
#
# This configuration option takes the password for authenticating with
# the VMware ESX/ESXi or vCenter Server. This option is required when
# using the VMware storage backend.
#
# Possible Values:
# * Any string that is a password corresponding to the username
# specified using the "vmware_server_username" option
#
# Related options:
# * vmware_server_host
# * vmware_server_username
#
# (string value)
#vmware_server_password = vmware
#
# The number of VMware API retries.
#
# This configuration option specifies the number of times the VMware
# ESX/VC server API must be retried upon connection related issues or
# server API call overload. It is not possible to specify 'retry
# forever'.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_api_retry_count = 10
#
# Interval in seconds used for polling remote tasks invoked on VMware
# ESX/VC server.
#
# This configuration option takes in the sleep time in seconds for polling an
# ongoing async task as part of the VMware ESX/VC server API call.
#
# Possible Values:
# * Any positive integer value
#
# Related options:
# * None
#
# (integer value)
# Minimum value: 1
#vmware_task_poll_interval = 5
#
# The directory where the glance images will be stored in the datastore.
#
# This configuration option specifies the path to the directory where the
# glance images will be stored in the VMware datastore. If this option
# is not set, the default directory where the glance images are stored
# is openstack_glance.
#
# Possible Values:
# * Any string that is a valid path to a directory
#
# Related options:
# * None
#
# (string value)
#vmware_store_image_dir = /openstack_glance
#
# Set verification of the ESX/vCenter server certificate.
#
# This configuration option takes a boolean value to determine
# whether or not to verify the ESX/vCenter server certificate. If this
# option is set to True, the ESX/vCenter server certificate is not
# verified. If this option is set to False, then the default CA
# truststore is used for verification.
#
# This option is ignored if the "vmware_ca_file" option is set. In that
# case, the ESX/vCenter server certificate will then be verified using
# the file specified using the "vmware_ca_file" option.
#
# Possible Values:
# * True
# * False
#
# Related options:
# * vmware_ca_file
#
# (boolean value)
# Deprecated group/name - [glance_store]/vmware_api_insecure
#vmware_insecure = false
#
# Absolute path to the CA bundle file.
#
# This configuration option enables the operator to use a custom
# Certificate Authority file to verify the ESX/vCenter certificate.
#
# If this option is set, the "vmware_insecure" option will be ignored
# and the CA file specified will be used to authenticate the ESX/vCenter
# server certificate and establish a secure connection to the server.
#
# Possible Values:
# * Any string that is a valid absolute path to a CA file
#
# Related options:
# * vmware_insecure
#
# (string value)
#vmware_ca_file = /etc/ssl/certs/ca-certificates.crt
#
# The datastores where the image can be stored.
#
# This configuration option specifies the datastores where the image can
# be stored in the VMware store backend. This option may be specified
# multiple times for specifying multiple datastores. The datastore name
# should be specified after its datacenter path, separated by ":". An
# optional weight may be given after the datastore name, separated again
# by ":" to specify the priority. Thus, the required format becomes
# <datacenter_path>:<datastore_name>:<optional_weight>.
#
# When adding an image, the datastore with highest weight will be
# selected, unless there is not enough free space available in cases
# where the image size is already known. If no weight is given, it is
# assumed to be zero and the directory will be considered for selection
# last. If multiple datastores have the same weight, then the one with
# the most free space available is selected.
#
# Possible Values:
# * Any string of the format:
# <datacenter_path>:<datastore_name>:<optional_weight>
#
# Related options:
# * None
#
# (multi valued)
#vmware_datastores =
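The weight-based selection described above can be sketched like this; the datastore names are made up, and the free-space tie-break is omitted for brevity:

```python
def parse_datastore(entry):
    """Parse '<datacenter_path>:<datastore_name>[:<weight>]' (weight defaults to 0)."""
    parts = entry.split(":")
    if len(parts) == 3:
        datacenter, name, weight = parts
        return datacenter, name, int(weight)
    datacenter, name = parts
    return datacenter, name, 0

def pick_datastore(entries):
    """Select the datastore with the highest weight."""
    return max((parse_datastore(e) for e in entries), key=lambda d: d[2])

entries = ["dc1:fast-ssd:100", "dc1:bulk-sata:10", "dc2:archive"]
print(pick_datastore(entries))  # ('dc1', 'fast-ssd', 100)
```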
[oslo_concurrency]
#
# From oslo.concurrency
#
# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false
# Directory to use for lock files. For security, the specified directory should
# only be writable by the user running the processes that need locking. Defaults
# to environment variable OSLO_LOCK_PATH. If external locks are used, a lock
# path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
# glance-swift.conf.sample
#
# This file is an example config file when
# multiple swift accounts/backing stores are enabled.
#
# Specify the reference name in []
# For each section, specify the auth_address, user and key.
#
# WARNING:
# * If any of auth_address, user or key is not specified,
# the glance-api's Swift store will fail to be configured
[ref1]
user = tenant:user1
key = key1
auth_version = 2
auth_address = http://localhost:5000/v2.0
[ref2]
user = project_name:user_name2
key = key2
user_domain_id = default
project_domain_id = default
auth_version = 3
auth_address = http://localhost:5000/v3
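Glance loads this file itself, but the layout can be inspected with the standard library parser; a sketch using the ref1 section from the sample above:

```python
import configparser

# Use '=' as the only delimiter so values containing ':' stay intact.
sample = """
[ref1]
user = tenant:user1
key = key1
auth_version = 2
auth_address = http://localhost:5000/v2.0
"""

cfg = configparser.ConfigParser(delimiters=("=",))
cfg.read_string(sample)
print(cfg["ref1"]["user"])          # tenant:user1
print(cfg["ref1"]["auth_address"])  # http://localhost:5000/v2.0
```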
The ovf-metadata.json file specifies the OVF properties of interest
for the OVF processing task. Configure this to extract metadata from an
OVF and create corresponding properties on an image for the Image service.
Currently, the task supports only the extraction of properties from the
CIM_ProcessorAllocationSettingData namespace of the CIM schema.
{
    "cim_pasd": [
        "ProcessorArchitecture",
        "InstructionSet",
        "InstructionSetExtensionName"
    ]
}
The /etc/glance/policy.json
file defines additional access controls that
apply to the Image service.
{
    "context_is_admin": "role:admin",
    "default": "role:admin",
    "add_image": "",
    "delete_image": "",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "role:admin",
    "copy_from": "",
    "download_image": "",
    "upload_image": "",
    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",
    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",
    "manage_image_cache": "role:admin",
    "get_task": "role:admin",
    "get_tasks": "role:admin",
    "add_task": "role:admin",
    "modify_task": "role:admin",
    "deactivate": "",
    "reactivate": "",
    "get_metadef_namespace": "",
    "get_metadef_namespaces": "",
    "modify_metadef_namespace": "",
    "add_metadef_namespace": "",
    "get_metadef_object": "",
    "get_metadef_objects": "",
    "modify_metadef_object": "",
    "add_metadef_object": "",
    "list_metadef_resource_types": "",
    "get_metadef_resource_type": "",
    "add_metadef_resource_type_association": "",
    "get_metadef_property": "",
    "get_metadef_properties": "",
    "modify_metadef_property": "",
    "add_metadef_property": "",
    "get_metadef_tag": "",
    "get_metadef_tags": "",
    "modify_metadef_tag": "",
    "add_metadef_tag": "",
    "add_metadef_tags": ""
}
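Rules such as "role:admin" and the empty string are evaluated against the caller's credentials. The sketch below mimics only these two simple rule forms; oslo.policy implements the full rule language.

```python
def check_rule(rule, creds):
    """Evaluate a policy rule string against request credentials.

    Supports only '' (always allowed) and 'role:<name>' for illustration.
    """
    if rule == "":
        return True
    kind, _, value = rule.partition(":")
    if kind == "role":
        return value in creds.get("roles", [])
    return False

print(check_rule("role:admin", {"roles": ["admin"]}))   # True
print(check_rule("role:admin", {"roles": ["member"]}))  # False
print(check_rule("", {"roles": []}))                    # True
```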
# property-protections-policies.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=policies is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions. You may refer to policies defined
# in policy.json.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the reg ex specified below does not compile, then
# the glance-api service fails to start. (Guide for reg ex python compiler
# used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation (create, read, update, delete) is not specified or misspelt
# then the glance-api service fails to start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Only one policy can be specified per action. If multiple policies are
# specified, then the glance-api service fails to start.
[^x_.*]
create = default
read = default
update = default
delete = default
[.*]
create = context_is_admin
read = context_is_admin
update = context_is_admin
delete = context_is_admin
# property-protections-roles.conf.sample
#
# This file is an example config file for when
# property_protection_rule_format=roles is enabled.
#
# Specify regular expression for which properties will be protected in []
# For each section, specify CRUD permissions.
# The property rules will be applied in the order specified. Once
# a match is found the remaining property rules will not be applied.
#
# WARNING:
# * If the regular expression specified below does not compile, then
# the glance-api service will not start. (Guide for the Python regular
# expression syntax used:
# http://docs.python.org/2/library/re.html#regular-expression-syntax)
# * If an operation (create, read, update, delete) is not specified or is
# misspelled, then the glance-api service will not start.
# So, remember, with GREAT POWER comes GREAT RESPONSIBILITY!
#
# NOTE: Multiple roles can be specified for a given operation. These roles must
# be comma separated.
[^x_.*]
create = admin,member,_member_
read = admin,member,_member_
update = admin,member,_member_
delete = admin,member,_member_
[.*]
create = admin
read = admin
update = admin
delete = admin
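The warnings in the two sample files above can be checked mechanically before restarting the service. The sketch below is a hypothetical validator, not part of Glance; it uses Python's configparser and re modules to confirm that every section header compiles as a regular expression and that all four CRUD operations are present and correctly spelled:

```python
import configparser
import re

# Hypothetical helper, not part of Glance: checks the two failure modes
# named in the warnings above.
VALID_OPS = {"create", "read", "update", "delete"}

def validate_rules(text):
    """Return a list of problems found in a property-protections file."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    problems = []
    for section in parser.sections():
        try:
            re.compile(section)  # section headers must be valid regexes
        except re.error as exc:
            problems.append("bad regex %r: %s" % (section, exc))
        # every CRUD operation must be present and correctly spelled
        missing = VALID_OPS - set(parser.options(section))
        if missing:
            problems.append("section %r missing: %s"
                            % (section, ", ".join(sorted(missing))))
    return problems

sample = "[^x_.*]\ncreate = admin\nread = admin\nupdate = admin\ndelete = admin\n"
print(validate_rules(sample))  # → []
```

An empty result means the file would pass both checks; any entry in the list corresponds to a condition that would prevent glance-api from starting.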
Option = default value | (Type) Help string |
---|---|
[DEFAULT] secure_proxy_ssl_header = None |
(StrOpt) The HTTP header used to determine the scheme for the original request, even if it was removed by an SSL terminating proxy. Typical value is “HTTP_X_FORWARDED_PROTO”. |
[profiler] connection_string = messaging:// |
(StrOpt) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging://: use oslo_messaging driver for sending notifications. |
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] ca_file | None | /etc/ssl/cafile |
[DEFAULT] cert_file | None | /etc/ssl/certs |
[DEFAULT] key_file | None | /etc/ssl/key/key-file.pem |
[DEFAULT] pydev_worker_debug_host | None | localhost |
[DEFAULT] registry_client_ca_file | None | /etc/ssl/cafile/file.ca |
[DEFAULT] registry_client_cert_file | None | /etc/ssl/certs/file.crt |
[DEFAULT] registry_client_key_file | None | /etc/ssl/key/key-file.pem |
[image_format] disk_formats | ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, iso | ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso |
[paste_deploy] config_file | None | glance-api-paste.ini |
[paste_deploy] flavor | None | keystone |
[task] work_dir | None | /work_dir |
[taskflow_executor] conversion_format | None | raw |
Deprecated option | New Option |
---|---|
[DEFAULT] use_syslog |
None |
Compute relies on an external image service to store virtual machine images and maintain a catalog of available images. By default, Compute is configured to use the Image service (glance), which is currently the only supported image service.
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
The zaqar.conf configuration file is in the INI file format, as explained in Configuration file format. The file is located in the /etc/zaqar directory. If a zaqar.conf file exists in the ~/.zaqar directory, it is used instead of the one in the /etc/zaqar directory. When you manually install the Message service, you must generate zaqar.conf using the config samples generator located inside the Zaqar installation directory, and customize it according to your preferences.
To generate the sample configuration file zaqar/etc/zaqar.conf.sample:
# pip install tox
$ cd zaqar
$ tox -e genconfig
Where zaqar
is your Message service installation directory.
Then copy the Message service configuration sample to the /etc/zaqar directory:
# cp etc/zaqar.conf.sample /etc/zaqar/zaqar.conf
For a list of configuration options, see the tables in this guide.
Important
Do not specify quotes around configuration options.
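For example, the value assignment below is taken literally; quoting it would make the quotes part of the stored value. The [drivers] option shown is taken from the driver tables later in this guide:

```ini
[drivers]
# Correct: the value is the bare string wsgi
transport = wsgi

# Wrong: the stored value would be "wsgi", including the quotes
# transport = "wsgi"
```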
Configuration options are grouped by section. The Message service configuration file supports the following sections:
The Message service has two APIs: the HTTP REST API for the WSGI transport driver, and the Websocket API for the Websocket transport driver. The Message service can use only one transport driver at a time. See Drivers options for driver options.
The functionality and behavior of the APIs are defined by API versions. For example, the Websocket API v2 acts the same as the HTTP REST API v2. Currently, there are v1, v1.1, and v2 versions of the HTTP REST API, and only a v2 version of the Websocket API.
Permission control options in each API version:
- The admin_mode option, which controls the global permission to access the pools and flavors functionality.
- The policy_default_rule, policy_dirs, and policy_file options, which control the permissions to access each type of functionality for different types of users. See The policy.json file.
- The secret_key option, which defines a secret key to use for signing special URLs. These are called pre-signed URLs and give temporary permissions to outsiders of the system.

The Message service can be configured by changing the following options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
admin_mode = False |
(Boolean) Activate privileged endpoints. |
enable_deprecated_api_versions = |
(List) List of deprecated API versions to enable. |
unreliable = False |
(Boolean) Disable all reliability constraints. |
[notification] | |
max_notifier_workers = 10 |
(Integer) The maximum number of notification workers. |
require_confirmation = False |
(Boolean) Whether http/https/email subscriptions need to be confirmed before notification. |
smtp_command = /usr/sbin/sendmail -t -oi |
(String) The smtp command used to send email. The format is “command_name arg1 arg2”. |
[signed_url] | |
secret_key = None |
(String) Secret key used to encrypt pre-signed URLs. |
The transport and storage drivers used by the Message service are determined by the following options:
Configuration option = Default value | Description |
---|---|
[drivers] | |
management_store = mongodb |
(String) Storage driver to use as the management store. |
message_store = mongodb |
(String) Storage driver to use as the messaging store. |
transport = wsgi |
(String) Transport driver to use. |
The Message service supports several different storage back ends (storage drivers) for storing management information, messages and their metadata. The recommended storage back end is MongoDB. For information on how to specify the storage back ends, see Drivers options.
When the storage back end is chosen, the corresponding back-end options become
active. For example, if Redis is chosen as the management storage back end, the
options in [drivers:management_store:redis]
section become active.
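For example, selecting Redis as the management store might look like this in zaqar.conf. The option names are taken from the Redis driver tables below; the URI is illustrative:

```ini
[drivers]
management_store = redis

[drivers:management_store:redis]
# Options in this section are only read because of the choice above.
uri = redis://127.0.0.1:6379
max_reconnect_attempts = 10
```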
A pipeline is a set of stages needed to process a request. When a new request
comes to the Message service, first it goes through the transport layer
pipeline and then through one of the storage layer pipelines depending on the
type of operation of each particular request. For example, if the Message
service receives a request to make a queue-related operation, the storage
layer pipeline will be queue pipeline
. The Message service always has the
actual storage controller as the final storage layer pipeline stage.
By setting the options in the [storage]
section of zaqar.conf
,
you can add additional stages to these storage layer pipelines:
- zaqar.notification.notifier - sends notifications to the queue subscribers on each incoming message to the queue; in other words, it enables the notifications functionality.

The storage layer pipeline options are empty by default, because additional stages can affect the performance of the Message service. Depending on the stages, the sequence in which the option values are listed may or may not matter.
You can add external stages to the storage layer pipelines. For information on how to write and add your own external stages, see the Writing stages for the storage pipelines tutorial.
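The fall-through behavior described above can be sketched as follows. This is an illustrative model, not Zaqar's actual classes: each stage may handle a call and short-circuit, or return None to pass the request down the pipeline, with the storage controller always as the final stage:

```python
# Illustrative model only, not Zaqar's real classes.
class Notifier:
    """Hypothetical stage: observe the request, then fall through."""
    def post(self, queue, message):
        print("notify subscribers of %s" % queue)
        return None  # None means "not handled here"; try the next stage

class Controller:
    """Final stage: stands in for the actual storage driver method."""
    def post(self, queue, message):
        return "stored %r in %s" % (message, queue)

def consume(pipeline, method, *args):
    """Walk the stages in order until one returns a result."""
    for stage in pipeline:
        result = getattr(stage, method)(*args)
        if result is not None:
            return result

print(consume([Notifier(), Controller()], "post", "q1", "hi"))
```

Here the notifier stage sees every message before the controller stores it, which is the shape of the notifications feature described later in this guide.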
The following tables detail the available options:
Configuration option = Default value | Description |
---|---|
[storage] | |
claim_pipeline = |
(List) Pipeline to use for processing claim operations. This pipeline will be consumed before calling the storage driver’s controller methods. |
message_pipeline = |
(List) Pipeline to use for processing message operations. This pipeline will be consumed before calling the storage driver’s controller methods. |
queue_pipeline = |
(List) Pipeline to use for processing queue operations. This pipeline will be consumed before calling the storage driver’s controller methods. |
subscription_pipeline = |
(List) Pipeline to use for processing subscription operations. This pipeline will be consumed before calling the storage driver’s controller methods. |
Configuration option = Default value | Description |
---|---|
[drivers:management_store:mongodb] | |
database = zaqar |
(String) Database name. |
max_attempts = 1000 |
(Integer) Maximum number of times to retry a failed operation. Currently only used for retrying a message post. |
max_reconnect_attempts = 10 |
(Integer) Maximum number of times to retry an operation that failed due to a primary node failover. |
max_retry_jitter = 0.005 |
(Floating point) Maximum jitter interval, to be added to the sleep interval, in order to decrease probability that parallel requests will retry at the same instant. |
max_retry_sleep = 0.1 |
(Floating point) Maximum sleep interval between retries (actual sleep time increases linearly according to number of attempts performed). |
reconnect_sleep = 0.02 |
(Floating point) Base sleep interval between attempts to reconnect after a primary node failover. The actual sleep time increases exponentially (power of 2) each time the operation is retried. |
ssl_ca_certs = None |
(String) The ca_certs file contains a set of concatenated “certification authority” certificates, which are used to validate certificates passed from the other end of the connection. |
ssl_cert_reqs = CERT_REQUIRED |
(String) Specifies whether a certificate is required from the other side of the connection, and whether it will be validated if provided. It must be one of the three values CERT_NONE (certificates ignored), CERT_OPTIONAL (not required, but validated if provided), or CERT_REQUIRED (required and validated). If the value of this parameter is not CERT_NONE, then the ssl_ca_certs parameter must point to a file of CA certificates. |
ssl_certfile = None |
(String) The certificate file used to identify the local connection against mongod. |
ssl_keyfile = None |
(String) The private keyfile used to identify the local connection against mongod. If included with the certfile, then only the ssl_certfile is needed. |
uri = None |
(String) MongoDB connection URI. If an SSL connection is enabled, then ssl_keyfile, ssl_certfile, ssl_cert_reqs, and ssl_ca_certs need to be set accordingly. |
[drivers:message_store:mongodb] | |
database = zaqar |
(String) Database name. |
max_attempts = 1000 |
(Integer) Maximum number of times to retry a failed operation. Currently only used for retrying a message post. |
max_reconnect_attempts = 10 |
(Integer) Maximum number of times to retry an operation that failed due to a primary node failover. |
max_retry_jitter = 0.005 |
(Floating point) Maximum jitter interval, to be added to the sleep interval, in order to decrease probability that parallel requests will retry at the same instant. |
max_retry_sleep = 0.1 |
(Floating point) Maximum sleep interval between retries (actual sleep time increases linearly according to number of attempts performed). |
partitions = 2 |
(Integer) Number of databases across which to partition message data, in order to reduce writer lock %. DO NOT change this setting after initial deployment. It MUST remain static. Also, you should not need a large number of partitions to improve performance, esp. if deploying MongoDB on SSD storage. |
reconnect_sleep = 0.02 |
(Floating point) Base sleep interval between attempts to reconnect after a primary node failover. The actual sleep time increases exponentially (power of 2) each time the operation is retried. |
ssl_ca_certs = None |
(String) The ca_certs file contains a set of concatenated “certification authority” certificates, which are used to validate certificates passed from the other end of the connection. |
ssl_cert_reqs = CERT_REQUIRED |
(String) Specifies whether a certificate is required from the other side of the connection, and whether it will be validated if provided. It must be one of the three values CERT_NONE (certificates ignored), CERT_OPTIONAL (not required, but validated if provided), or CERT_REQUIRED (required and validated). If the value of this parameter is not CERT_NONE, then the ssl_ca_certs parameter must point to a file of CA certificates. |
ssl_certfile = None |
(String) The certificate file used to identify the local connection against mongod. |
ssl_keyfile = None |
(String) The private keyfile used to identify the local connection against mongod. If included with the certfile, then only the ssl_certfile is needed. |
uri = None |
(String) MongoDB connection URI. If an SSL connection is enabled, then ssl_keyfile, ssl_certfile, ssl_cert_reqs, and ssl_ca_certs need to be set accordingly. |
Configuration option = Default value | Description |
---|---|
[drivers:management_store:redis] | |
max_reconnect_attempts = 10 |
(Integer) Maximum number of times to retry an operation that failed due to a redis node failover. |
reconnect_sleep = 1.0 |
(Floating point) Base sleep interval between attempts to reconnect after a redis node failover. |
uri = redis://127.0.0.1:6379 |
(String) Redis connection URI, taking one of three forms. For a direct connection to a Redis server, use the form “redis://host[:port][?options]”, where port defaults to 6379 if not specified. For an HA master-slave Redis cluster using Redis Sentinel, use the form “redis://host1[:port1][,host2[:port2],...,hostN[:portN]][?options]”, where each host specified corresponds to an instance of redis-sentinel. In this form, the name of the Redis master used in the Sentinel configuration must be included in the query string as “master=<name>”. Finally, to connect to a local instance of Redis over a unix socket, you may use the form “redis:/path/to/redis.sock[?options]”. In all forms, the “socket_timeout” option may be specified in the query string. Its value is given in seconds. If not provided, “socket_timeout” defaults to 0.1 seconds. |
[drivers:message_store:redis] | |
max_reconnect_attempts = 10 |
(Integer) Maximum number of times to retry an operation that failed due to a redis node failover. |
reconnect_sleep = 1.0 |
(Floating point) Base sleep interval between attempts to reconnect after a redis node failover. |
uri = redis://127.0.0.1:6379 |
(String) Redis connection URI, taking one of three forms. For a direct connection to a Redis server, use the form “redis://host[:port][?options]”, where port defaults to 6379 if not specified. For an HA master-slave Redis cluster using Redis Sentinel, use the form “redis://host1[:port1][,host2[:port2],...,hostN[:portN]][?options]”, where each host specified corresponds to an instance of redis-sentinel. In this form, the name of the Redis master used in the Sentinel configuration must be included in the query string as “master=<name>”. Finally, to connect to a local instance of Redis over a unix socket, you may use the form “redis:/path/to/redis.sock[?options]”. In all forms, the “socket_timeout” option may be specified in the query string. Its value is given in seconds. If not provided, “socket_timeout” defaults to 0.1 seconds. |
Configuration option = Default value | Description |
---|---|
[drivers:management_store:sqlalchemy] | |
uri = sqlite:///:memory: |
(String) An sqlalchemy URL |
The Message service uses WSGI as the default transport mechanism. The following tables detail the available options:
Configuration option = Default value | Description |
---|---|
[transport] | |
default_claim_grace = 60 |
(Integer) Defines the message grace period in seconds. |
default_claim_ttl = 300 |
(Integer) Defines how long a message will be in claimed state. |
default_message_ttl = 3600 |
(Integer) Defines how long a message will be accessible. |
default_subscription_ttl = 3600 |
(Integer) Defines how long a subscription will be available. |
max_claim_grace = 43200 |
(Integer) Defines the maximum message grace period in seconds. |
max_claim_ttl = 43200 |
(Integer) Maximum length of a message in claimed state. |
max_message_ttl = 1209600 |
(Integer) Maximum amount of time a message will be available. |
max_messages_per_claim_or_pop = 20 |
(Integer) The maximum number of messages that can be claimed (OR) popped in a single request |
max_messages_per_page = 20 |
(Integer) Defines the maximum number of messages per page. |
max_messages_post_size = 262144 |
(Integer) Defines the maximum size of message posts. |
max_queue_metadata = 65536 |
(Integer) Defines the maximum amount of metadata in a queue. |
max_queues_per_page = 20 |
(Integer) Defines the maximum number of queues per page. |
max_subscriptions_per_page = 20 |
(Integer) Defines the maximum number of subscriptions per page. |
subscriber_types = http, https, mailto, trust+http, trust+https |
(List) Defines supported subscriber types. |
Configuration option = Default value | Description |
---|---|
[drivers:transport:wsgi] | |
bind = 127.0.0.1 |
(IP) Address on which the self-hosting server will listen. |
port = 8888 |
(Port number) Port on which the self-hosting server will listen. |
Configuration option = Default value | Description |
---|---|
[drivers:transport:websocket] | |
bind = 127.0.0.1 |
(IP) Address on which the self-hosting server will listen. |
external_port = None |
(Port number) Port on which the service is provided to the user. |
port = 9000 |
(Port number) Port on which the self-hosting server will listen. |
The notifications feature in the Message service can be enabled by adding the zaqar.notification.notifier stage to the message storage layer pipeline. To do this, ensure that zaqar.notification.notifier is added to the message_pipeline option in the [storage] section of zaqar.conf:
[storage]
message_pipeline = zaqar.notification.notifier
For more information about storage layer pipelines, see Storage drivers options.
All requests to the API may only be performed by an authenticated agent.
The preferred authentication system is the OpenStack Identity service, code-named keystone.
To authenticate, an agent issues an authentication request to an Identity service endpoint. In response to valid credentials, Identity service responds with an authentication token and a service catalog that contains a list of all services and endpoints available for the given token.
Multiple endpoints may be returned for Message service according to physical locations and performance/availability characteristics of different deployments.
Normally, Identity service middleware provides the X-Project-Id
header
based on the authentication token submitted by the Message service client.
For this to work, clients must specify a valid authentication token in the
X-Auth-Token
header for each request to the Message service API. The API
validates authentication tokens against Identity service before servicing each
request.
If authentication is not enabled, clients must provide the X-Project-Id
header themselves.
Configure the authentication and authorization strategy through these options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
auth_strategy = |
(String) Backend to use for authentication. For no auth, keep it empty. Existing strategies: keystone. See also the keystone_authtoken section below |
Configuration option = Default value | Description |
---|---|
[trustee] | |
auth_section = None |
(Unknown) Config Section from which to load plugin specific options |
auth_type = None |
(Unknown) Authentication type to load |
auth_url = None |
(Unknown) Authentication URL |
default_domain_id = None |
(Unknown) Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
default_domain_name = None |
(Unknown) Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
domain_id = None |
(Unknown) Domain ID to scope to |
domain_name = None |
(Unknown) Domain name to scope to |
password = None |
(Unknown) User’s password |
project_domain_id = None |
(Unknown) Domain ID containing project |
project_domain_name = None |
(Unknown) Domain name containing project |
project_id = None |
(Unknown) Project ID to scope to |
project_name = None |
(Unknown) Project name to scope to |
trust_id = None |
(Unknown) Trust ID |
user_domain_id = None |
(Unknown) User’s domain id |
user_domain_name = None |
(Unknown) User’s domain name |
user_id = None |
(Unknown) User id |
username = None |
(Unknown) Username |
The Message service supports pooling.
Pooling aims to make the Message service highly scalable without losing any of its flexibility by allowing users to use multiple back ends.
You can enable and configure pooling with the following options:
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
pooling = False |
(Boolean) Enable pooling across multiple storage backends. If pooling is enabled, the storage driver configuration is used to determine where the catalogue/control plane data is kept. |
[pooling:catalog] | |
enable_virtual_pool = False |
(Boolean) If enabled, the message_store will be used as the storage for the virtual pool. |
The corresponding log file of each Messaging service is stored in the
/var/log/zaqar/
directory of the host on which each service runs.
Log filename | Service that logs to the file |
---|---|
server.log |
Messaging service |
Option = default value | (Type) Help string |
---|---|
[DEFAULT] enable_deprecated_api_versions = |
(ListOpt) List of deprecated API versions to enable. |
[notification] max_notifier_workers = 10 |
(IntOpt) The maximum number of notification workers. |
[notification] require_confirmation = False |
(BoolOpt) Whether http/https/email subscriptions need to be confirmed before notification. |
[trustee] auth_section = None |
(Opt) Config Section from which to load plugin specific options |
[trustee] auth_type = None |
(Opt) Authentication type to load |
[trustee] auth_url = None |
(Opt) Authentication URL |
[trustee] default_domain_id = None |
(Opt) Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
[trustee] default_domain_name = None |
(Opt) Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. |
[trustee] domain_id = None |
(Opt) Domain ID to scope to |
[trustee] domain_name = None |
(Opt) Domain name to scope to |
[trustee] password = None |
(Opt) User’s password |
[trustee] project_domain_id = None |
(Opt) Domain ID containing project |
[trustee] project_domain_name = None |
(Opt) Domain name containing project |
[trustee] project_id = None |
(Opt) Project ID to scope to |
[trustee] project_name = None |
(Opt) Project name to scope to |
[trustee] trust_id = None |
(Opt) Trust ID |
[trustee] user_domain_id = None |
(Opt) User’s domain id |
[trustee] user_domain_name = None |
(Opt) User’s domain name |
[trustee] user_id = None |
(Opt) User id |
[trustee] username = None |
(Opt) Username |
Option | Previous default value | New default value |
---|---|---|
[transport] subscriber_types | http, https, mailto | http, https, mailto, trust+http, trust+https |
Deprecated option | New Option |
---|---|
[DEFAULT] use_syslog |
None |
The Message service is multi-tenant, fast, reliable, and scalable. It allows developers to share data between distributed application components performing different tasks, without losing messages or requiring each component to be always available.
The service features a RESTful API, which developers can use to send messages between various components of their SaaS and mobile applications, by using a variety of communication patterns.
The Message service provides the following key features:
The Message service contains the following components:
To configure your Message service installation, you must define configuration options in these files:
- zaqar.conf. Contains most of the Message service configuration options. Resides in the /etc/zaqar directory. If there is a zaqar.conf file in the ~/.zaqar directory, it is used instead of the one in the /etc/zaqar directory.
- policy.json. Contains RBAC policy for all actions. Only applies to API v2. Resides in the /etc/zaqar directory. If there is a policy.json file in the ~/.zaqar directory, it is used instead of the one in the /etc/zaqar directory. See The policy.json file.

Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
The options and descriptions listed in this introduction are auto generated from the code in the Networking service project, which provides software-defined networking between VMs run in Compute. The list contains common options, while the subsections list the options for the various networking plug-ins.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
agent_down_time = 75 |
(Integer) Seconds to regard the agent is down; should be at least twice report_interval, to be sure the agent is down for good. |
allow_automatic_dhcp_failover = True |
(Boolean) Automatically remove networks from offline DHCP agents. |
allow_automatic_l3agent_failover = False |
(Boolean) Automatically reschedule routers from offline L3 agents to online L3 agents. |
api_workers = None |
(Integer) Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best performance. |
auth_ca_cert = None |
(String) Certificate Authority public key (CA cert) file for ssl |
auth_strategy = keystone |
(String) The type of authentication to use |
base_mac = fa:16:3e:00:00:00 |
(String) The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. |
bind_host = 0.0.0.0 |
(String) The host IP to bind to |
bind_port = 9696 |
(Port number) The port to bind to |
cache_url = |
(String) DEPRECATED: URL to connect to the cache back end. This option is deprecated in the Newton release and will be removed. Please add a [cache] group for oslo.cache in your neutron.conf and add “enable” and “backend” options in this section. |
core_plugin = None |
(String) The core plugin Neutron will use |
default_availability_zones = |
(List) Default value of availability zone hints. The availability zone aware schedulers use this when the resources availability_zone_hints is empty. Multiple availability zones can be specified by a comma separated string. This value can be empty. In this case, even if availability_zone_hints for a resource is empty, availability zone is considered for high availability while scheduling the resource. |
dhcp_agent_notification = True |
(Boolean) Allow sending resource operation notification to DHCP agent |
dhcp_agents_per_network = 1 |
(Integer) Number of DHCP agents scheduled to host a tenant network. If this number is greater than 1, the scheduler automatically assigns multiple DHCP agents for a given tenant network, providing high availability for DHCP service. |
dhcp_broadcast_reply = False |
(Boolean) Use broadcast in DHCP replies. |
dhcp_confs = $state_path/dhcp |
(String) Location to store DHCP server config files. |
dhcp_domain = openstacklocal |
(String) DEPRECATED: Domain to use for building the hostnames. This option is deprecated. It has been moved to neutron.conf as dns_domain. It will be removed in a future release. |
dhcp_lease_duration = 86400 |
(Integer) DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite lease times. |
dhcp_load_type = networks |
(String) Representing the resource type whose load is being reported by the agent. This can be “networks”, “subnets” or “ports”. When specified (default is networks), the server will extract the particular load sent as part of its agent configuration object from the agent report state, which is the number of resources being consumed, at every report_interval. dhcp_load_type can be used in combination with network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler. When the network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured to represent the choice for the resource being balanced. Example: dhcp_load_type=networks |
dns_domain = openstacklocal |
(String) Domain to use for building the hostnames |
enable_new_agents = True |
(Boolean) Agent starts with admin_state_up=False when enable_new_agents=False. In the case, user’s resources will not be scheduled automatically to the agent until admin changes admin_state_up to True. |
enable_services_on_agents_with_admin_state_down = False |
(Boolean) Enable services on an agent with admin_state_up False. If this option is False, when admin_state_up of an agent is turned False, services on it will be disabled. Agents with admin_state_up False are not selected for automatic scheduling regardless of this option. But manual scheduling to such agents is available if this option is True. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
external_dns_driver = None |
(String) Driver for external DNS integration. |
global_physnet_mtu = 1500 |
(Integer) MTU of the underlying physical network. Neutron uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, neutron uses this value without modification. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol overhead from this value. Defaults to 1500, the standard value for Ethernet. |
ip_lib_force_root = False |
(Boolean) Force ip_lib calls to use the root helper |
ipam_driver = internal |
(String) Neutron IPAM (IP address management) driver to use. By default, the reference implementation of the Neutron IPAM driver is used. |
mac_generation_retries = 16 |
(Integer) DEPRECATED: How many times Neutron will retry MAC generation. This option is now obsolete and so is deprecated to be removed in the Ocata release. |
max_allowed_address_pair = 10 |
(Integer) Maximum number of allowed address pairs |
max_dns_nameservers = 5 |
(Integer) Maximum number of DNS nameservers per subnet |
max_fixed_ips_per_port = 5 |
(Integer) DEPRECATED: Maximum number of fixed ips per port. This option is deprecated and will be removed in the Ocata release. |
max_rtr_adv_interval = 100 |
(Integer) MaxRtrAdvInterval setting for radvd.conf |
max_subnet_host_routes = 20 |
(Integer) Maximum number of host routes per subnet |
min_rtr_adv_interval = 30 |
(Integer) MinRtrAdvInterval setting for radvd.conf |
periodic_fuzzy_delay = 5 |
(Integer) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) |
periodic_interval = 40 |
(Integer) Seconds between running periodic tasks. |
report_interval = 300 |
(Integer) Interval between two metering reports |
state_path = /var/lib/neutron |
(String) Where to store Neutron state files. This directory must be writable by the agent. |
vlan_transparent = False |
(Boolean) If True, then allow plugins that support it to create VLAN transparent networks. |
web_framework = legacy |
(String) This will choose the web framework in which to run the Neutron API server. ‘pecan’ is a new experimental rewrite of the API server. |
[AGENT] | |
check_child_processes_action = respawn |
(String) Action to be executed when a child process dies |
check_child_processes_interval = 60 |
(Integer) Interval between checks of child process liveness (seconds), use 0 to disable |
debug_iptables_rules = False |
(Boolean) Duplicate every iptables difference calculation to ensure the format being generated matches the format of iptables-save. This option should not be turned on for production systems because it imposes a performance penalty. |
log_agent_heartbeats = False |
(Boolean) Log agent heartbeats |
polling_interval = 2 |
(Integer) The number of seconds the agent will wait between polling for local device changes. |
root_helper = sudo |
(String) Root helper application. Use ‘sudo neutron-rootwrap /etc/neutron/rootwrap.conf’ to use the real root filter facility. Change to ‘sudo’ to skip the filtering and just run the command directly. |
root_helper_daemon = None |
(String) Root helper daemon application to use when possible. |
[profiler] | |
connection_string = messaging:// |
(String) Connection string for a notifier backend. The default value is messaging://, which sets the notifier to oslo_messaging. |
enabled = False |
(Boolean) Enables profiling for all services on this node. The default value is False, which fully disables the profiling feature. |
hmac_keys = SECRET_KEY |
(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
trace_sqlalchemy = False |
(Boolean) Enables SQL request profiling in services. The default value is False, which means SQL requests are not traced. |
[qos] | |
notification_drivers = message_queue |
(List) Drivers list to use to send the update notification |
[service_providers] | |
service_provider = [] |
(Multi-valued) Defines providers for advanced services using the format: <service_type>:<name>:<driver>[:default] |
OpenStack Networking introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some OpenStack Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow. These sections detail the configuration options for the various plug-ins.
The Modular Layer 2 (ml2) plug-in has two components: network types and mechanisms. You can configure these components separately. The ml2 plug-in also allows administrators to perform a partial specification, where some options are specified explicitly in the configuration and the remainder is chosen automatically by the Networking service.
This section describes the available configuration options.
Note
OpenFlow Agent (ofagent) Mechanism driver has been removed as of Newton.
Configuration option = Default value | Description |
---|---|
[ml2] | |
extension_drivers = |
(List) An ordered list of extension driver entrypoints to be loaded from the neutron.ml2.extension_drivers namespace. For example: extension_drivers = port_security,qos |
external_network_type = None |
(String) Default network type for external networks when no provider attributes are specified. By default it is None, which means that if provider attributes are not specified while creating external networks then they will have the same type as tenant networks. Allowed values for external_network_type config option depend on the network type values configured in type_drivers config option. |
mechanism_drivers = |
(List) An ordered list of networking mechanism driver entrypoints to be loaded from the neutron.ml2.mechanism_drivers namespace. |
overlay_ip_version = 4 |
(Integer) IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6. |
path_mtu = 0 |
(Integer) Maximum size of an IP packet (MTU) that can traverse the underlying physical network infrastructure without fragmentation when using an overlay/tunnel protocol. This option allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. |
physical_network_mtus = |
(List) A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val>. This mapping allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. |
tenant_network_types = local |
(List) Ordered list of network_types to allocate as tenant networks. The default value ‘local’ is useful for single-box testing but provides no connectivity between hosts. |
type_drivers = local, flat, vlan, gre, vxlan, geneve |
(List) List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace. |
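As an illustration, a minimal [ml2] section for a VXLAN-based deployment might combine the options above as follows. The driver names and network types shown are examples only; use the ones that match your environment:

```ini
[ml2]
# Type drivers to load; limit this to the types you actually use.
type_drivers = flat,vlan,vxlan
# Tenant networks are allocated as VXLAN; 'local' is only for single-box testing.
tenant_network_types = vxlan
# Example mechanism drivers; pick the ones for your L2 technology.
mechanism_drivers = openvswitch,l2population
# Optional extension drivers, e.g. for port security and QoS.
extension_drivers = port_security,qos
```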
Configuration option = Default value | Description |
---|---|
[ml2_type_flat] | |
flat_networks = * |
(List) List of physical_network names with which flat networks can be created. Use default ‘*’ to allow flat networks with arbitrary physical_network names. Use an empty list to disable flat networks. |
Configuration option = Default value | Description |
---|---|
[ml2_type_geneve] | |
max_header_size = 30 |
(Integer) Geneve encapsulation header size is dynamic, this value is used to calculate the maximum MTU for the driver. This is the sum of the sizes of the outer ETH + IP + UDP + GENEVE header sizes. The default size for this field is 50, which is the size of the Geneve header without any additional option headers. |
vni_ranges = |
(List) Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of Geneve VNI IDs that are available for tenant network allocation |
Configuration option = Default value | Description |
---|---|
[ml2_type_gre] | |
tunnel_id_ranges = |
(List) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation |
Configuration option = Default value | Description |
---|---|
[ml2_type_vlan] | |
network_vlan_ranges = |
(List) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks. |
Configuration option = Default value | Description |
---|---|
[ml2_type_vxlan] | |
vni_ranges = |
(List) Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation |
vxlan_group = None |
(String) Multicast group for VXLAN. When configured, will enable sending all broadcast traffic to this multicast group. When left unconfigured, will disable multicast VXLAN mode. |
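For example, the type-driver sections above can be combined to reserve a VLAN tag range on a physical network and a VNI range for VXLAN tenant networks. The physical network name, ranges, and multicast group below are placeholders:

```ini
[ml2_type_flat]
# Allow flat networks only on the 'external' physical network (placeholder name).
flat_networks = external

[ml2_type_vlan]
# VLAN tags 1000-2999 on physical network 'physnet1' are available for tenants.
network_vlan_ranges = physnet1:1000:2999

[ml2_type_vxlan]
# VXLAN VNIs 1-1000 are available for tenant network allocation.
vni_ranges = 1:1000
# Optional: enable multicast VXLAN mode with this group.
vxlan_group = 239.1.1.1
```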
Configuration option = Default value | Description |
---|---|
[l2pop] | |
agent_boot_time = 180 |
(Integer) Delay within which the agent is expected to update existing ports when it restarts |
Configuration option = Default value | Description |
---|---|
[ml2_sriov] | |
supported_pci_vendor_devs = None |
(List) DEPRECATED: Comma-separated list of supported PCI vendor devices, as defined by vendor_id:product_id according to the PCI ID Repository. The default None accepts all PCI vendor devices. This option is deprecated in the Newton release and will be removed in the Ocata release, starting from which the mechanism driver will accept all PCI vendor devices. |
Use the following options to alter agent-related settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
external_pids = $state_path/external/pids |
(String) Location to store child pid files |
[AGENT] | |
agent_type = Open vSwitch agent |
(String) DEPRECATED: Selects the Agent Type reported |
availability_zone = nova |
(String) Availability zone of this node |
Configuration option = Default value | Description |
---|---|
[agent] | |
extensions = |
(List) Extensions list to use |
Configuration option = Default value | Description |
---|---|
[AGENT] | |
prevent_arp_spoofing = True |
(Boolean) DEPRECATED: Enable suppression of ARP responses that don’t match an IP address that belongs to the port from which they originate. Note: This prevents the VMs attached to this agent from spoofing, it doesn’t protect them from other devices which have the capability to spoof (e.g. bare metal or VMs attached to agents without this flag set to True). Spoofing rules will not be added to any ports that have port security disabled. For LinuxBridge, this requires ebtables. For OVS, it requires a version that supports matching ARP headers. This option will be removed in Ocata so the only way to disable protection will be via the port security extension. |
quitting_rpc_timeout = 10 |
(Integer) Set a new timeout in seconds for new RPC calls after the agent receives SIGTERM. If the value is set to 0, the RPC timeout is not changed. |
[LINUX_BRIDGE] | |
bridge_mappings = |
(List) List of <physical_network>:<physical_bridge> |
physical_interface_mappings = |
(List) Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical network names to the agent’s node-specific physical network interfaces to be used for flat and VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. |
[VXLAN] | |
arp_responder = False |
(Boolean) Enable local ARP responder which provides local responses instead of performing ARP broadcast into the overlay. Enabling local ARP responder is not fully compatible with the allowed-address-pairs extension. |
enable_vxlan = True |
(Boolean) Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin using linuxbridge mechanism driver |
l2_population = False |
(Boolean) Extension to use alongside ml2 plugin’s l2population mechanism driver. It enables the plugin to populate VXLAN forwarding table. |
local_ip = None |
(IP) IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the ‘overlay_ip_version’ option in the ML2 plug-in configuration file on the neutron server node(s). |
tos = None |
(Integer) TOS for vxlan interface protocol packets. |
ttl = None |
(Integer) TTL for vxlan interface protocol packets. |
vxlan_group = 224.0.0.1 |
(String) Multicast group(s) for vxlan interface. A range of group addresses may be specified by using CIDR notation. Specifying a range allows different VNIs to use different group addresses, reducing or eliminating spurious broadcast traffic to the tunnel endpoints. To reserve a unique group for each possible (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on all the agents. |
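Putting the Linux bridge agent options together, a sketch of the relevant sections of the agent configuration might look like this. The interface name, physical network name, and IP address are placeholders for node-specific values:

```ini
[LINUX_BRIDGE]
# Map the 'physnet1' provider network to this node's eth1 interface.
physical_interface_mappings = physnet1:eth1

[VXLAN]
enable_vxlan = True
# Overlay endpoint address on this host; its IP version must match
# the overlay_ip_version option in the ML2 configuration.
local_ip = 10.0.0.11
# Use alongside the ml2 l2population mechanism driver.
l2_population = True
```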
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
ovs_integration_bridge = br-int |
(String) Name of Open vSwitch bridge to use |
ovs_use_veth = False |
(Boolean) Uses veth for an OVS interface or not. Supports kernels with limited namespace support (for example, RHEL 6.5) as long as ovs_use_veth is set to True. |
ovs_vsctl_timeout = 10 |
(Integer) Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs commands will fail with ALARMCLOCK error. |
[AGENT] | |
arp_responder = False |
(Boolean) Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2 l2population driver. Allows the switch (when supporting an overlay) to respond to an ARP request locally without performing a costly ARP broadcast into the overlay. |
dont_fragment = True |
(Boolean) Set or un-set the don’t fragment (DF) bit on outgoing IP packets carrying GRE/VXLAN tunnels. |
drop_flows_on_start = False |
(Boolean) Reset flow table on start. Setting this to True will cause brief traffic interruption. |
enable_distributed_routing = False |
(Boolean) Make the l2 agent run in DVR mode. |
l2_population = False |
(Boolean) Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve tunnel scalability. |
minimize_polling = True |
(Boolean) Minimize polling by monitoring ovsdb for interface changes. |
ovsdb_monitor_respawn_interval = 30 |
(Integer) The number of seconds to wait before respawning the ovsdb monitor after losing communication with it. |
prevent_arp_spoofing = True |
(Boolean) DEPRECATED: Enable suppression of ARP responses that don’t match an IP address that belongs to the port from which they originate. Note: This prevents the VMs attached to this agent from spoofing, it doesn’t protect them from other devices which have the capability to spoof (e.g. bare metal or VMs attached to agents without this flag set to True). Spoofing rules will not be added to any ports that have port security disabled. For LinuxBridge, this requires ebtables. For OVS, it requires a version that supports matching ARP headers. This option will be removed in Ocata so the only way to disable protection will be via the port security extension. |
quitting_rpc_timeout = 10 |
(Integer) Set a new timeout in seconds for new RPC calls after the agent receives SIGTERM. If the value is set to 0, the RPC timeout is not changed. |
tunnel_csum = False |
(Boolean) Set or un-set the tunnel header checksum on outgoing IP packets carrying GRE/VXLAN tunnels. |
tunnel_types = |
(List) Network types supported by the agent (gre and/or vxlan). |
veth_mtu = 9000 |
(Integer) MTU size of veth interfaces |
vxlan_udp_port = 4789 |
(Port number) The UDP port to use for VXLAN tunnels. |
[OVS] | |
bridge_mappings = |
(List) Comma-separated list of <physical_network>:<bridge> tuples mapping physical network names to the agent’s node-specific Open vSwitch bridge names to be used for flat and VLAN networks. The length of bridge names should be no more than 11. Each bridge must exist, and should have a physical network interface configured as a port. All physical networks configured on the server should have mappings to appropriate bridges on each agent. Note: If you remove a bridge from this mapping, make sure to disconnect it from the integration bridge as it won’t be managed by the agent anymore. |
datapath_type = system |
(String) OVS datapath to use. ‘system’ is the default value and corresponds to the kernel datapath. To enable the userspace datapath set this value to ‘netdev’. |
int_peer_patch_port = patch-tun |
(String) Peer patch port in integration bridge for tunnel bridge. |
integration_bridge = br-int |
(String) Integration bridge to use. Do not change this parameter unless you have a good reason to. This is the name of the OVS integration bridge. There is one per hypervisor. The integration bridge acts as a virtual ‘patch bay’. All VM VIFs are attached to this bridge and then ‘patched’ according to their network connectivity. |
local_ip = None |
(IP) IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the ‘overlay_ip_version’ option in the ML2 plug-in configuration file on the neutron server node(s). |
of_connect_timeout = 30 |
(Integer) Timeout in seconds to wait for the local switch connecting the controller. Used only for ‘native’ driver. |
of_interface = native |
(String) OpenFlow interface to use. |
of_listen_address = 127.0.0.1 |
(IP) Address to listen on for OpenFlow connections. Used only for ‘native’ driver. |
of_listen_port = 6633 |
(Port number) Port to listen on for OpenFlow connections. Used only for ‘native’ driver. |
of_request_timeout = 10 |
(Integer) Timeout in seconds to wait for a single OpenFlow request. Used only for ‘native’ driver. |
ovsdb_connection = tcp:127.0.0.1:6640 |
(String) The connection string for the native OVSDB backend. Requires the native ovsdb_interface to be enabled. |
ovsdb_interface = native |
(String) The interface for interacting with the OVSDB |
tun_peer_patch_port = patch-int |
(String) Peer patch port in tunnel bridge for integration bridge. |
tunnel_bridge = br-tun |
(String) Tunnel bridge to use. |
use_veth_interconnection = False |
(Boolean) Use veths instead of patch ports to interconnect the integration bridge to physical networks. Supports kernels without Open vSwitch patch port support as long as it is set to True. |
vhostuser_socket_dir = /var/run/openvswitch |
(String) OVS vhost-user socket directory. |
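As a sketch, an Open vSwitch agent configured for VXLAN tunnels with L2 population could combine the options above as follows. The bridge name, physical network name, and IP address are placeholders:

```ini
[OVS]
# Map the 'physnet1' provider network to this node's br-ex bridge.
bridge_mappings = physnet1:br-ex
# Overlay endpoint address; IP version must match overlay_ip_version.
local_ip = 10.0.0.11

[AGENT]
# Overlay types supported by this agent.
tunnel_types = vxlan
# Both options require the ml2 l2population mechanism driver.
l2_population = True
arp_responder = True
```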
Configuration option = Default value | Description |
---|---|
[SRIOV_NIC] | |
exclude_devices = |
(List) Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping network_device to the agent’s node-specific list of virtual functions that should not be used for virtual networking. vfs_to_exclude is a semicolon-separated list of virtual functions to exclude from network_device. The network_device in the mapping should appear in the physical_device_mappings list. |
physical_device_mappings = |
(List) Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent’s node-specific physical network device interfaces of SR-IOV physical function to be used for VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. |
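For example, an SR-IOV agent mapping one physical function while excluding two of its virtual functions might be configured like this. The physical network name, device name, and PCI addresses are placeholders:

```ini
[SRIOV_NIC]
# Map the 'physnet2' provider network to the SR-IOV physical function ens785f0.
physical_device_mappings = physnet2:ens785f0
# Exclude two VFs of ens785f0 (identified by PCI address) from networking use.
exclude_devices = ens785f0:0000:06:00.2;0000:06:00.3
```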
Configuration option = Default value | Description |
---|---|
[AGENT] | |
quitting_rpc_timeout = 10 |
(Integer) Set a new timeout in seconds for new RPC calls after the agent receives SIGTERM. If the value is set to 0, the RPC timeout is not changed. |
[macvtap] | |
physical_interface_mappings = |
(List) Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical network names to the agent’s node-specific physical network interfaces to be used for flat and VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
pd_confs = $state_path/pd |
(String) Location to store IPv6 PD files. |
pd_dhcp_driver = dibbler |
(String) Service to handle DHCPv6 Prefix delegation. |
vendor_pen = 8888 |
(String) A decimal value as Vendor’s Registered Private Enterprise Number as required by RFC3315 DUID-EN. |
Use the following options to alter API-related settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allow_bulk = True |
(Boolean) Allow the usage of the bulk API |
allow_pagination = True |
(Boolean) DEPRECATED: Allow the usage of pagination. This option has been deprecated and pagination will now be enabled unconditionally. |
allow_sorting = True |
(Boolean) DEPRECATED: Allow the usage of sorting. This option has been deprecated and sorting will now be enabled unconditionally. |
api_extensions_path = |
(String) The path for API extensions. Note that this can be a colon-separated list of paths. For example: api_extensions_path = extensions:/path/to/more/exts:/even/more/exts. The __path__ of neutron.extensions is appended to this, so if your extensions are in there you don’t need to specify them here. |
api_paste_config = api-paste.ini |
(String) File name for the paste.deploy config for api service |
backlog = 4096 |
(Integer) Number of backlog requests to configure the socket with |
client_socket_timeout = 900 |
(Integer) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ means wait forever. |
max_header_line = 16384 |
(Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). |
pagination_max_limit = -1 |
(String) The maximum number of items returned in a single response. A value of ‘infinite’ or a negative integer means no limit. |
retry_until_window = 30 |
(Integer) Number of seconds to keep retrying to listen |
service_plugins = |
(List) The service plugins Neutron will use |
tcp_keepidle = 600 |
(Integer) Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. |
use_ssl = False |
(Boolean) Enable SSL on the API server |
wsgi_default_pool_size = 100 |
(Integer) Size of the pool of greenthreads used by wsgi |
wsgi_keep_alive = True |
(Boolean) If False, closes the client socket connection explicitly. |
wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f |
(String) A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[oslo_versionedobjects] | |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal |
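For instance, to cap paginated responses and parse proxy headers when the API server sits behind an SSL-terminating proxy, a neutron.conf fragment might contain the following. The values are illustrative:

```ini
[DEFAULT]
# Return at most 1000 items per API response instead of no limit.
pagination_max_limit = 1000
# Close idle client connections after 900 seconds.
client_socket_timeout = 900

[oslo_middleware]
# Set to True only when the API server runs behind a proxy.
enable_proxy_headers_parsing = True
```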
Use the following options to alter Compute-related settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
notify_nova_on_port_data_changes = True |
(Boolean) Send notification to nova when port data (fixed_ips/floatingip) changes so nova can update its cache. |
notify_nova_on_port_status_changes = True |
(Boolean) Send notification to nova when port status changes |
nova_client_cert = |
(String) Client certificate for nova metadata api server. |
nova_client_priv_key = |
(String) Private key of client certificate. |
send_events_interval = 2 |
(Integer) Number of seconds between sending events to nova if there are any events to send. |
Use the following options to alter Database-related settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
advertise_mtu = True |
(Boolean) DEPRECATED: If True, advertise network MTU values if core plugin calculates them. MTU is advertised to running instances via DHCP and RA MTU options. |
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq |
(String) The driver used to manage the DHCP server. |
dnsmasq_base_log_dir = None |
(String) Base log dir for dnsmasq logging. The log contains DHCP and DNS log information and is useful for debugging issues with either DHCP or DNS. If this option is null, dnsmasq logging is disabled. |
dnsmasq_config_file = |
(String) Override the default dnsmasq settings with this file. |
dnsmasq_dns_servers = |
(List) Comma-separated list of the DNS servers which will be used as forwarders. |
dnsmasq_lease_max = 16777216 |
(Integer) Limit number of leases to prevent a denial-of-service. |
dnsmasq_local_resolv = False |
(Boolean) Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. Effectively removes the ‘–no-resolv’ option from the dnsmasq process arguments. Adding custom DNS resolvers to the ‘dnsmasq_dns_servers’ option disables this feature. |
enable_isolated_metadata = False |
(Boolean) The DHCP server can assist with providing metadata support on isolated networks. Setting this value to True will cause the DHCP server to append specific host routes to the DHCP request. The metadata service will only be activated when the subnet does not contain any router port. The guest instance must be configured to request host routes via DHCP (Option 121). This option doesn’t have any effect when force_metadata is set to True. |
enable_metadata_network = False |
(Boolean) Allows for serving metadata requests coming from a dedicated metadata access network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected to a Neutron router from which the VMs send metadata requests. In this case DHCP Option 121 will not be injected in VMs, as they will be able to reach 169.254.169.254 through a router. This option requires enable_isolated_metadata = True. |
force_metadata = False |
(Boolean) In some cases the Neutron router is not present to provide the metadata IP but the DHCP server can be used to provide this info. Setting this value will force the DHCP server to append specific host routes to the DHCP request. If this option is set, then the metadata service will be activated for all the networks. |
host = example.domain |
(String) Hostname to be used by the Neutron server, agents and services running on this machine. All the agents and services running on this machine must use the same host value. |
interface_driver = None |
(String) The driver used to manage the virtual interface. |
num_sync_threads = 4 |
(Integer) Number of threads to use during sync process. Should not exceed connection pool size configured on server. |
resync_interval = 5 |
(Integer) The DHCP agent will resync its state with Neutron to recover from any transient notification or RPC errors. The interval is number of seconds between attempts. |
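A minimal dhcp_agent.ini sketch combining the options above might look like the following. The driver paths come from the tables above; the DNS forwarder addresses are placeholders for your deployment:

```ini
[DEFAULT]
# Interface driver for an Open vSwitch deployment.
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# dnsmasq is the default DHCP driver.
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Serve metadata on networks without a router port.
enable_isolated_metadata = True
# Forward instance DNS queries to these resolvers (placeholder addresses).
dnsmasq_dns_servers = 203.0.113.8,203.0.113.9
```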
Use the following options to alter DVR-related settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
dvr_base_mac = fa:16:3f:00:00:00 |
(String) The base mac address used for unique DVR instances by Neutron. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. The ‘dvr_base_mac’ must be different from ‘base_mac’ to avoid mixing them up with MACs allocated for tenant ports. A 4-octet example would be dvr_base_mac = fa:16:3f:4f:00:00. The default is a 3-octet prefix. |
router_distributed = False |
(Boolean) System-wide flag to determine the type of router that tenants can create. Only admin can override. |
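For example, to make tenants create distributed routers by default while keeping DVR MACs distinct from tenant port MACs, the server's neutron.conf might contain (the 4-octet value reuses the example from the option description above):

```ini
[DEFAULT]
# Tenant-created routers are distributed by default; admins can override.
router_distributed = True
# Must differ from base_mac; 4-octet example from the description above.
dvr_base_mac = fa:16:3f:4f:00:00
```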
Use the following options to alter IPv6 RA settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
ra_confs = $state_path/ra |
(String) Location to store IPv6 RA config files |
Use the following options in the l3_agent.ini file for the L3 agent.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
enable_snat_by_default = True |
(Boolean) Define the default value of enable_snat if not provided in external_gateway_info. |
external_network_bridge = |
(String) DEPRECATED: Name of bridge used for external network traffic. When this parameter is set, the L3 agent will plug an interface directly into an external bridge which will not allow any wiring by the L2 agent. Using this will result in incorrect port statuses. This option is deprecated and will be removed in Ocata. |
ha_confs_path = $state_path/ha_confs |
(String) Location to store keepalived/conntrackd config files |
ha_vrrp_advert_int = 2 |
(Integer) The advertisement interval in seconds |
ha_vrrp_auth_password = None |
(String) VRRP authentication password |
ha_vrrp_auth_type = PASS |
(String) VRRP authentication type |
host = example.domain |
(String) Hostname to be used by the Neutron server, agents and services running on this machine. All the agents and services running on this machine must use the same host value. |
interface_driver = None |
(String) The driver used to manage the virtual interface. |
ipv6_pd_enabled = False |
(Boolean) Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable environment. Users making subnet creation requests for IPv6 subnets without providing a CIDR or subnetpool ID will be given a CIDR via the Prefix Delegation mechanism. Note that enabling PD will override the behavior of the default IPv6 subnetpool. |
l3_ha = False |
(Boolean) Enable HA mode for virtual routers. |
l3_ha_net_cidr = 169.254.192.0/18 |
(String) Subnet used for the l3 HA admin network. |
l3_ha_network_physical_name = |
(String) The physical network name with which the HA network can be created. |
l3_ha_network_type = |
(String) The network type to use when creating the HA network for an HA router. By default or if empty, the first ‘tenant_network_types’ is used. This is helpful when the VRRP traffic should use a specific network which is not the default one. |
max_l3_agents_per_router = 3 |
(Integer) Maximum number of L3 agents which a HA router will be scheduled on. If it is set to 0 then the router will be scheduled on every agent. |
min_l3_agents_per_router = 2 |
(Integer) DEPRECATED: Minimum number of L3 agents that have to be available in order to allow a new HA router to be scheduled. This option is deprecated in the Newton release and will be removed for the Ocata release where the scheduling of new HA routers will always be allowed. |
[AGENT] | |
comment_iptables_rules = True |
(Boolean) Add comments to iptables rules. Set to false to disallow the addition of comments to generated iptables rules that describe each rule’s purpose. System must support the iptables comments module for addition of comments. |
use_helper_for_ns_read = True |
(Boolean) Use the root helper when listing the namespaces on a system. This may not be required depending on the security configuration. If the root helper is not required, set this to False for a performance improvement. |
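Combining the HA options above, a sketch of an L3 high-availability setup in the server configuration might look like this. The values shown are the defaults from the tables, repeated here only to illustrate the section layout:

```ini
[DEFAULT]
# Create new virtual routers in HA (VRRP) mode.
l3_ha = True
# Schedule each HA router on at most 3 L3 agents.
max_l3_agents_per_router = 3
# Subnet used for the HA admin network.
l3_ha_net_cidr = 169.254.192.0/18
```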
Use the following options in the metadata_agent.ini file for the Metadata agent.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
metadata_backlog = 4096 |
(Integer) Number of backlog requests to configure the metadata server socket with |
metadata_proxy_group = |
(String) Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). |
metadata_proxy_shared_secret = |
(String) When proxying metadata requests, Neutron signs the Instance-ID header with a shared secret to prevent spoofing. You may select any string for a secret, but it must match here and in the configuration used by the Nova Metadata Server. NOTE: Nova uses the same config key, but in [neutron] section. |
metadata_proxy_socket = $state_path/metadata_proxy |
(String) Location of Metadata Proxy UNIX domain socket |
metadata_proxy_socket_mode = deduce |
(String) Metadata Proxy UNIX domain socket mode, 4 values allowed: ‘deduce’: deduce mode from metadata_proxy_user/group values, ‘user’: set metadata proxy socket mode to 0o644, to use when metadata_proxy_user is agent effective user or root, ‘group’: set metadata proxy socket mode to 0o664, to use when metadata_proxy_group is agent effective group or root, ‘all’: set metadata proxy socket mode to 0o666, to use otherwise. |
metadata_proxy_user = |
(String) User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). |
metadata_proxy_watch_log = None |
(Boolean) Enable/Disable log watch by metadata proxy. It should be disabled when metadata_proxy_user/group is not allowed to read/write its log file and copytruncate logrotate option must be used if logrotate is enabled on metadata proxy log files. Option default value is deduced from metadata_proxy_user: watch log is enabled if metadata_proxy_user is agent effective user id/name. |
metadata_workers = 0 |
(Integer) Number of separate worker processes for metadata server (defaults to half of the number of CPUs) |
nova_metadata_insecure = False |
(Boolean) Allow insecure SSL (HTTPS) requests to the nova metadata service |
nova_metadata_ip = 127.0.0.1 |
(String) IP address used by Nova metadata server. |
nova_metadata_port = 8775 |
(Port number) TCP Port used by Nova metadata server. |
nova_metadata_protocol = http |
(String) Protocol to access nova metadata, http or https |
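A minimal metadata_agent.ini sketch using the options above might look like the following. The address and secret are placeholders; the shared secret must match the value configured for the Nova metadata service:

```ini
[DEFAULT]
# Address of the Nova metadata server (placeholder).
nova_metadata_ip = 192.0.2.10
nova_metadata_port = 8775
# Must match the secret configured in nova's [neutron] section.
metadata_proxy_shared_secret = METADATA_SECRET
# 0 defaults to half the number of CPUs.
metadata_workers = 0
```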
Note
Previously, the neutron metadata agent connected to the neutron server through the REST API using a neutron client. This was inefficient because keystone was fully involved in the authentication process and could become overloaded.
Since the Kilo release, the neutron metadata agent uses RPC by default to connect to the server, which is the typical way for the neutron server and its agents to interact. If the neutron server does not support metadata RPC, the neutron client is used instead.
Warning
Do not run the neutron-ns-metadata-proxy proxy namespace as root on a node with the L3 agent running. In OpenStack Kilo and newer, you can change the permissions of neutron-ns-metadata-proxy after the proxy installation by using the metadata_proxy_user and metadata_proxy_group options.
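Putting the metadata proxy options above together, a minimal metadata_agent.ini sketch could look like the following. The neutron user and group names are illustrative assumptions; match them to your installation:

```ini
[DEFAULT]
# Run the metadata proxy as an unprivileged user/group instead of root
# (the names below are assumptions for a typical packaged install).
metadata_proxy_user = neutron
metadata_proxy_group = neutron

# Nova metadata server location (the documented defaults from the table above).
nova_metadata_ip = 127.0.0.1
nova_metadata_port = 8775
nova_metadata_protocol = http
```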
Use the following options in the metering_agent.ini
file for the
Metering agent.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver |
(String) Metering driver |
measure_interval = 30 |
(Integer) Interval between two metering measures |
[AGENT] | |
report_interval = 30 |
(Floating point) Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. |
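Combining the options above, a minimal metering_agent.ini sketch. The iptables driver path shown is the standard in-tree alternative to the noop default, but verify it against your installed release:

```ini
[DEFAULT]
# Replace the noop default with the in-tree iptables metering driver.
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
measure_interval = 30

[AGENT]
# Keep this at half of agent_down_time or less.
report_interval = 30
```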
Use the following options in the
neutron.conf
file to change nova-related settings.
Configuration option = Default value | Description |
---|---|
[nova] | |
auth_section = None |
(Unknown) Config Section from which to load plugin specific options |
auth_type = None |
(Unknown) Authentication type to load |
cafile = None |
(String) PEM encoded Certificate Authority to use when verifying HTTPs connections. |
certfile = None |
(String) PEM encoded client certificate cert file |
endpoint_type = public |
(String) Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. |
insecure = False |
(Boolean) Verify HTTPS connections. |
keyfile = None |
(String) PEM encoded client certificate key file |
region_name = None |
(String) Name of nova region to use. Useful if keystone manages more than one region. |
timeout = None |
(Integer) Timeout value for http requests |
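A [nova] section in neutron.conf using these options might look like the sketch below. The password auth type, auth_url, and credential options come from the keystoneauth plugin loaded via auth_type, not from the table above, and the values shown are placeholders:

```ini
[nova]
# Keystone authentication used when notifying nova of port changes.
# The plugin options below (auth_url, username, ...) are assumptions
# based on the standard keystoneauth password plugin.
auth_type = password
auth_url = http://controller:35357
username = nova
password = NOVA_PASS
project_name = service
user_domain_name = default
project_domain_name = default
region_name = RegionOne
endpoint_type = public
```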
Use the following options in the neutron.conf
file to change
policy settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allow_overlapping_ips = False |
(Boolean) Allow overlapping IP support in Neutron. Attention: this parameter MUST be set to False if Neutron is being used in conjunction with Nova security groups. |
Use the following options in the neutron.conf
file for the quota
system.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
max_routes = 30 |
(Integer) Maximum number of routes per router |
[QUOTAS] | |
default_quota = -1 |
(Integer) Default number of resource allowed per tenant. A negative value means unlimited. |
quota_driver = neutron.db.quota.driver.DbQuotaDriver |
(String) Default driver to use for quota checks. |
quota_firewall = 10 |
(Integer) Number of firewalls allowed per tenant. A negative value means unlimited. |
quota_firewall_policy = 10 |
(Integer) Number of firewall policies allowed per tenant. A negative value means unlimited. |
quota_firewall_rule = 100 |
(Integer) Number of firewall rules allowed per tenant. A negative value means unlimited. |
quota_floatingip = 50 |
(Integer) Number of floating IPs allowed per tenant. A negative value means unlimited. |
quota_healthmonitor = -1 |
(Integer) Number of health monitors allowed per tenant. A negative value means unlimited. |
quota_listener = -1 |
(Integer) Number of Loadbalancer Listeners allowed per tenant. A negative value means unlimited. |
quota_loadbalancer = 10 |
(Integer) Number of LoadBalancers allowed per tenant. A negative value means unlimited. |
quota_member = -1 |
(Integer) Number of pool members allowed per tenant. A negative value means unlimited. |
quota_network = 10 |
(Integer) Number of networks allowed per tenant. A negative value means unlimited. |
quota_pool = 10 |
(Integer) Number of pools allowed per tenant. A negative value means unlimited. |
quota_port = 50 |
(Integer) Number of ports allowed per tenant. A negative value means unlimited. |
quota_rbac_policy = 10 |
(Integer) Default number of RBAC entries allowed per tenant. A negative value means unlimited. |
quota_router = 10 |
(Integer) Number of routers allowed per tenant. A negative value means unlimited. |
quota_security_group = 10 |
(Integer) Number of security groups allowed per tenant. A negative value means unlimited. |
quota_security_group_rule = 100 |
(Integer) Number of security rules allowed per tenant. A negative value means unlimited. |
quota_subnet = 10 |
(Integer) Number of subnets allowed per tenant. A negative value means unlimited. |
track_quota_usage = True |
(Boolean) Keep track of current resource quota usage in the database. Plugins that do not leverage the neutron database should set this flag to False. |
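For example, to raise the per-tenant network and port limits while leaving routers unlimited, a [QUOTAS] sketch in neutron.conf (the values shown are illustrative, not recommendations):

```ini
[QUOTAS]
quota_network = 20
quota_port = 100
# A negative value means unlimited.
quota_router = -1
quota_driver = neutron.db.quota.driver.DbQuotaDriver
```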
Use the following options in the neutron.conf
file to change
scheduler settings.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
network_auto_schedule = True |
(Boolean) Allow auto scheduling networks to DHCP agent. |
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler |
(String) Driver to use for scheduling network to DHCP agent |
router_auto_schedule = True |
(Boolean) Allow auto scheduling of routers to L3 agent. |
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler |
(String) Driver to use for scheduling router to a default L3 agent |
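A neutron.conf scheduler sketch using the defaults documented in the table above:

```ini
[DEFAULT]
# Automatically schedule networks to DHCP agents and routers to L3 agents.
network_auto_schedule = True
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
router_auto_schedule = True
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
```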
Use the following options in the configuration file for your driver to change security group settings.
Configuration option = Default value | Description |
---|---|
[SECURITYGROUP] | |
enable_ipset = True |
(Boolean) Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. |
enable_security_group = True |
(Boolean) Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. |
firewall_driver = None |
(String) Driver for security groups firewall in the L2 agent |
Note
Networking uses iptables to implement security group functions.
When the enable_ipset option is enabled on the L2 agent, it uses IPset to improve security group performance, because an IPset is a hash set whose lookup cost is insensitive to the number of elements.
When a port is created, the L2 agent adds an additional IPset chain to the port's iptables chain. If the security group that this port belongs to has rules referencing other security groups, the members of those security groups are added to the IPset chain.
Previously, any change to the members of a security group triggered an expensive reload of the iptables rules. With IPset enabled on the L2 agent, iptables does not need to be reloaded when only the members of a security group change; the agent simply updates the IPset.
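The ipset behavior is configured in the L2 agent's [SECURITYGROUP] section. A minimal sketch for an Open vSwitch agent follows; the firewall driver path is an assumption based on the in-tree OVS hybrid iptables driver, so verify it against your release:

```ini
[SECURITYGROUP]
enable_security_group = True
# Requires the ipset package to be installed on the L2 agent node.
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```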
Note
A single default security group has been introduced to avoid race conditions when creating a tenant's default security group. The race conditions are caused by the uniqueness check of a new security group name. A table named default_security_group implements such a group. It has a tenant_id field as the primary key and a security_group_id field, which is the identifier of the default security group. The migration that introduces this table includes a sanity check that verifies that a default security group is not duplicated in any tenant.
Configuration option = Default value | Description |
---|---|
[FDB] | |
shared_physical_device_mappings = |
(List) Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent’s node-specific shared physical network device between SR-IOV and OVS or SR-IOV and linux bridge |
Configuration option = Default value | Description |
---|---|
[QOS] | |
kernel_hz = 250 |
(Integer) Value of host kernel tick rate (hz) for calculating minimum burst value in bandwidth limit rules for a port with QoS. See kernel configuration file for HZ value and tc-tbf manual for more information. |
tbf_latency = 50 |
(Integer) Value of latency (ms) for calculating size of queue for a port with QoS. See tc-tbf manual for more information. |
Use the following options in the fwaas_driver.ini
file for the FWaaS driver.
Configuration option = Default value | Description |
---|---|
[fwaas] | |
agent_version = v1 |
(String) Firewall agent class |
driver = |
(String) Name of the FWaaS Driver |
enabled = False |
(Boolean) Enable FWaaS |
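Enabling FWaaS v1 in fwaas_driver.ini might look like the sketch below. The driver path is an assumption based on the in-tree neutron-fwaas iptables driver; confirm the exact path for your release:

```ini
[fwaas]
enabled = True
agent_version = v1
# In-tree iptables FWaaS driver (path may differ between releases).
driver = neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
```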
Use the following options in the neutron_lbaas.conf
file for the
LBaaS agent.
Note
The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.
Configuration option = Default value | Description |
---|---|
[certificates] | |
barbican_auth = barbican_acl_auth |
(String) Name of the Barbican authentication method to use |
cert_manager_type = barbican |
(String) Certificate Manager plugin. Defaults to barbican. |
storage_path = /var/lib/neutron-lbaas/certificates/ |
(String) Absolute path to the certificate storage directory. Defaults to env[OS_LBAAS_TLS_STORAGE]. |
Use the following options in the lbaas_agent.ini
file for the
LBaaS agent.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
debug = False |
(Boolean) If set to true, the logging level will be set to DEBUG instead of the default INFO level. Mutable: this option can be changed without restarting. |
device_driver = ['neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver'] |
(Multi-valued) Drivers used to manage loadbalancing devices |
interface_driver = None |
(String) The driver used to manage the virtual interface. |
periodic_interval = 40 |
(Integer) Seconds between running periodic tasks. |
[haproxy] | |
loadbalancer_state_path = $state_path/lbaas |
(String) Location to store config and state files |
send_gratuitous_arp = 3 |
(Integer) When deleting and re-adding the same VIP, send this many gratuitous ARPs to flush the ARP cache in the router. Set it to 0 or less to disable this feature. |
user_group = nogroup |
(String) The user group |
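A minimal lbaas_agent.ini sketch for the HAProxy namespace driver. The OVS interface driver shown is a common choice but an assumption here; it must match your L2 agent:

```ini
[DEFAULT]
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
# Interface driver must match the deployed L2 agent (OVS shown as an example).
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

[haproxy]
# Group that haproxy runs as; distribution-dependent.
user_group = haproxy
send_gratuitous_arp = 3
```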
Use the following options in the services_lbaas.conf
file for the
LBaaS agent.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
loadbalancer_scheduler_driver = neutron_lbaas.agent_scheduler.ChanceScheduler |
(String) Driver to use for scheduling to a default loadbalancer agent |
[haproxy] | |
jinja_config_template = /usr/lib/python/site-packages/neutron-lbaas/neutron_lbaas/drivers/haproxy/templates/haproxy.loadbalancer.j2 |
(String) Jinja template file for haproxy configuration |
[octavia] | |
allocates_vip = False |
(Boolean) True if Octavia will be responsible for allocating the VIP. False if neutron-lbaas will allocate it and pass to Octavia. |
base_url = http://127.0.0.1:9876 |
(String) URL of Octavia controller root |
request_poll_interval = 3 |
(Integer) Interval in seconds to poll octavia when an entity is created, updated, or deleted. |
request_poll_timeout = 100 |
(Integer) Time to stop polling octavia when a status of an entity does not change. |
[radwarev2] | |
child_workflow_template_names = manage_l3 |
(List) Name of child workflow templates used. Default: manage_l3 |
ha_secondary_address = None |
(String) IP address of secondary vDirect server. |
service_adc_type = VA |
(String) Service ADC type. Default: VA. |
service_adc_version = |
(String) Service ADC version. |
service_cache = 20 |
(Integer) Size of service cache. Default: 20. |
service_compression_throughput = 100 |
(Integer) Service compression throughput. Default: 100. |
service_ha_pair = False |
(Boolean) Enables or disables the Service HA pair. Default: False. |
service_isl_vlan = -1 |
(Integer) A required VLAN for the interswitch link to use. |
service_resource_pool_ids = |
(List) Resource pool IDs. |
service_session_mirroring_enabled = False |
(Boolean) Enable or disable Alteon interswitch link for stateful session failover. Default: False. |
service_ssl_throughput = 100 |
(Integer) Service SSL throughput. Default: 100. |
service_throughput = 1000 |
(Integer) Service throughput. Default: 1000. |
stats_action_name = stats |
(String) Name of the workflow action for statistics. Default: stats. |
vdirect_address = None |
(String) IP address of vDirect server. |
vdirect_password = radware |
(String) vDirect user password. |
vdirect_user = vDirect |
(String) vDirect user name. |
workflow_action_name = apply |
(String) Name of the workflow action. Default: apply. |
workflow_params = {'data_ip_address': '192.168.200.99', 'ha_network_name': 'HA-Network', 'ha_port': 2, 'allocate_ha_ips': True, 'ha_ip_pool_name': 'default', 'allocate_ha_vrrp': True, 'data_port': 1, 'gateway': '192.168.200.1', 'twoleg_enabled': '_REPLACE_', 'data_ip_mask': '255.255.255.0'} |
(Dict) Parameter for l2_l3 workflow constructor. |
workflow_template_name = os_lb_v2 |
(String) Name of the workflow template. Default: os_lb_v2. |
[radwarev2_debug] | |
configure_l3 = True |
(Boolean) Configure ADC with L3 parameters? |
configure_l4 = True |
(Boolean) Configure ADC with L4 parameters? |
provision_service = True |
(Boolean) Provision ADC service? |
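To point neutron-lbaas at an Octavia controller, a services_lbaas.conf [octavia] sketch using the documented defaults (adjust base_url for your controller's actual address):

```ini
[octavia]
# Root URL of the Octavia controller (default shown; replace in production).
base_url = http://127.0.0.1:9876
# False: neutron-lbaas allocates the VIP and passes it to Octavia.
allocates_vip = False
request_poll_interval = 3
request_poll_timeout = 100
```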
Octavia is an operator-grade open source load balancing solution.
Use the following options in the /etc/octavia/octavia.conf
file
to configure the octavia service.
Configuration option = Default value | Description |
---|---|
[keystone_authtoken_v3] | |
admin_project_domain = default |
(String) Admin project keystone authentication domain |
admin_user_domain = default |
(String) Admin user keystone authentication domain |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
allow_bulk = True |
(Boolean) Allow the usage of the bulk API |
allow_pagination = False |
(Boolean) Allow the usage of the pagination |
allow_sorting = False |
(Boolean) Allow the usage of the sorting |
api_extensions_path = |
(String) The path for API extensions |
api_handler = queue_producer |
(String) The handler that the API communicates with |
api_paste_config = api-paste.ini |
(String) The API paste config file to use |
auth_strategy = keystone |
(String) The type of authentication to use |
bind_host = 127.0.0.1 |
(IP) The host IP to bind to |
bind_port = 9876 |
(Port number) The port to bind to |
control_exchange = octavia |
(String) The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
host = localhost |
(String) The hostname Octavia is running on |
octavia_plugins = hot_plug_plugin |
(String) Name of the controller plugin to use |
pagination_max_limit = -1 |
(String) The maximum number of items returned in a single response. The string ‘infinite’ or a negative integer value means ‘no limit’ |
[amphora_agent] | |
agent_server_ca = /etc/octavia/certs/client_ca.pem |
(String) The ca which signed the client certificates |
agent_server_cert = /etc/octavia/certs/server.pem |
(String) The server certificate for the agent.py server to use |
agent_server_network_dir = /etc/netns/amphora-haproxy/network/interfaces.d/ |
(String) The directory where new network interfaces are located |
agent_server_network_file = None |
(String) The file where the network interfaces are located. Specifying this will override any value set for agent_server_network_dir. |
amphora_id = None |
(String) The amphora ID. |
[anchor] | |
password = None |
(String) Anchor password |
url = http://localhost:9999/v1/sign/default |
(String) Anchor URL |
username = None |
(String) Anchor username |
[certificates] | |
barbican_auth = barbican_acl_auth |
(String) Name of the Barbican authentication method to use |
ca_certificate = /etc/ssl/certs/ssl-cert-snakeoil.pem |
(String) Absolute path to the CA Certificate for signing. Defaults to env[OS_OCTAVIA_TLS_CA_CERT]. |
ca_private_key = /etc/ssl/private/ssl-cert-snakeoil.key |
(String) Absolute path to the Private Key for signing. Defaults to env[OS_OCTAVIA_TLS_CA_KEY]. |
ca_private_key_passphrase = None |
(String) Passphrase for the Private Key. Defaults to env[OS_OCTAVIA_CA_KEY_PASS] or None. |
cert_generator = local_cert_generator |
(String) Name of the cert generator to use |
cert_manager = barbican_cert_manager |
(String) Name of the cert manager to use |
endpoint_type = publicURL |
(String) The endpoint_type to be used for barbican service. |
region_name = None |
(String) Region in Identity service catalog to use for communication with the barbican service. |
signing_digest = sha256 |
(String) Certificate signing digest. Defaults to env[OS_OCTAVIA_CA_SIGNING_DIGEST] or “sha256”. |
storage_path = /var/lib/octavia/certificates/ |
(String) Absolute path to the certificate storage directory. Defaults to env[OS_OCTAVIA_TLS_STORAGE]. |
[controller_worker] | |
amp_active_retries = 10 |
(Integer) Retry attempts to wait for Amphora to become active |
amp_active_wait_sec = 10 |
(Integer) Seconds to wait between checks on whether an Amphora has become active |
amp_boot_network_list = |
(List) List of networks to attach to the Amphorae. All networks defined in the list will be attached to each amphora. |
amp_flavor_id = |
(String) Nova instance flavor id for the Amphora |
amp_image_id = |
(String) DEPRECATED: Glance image id for the Amphora image to boot Superseded by amp_image_tag option. |
amp_image_owner_id = |
(String) Restrict glance image selection to a specific owner ID. This is a recommended security setting. |
amp_image_tag = |
(String) Glance image tag for the Amphora image to boot. Use this option to be able to update the image without reconfiguring Octavia. Ignored if amp_image_id is defined. |
amp_network = |
(String) DEPRECATED: Network to attach to the Amphorae. Replaced by amp_boot_network_list. |
amp_secgroup_list = |
(List) List of security groups to attach to the Amphora. |
amp_ssh_access_allowed = True |
(Boolean) Determines whether or not to allow access to the Amphorae |
amp_ssh_key_name = |
(String) SSH key name used to boot the Amphora |
amphora_driver = amphora_noop_driver |
(String) Name of the amphora driver to use |
cert_generator = local_cert_generator |
(String) Name of the cert generator to use |
client_ca = /etc/octavia/certs/ca_01.pem |
(String) Client CA for the amphora agent to use |
compute_driver = compute_noop_driver |
(String) Name of the compute driver to use |
loadbalancer_topology = SINGLE |
(String) Load balancer topology configuration. SINGLE - One amphora per load balancer. ACTIVE_STANDBY - Two amphora per load balancer. |
network_driver = network_noop_driver |
(String) Name of the network driver to use |
user_data_config_drive = False |
(Boolean) If True, build cloud-init user-data that is passed to the config drive on Amphora boot instead of personality files. If False, utilize personality files. |
[glance] | |
ca_certificates_file = None |
(String) CA certificates file path |
endpoint = None |
(String) A new endpoint to override the endpoint in the keystone catalog. |
endpoint_type = publicURL |
(String) Endpoint interface in identity service to use |
insecure = False |
(Boolean) Disable certificate validation on SSL connections |
region_name = None |
(String) Region in Identity service catalog to use for communication with the OpenStack services. |
service_name = None |
(String) The name of the glance service in the keystone catalog |
[haproxy_amphora] | |
base_cert_dir = /var/lib/octavia/certs |
(String) Base directory for cert storage. |
base_path = /var/lib/octavia |
(String) Base directory for amphora files. |
bind_host = 0.0.0.0 |
(IP) The host IP to bind to |
bind_port = 9443 |
(Port number) The port to bind to |
client_cert = /etc/octavia/certs/client.pem |
(String) The client certificate to talk to the agent |
connection_max_retries = 300 |
(Integer) Retry threshold for connecting to amphorae. |
connection_retry_interval = 5 |
(Integer) Retry timeout between connection attempts in seconds. |
haproxy_cmd = /usr/sbin/haproxy |
(String) The full path to haproxy |
haproxy_stick_size = 10k |
(String) Size of the HAProxy stick table. Accepts k, m, g suffixes. Example: 10k |
haproxy_template = None |
(String) Custom haproxy template. |
respawn_count = 2 |
(Integer) The respawn count for haproxy’s upstart script |
respawn_interval = 2 |
(Integer) The respawn interval for haproxy’s upstart script |
rest_request_conn_timeout = 10 |
(Floating point) The time in seconds to wait for a REST API to connect. |
rest_request_read_timeout = 60 |
(Floating point) The time in seconds to wait for a REST API response. |
server_ca = /etc/octavia/certs/server_ca.pem |
(String) The ca which signed the server certificates |
use_upstart = True |
(Boolean) If False, use sysvinit. |
[health_manager] | |
bind_ip = 127.0.0.1 |
(IP) IP address the controller will listen on for heart beats |
bind_port = 5555 |
(Port number) Port number the controller will listen on for heartbeats |
controller_ip_port_list = |
(List) List of controller ip and port pairs for the heartbeat receivers. Example 127.0.0.1:5555, 192.168.0.1:5555 |
event_streamer_driver = noop_event_streamer |
(String) Specifies which driver to use for the event streamer that syncs the octavia and neutron_lbaas databases. If you do not need to sync the databases, or are running octavia in standalone mode, use the noop_event_streamer. |
failover_threads = 10 |
(Integer) Number of threads performing amphora failovers. |
health_check_interval = 3 |
(Integer) Sleep time between health checks in seconds. |
heartbeat_interval = 10 |
(Integer) Sleep time between sending heartbeats. |
heartbeat_key = None |
(String) Key used to validate the amphora sending the message |
heartbeat_timeout = 60 |
(Integer) Interval, in seconds, to wait before failing over an amphora. |
sock_rlimit = 0 |
(Integer) Sets the value of the heartbeat receive buffer |
status_update_threads = 50 |
(Integer) Number of threads performing amphora status update. |
[house_keeping] | |
amphora_expiry_age = 604800 |
(Integer) Amphora expiry age in seconds |
cert_expiry_buffer = 1209600 |
(Integer) Seconds until certificate expiration |
cert_interval = 3600 |
(Integer) Certificate check interval in seconds |
cert_rotate_threads = 10 |
(Integer) Number of threads performing amphora certificate rotation |
cleanup_interval = 30 |
(Integer) DB cleanup interval in seconds |
load_balancer_expiry_age = 604800 |
(Integer) Load balancer expiry age in seconds |
spare_amphora_pool_size = 0 |
(Integer) Number of spare amphorae |
spare_check_interval = 30 |
(Integer) Spare check interval in seconds |
[keepalived_vrrp] | |
vrrp_advert_int = 1 |
(Integer) Amphora role and priority advertisement interval in seconds. |
vrrp_check_interval = 5 |
(Integer) VRRP health check script run interval in seconds. |
vrrp_fail_count = 2 |
(Integer) Number of successive failures before transition to a fail state. |
vrrp_garp_refresh_count = 2 |
(Integer) Number of gratuitous ARP announcements to make on each refresh interval. |
vrrp_garp_refresh_interval = 5 |
(Integer) Time in seconds between gratuitous ARP announcements from the MASTER. |
vrrp_success_count = 2 |
(Integer) Number of consecutive successes before transition to a success state. |
[networking] | |
lb_network_name = None |
(String) Name of amphora internal network |
max_retries = 15 |
(Integer) The maximum attempts to retry an action with the networking service. |
port_detach_timeout = 300 |
(Integer) Seconds to wait for a port to detach from an amphora. |
retry_interval = 1 |
(Integer) Seconds to wait before retrying an action with the networking service. |
[neutron] | |
ca_certificates_file = None |
(String) CA certificates file path |
endpoint = None |
(String) A new endpoint to override the endpoint in the keystone catalog. |
endpoint_type = publicURL |
(String) Endpoint interface in identity service to use |
insecure = False |
(Boolean) Disable certificate validation on SSL connections |
region_name = None |
(String) Region in Identity service catalog to use for communication with the OpenStack services. |
service_name = None |
(String) The name of the neutron service in the keystone catalog |
[nova] | |
ca_certificates_file = None |
(String) CA certificates file path |
enable_anti_affinity = False |
(Boolean) Flag to indicate if nova anti-affinity feature is turned on. |
endpoint = None |
(String) A new endpoint to override the endpoint in the keystone catalog. |
endpoint_type = publicURL |
(String) Endpoint interface in identity service to use |
insecure = False |
(Boolean) Disable certificate validation on SSL connections |
region_name = None |
(String) Region in Identity service catalog to use for communication with the OpenStack services. |
service_name = None |
(String) The name of the nova service in the keystone catalog |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[task_flow] | |
engine = serial |
(String) TaskFlow engine to use |
max_workers = 5 |
(Integer) The maximum number of workers |
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
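Tying several of the octavia tables above together, a minimal /etc/octavia/octavia.conf sketch. The image tag, flavor ID, network ID, and heartbeat key are placeholders you must supply for your deployment:

```ini
[DEFAULT]
bind_host = 127.0.0.1
bind_port = 9876

[controller_worker]
# Placeholders: supply your own image tag, flavor, and management network.
amp_image_tag = amphora
amp_flavor_id = AMPHORA_FLAVOR_ID
amp_boot_network_list = LB_MGMT_NET_ID
loadbalancer_topology = SINGLE

[health_manager]
bind_ip = 127.0.0.1
bind_port = 5555
# Shared secret used to validate amphora heartbeats (placeholder value).
heartbeat_key = HEARTBEAT_KEY
```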
Use the following options in the vpnaas_agent.ini
file for the
VPNaaS agent.
Configuration option = Default value | Description |
---|---|
[vpnagent] | |
vpn_device_driver = ['neutron_vpnaas.services.vpn.device_drivers.ipsec.OpenSwanDriver, neutron_vpnaas.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver, neutron_vpnaas.services.vpn.device_drivers.vyatta_ipsec.VyattaIPSecDriver, neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.fedora_strongswan_ipsec.FedoraStrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver'] |
(Multi-valued) The vpn device drivers Neutron will use |
Configuration option = Default value | Description |
---|---|
[cisco_csr_ipsec] | |
status_check_interval = 60 |
(Integer) Status check interval for Cisco CSR IPSec connections |
[ipsec] | |
config_base_dir = $state_path/ipsec |
(String) Location to store ipsec server config files |
enable_detailed_logging = False |
(Boolean) Enable detailed logging for the ipsec pluto process. If the flag is set to True, the detailed logging will be written into config_base_dir/<pid>/log. Note: This setting applies to OpenSwan and LibreSwan only. StrongSwan logs to syslog. |
ipsec_status_check_interval = 60 |
(Integer) Interval for checking ipsec status |
[pluto] | |
restart_check_config = False |
(Boolean) Enable this flag to avoid unnecessary restarts |
shutdown_check_back_off = 1.5 |
(Floating point) A factor to increase the retry interval for each retry |
shutdown_check_retries = 5 |
(Integer) The maximum number of retries for checking for pluto daemon shutdown |
shutdown_check_timeout = 1 |
(Integer) Initial interval in seconds for checking if pluto daemon is shutdown |
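A vpnaas_agent.ini sketch combining the ipsec and pluto options above; most values are the documented defaults, with detailed logging enabled as an example:

```ini
[ipsec]
# Detailed pluto logging applies to OpenSwan and LibreSwan only;
# StrongSwan logs to syslog.
enable_detailed_logging = True
ipsec_status_check_interval = 60

[pluto]
restart_check_config = True
shutdown_check_retries = 5
shutdown_check_back_off = 1.5
```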
Configuration option = Default value | Description |
---|---|
[openswan] | |
ipsec_config_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/openswan/ipsec.conf.template |
(String) Template file for ipsec configuration |
ipsec_secret_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/openswan/ipsec.secret.template |
(String) Template file for ipsec secret configuration |
Configuration option = Default value | Description |
---|---|
[strongswan] | |
default_config_area = /etc/strongswan.d |
(String) The area where default StrongSwan configuration files are located. |
ipsec_config_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.conf.template |
(String) Template file for ipsec configuration. |
ipsec_secret_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.secret.template |
(String) Template file for ipsec secret configuration. |
strongswan_config_template = /usr/lib/python/site-packages/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/strongswan.conf.template |
(String) Template file for strongswan configuration. |
Note
strongSwan and Openswan cannot both be installed and enabled at the same time. The vpn_device_driver configuration option in the vpnaas_agent.ini file lists the VPN device drivers that the Networking service will use. You must choose either strongSwan or Openswan as part of the list.
Important
Ensure that your strongSwan
version is 5 or newer.
To declare either one in the vpn_device_driver
:
#Openswan
vpn_device_driver = ['neutron_vpnaas.services.vpn.device_drivers.ipsec.OpenSwanDriver']
#strongSwan
vpn_device_driver = ['neutron.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver']
The corresponding log file of each Networking service is stored in the
/var/log/neutron/
directory of the host on which each service
runs.
Log file | Service/interface |
---|---|
dhcp-agent.log |
neutron-dhcp-agent |
l3-agent.log |
neutron-l3-agent |
lbaas-agent.log |
neutron-lbaas-agent [1] |
linuxbridge-agent.log |
neutron-linuxbridge-agent |
metadata-agent.log |
neutron-metadata-agent |
metering-agent.log |
neutron-metering-agent |
openvswitch-agent.log |
neutron-openvswitch-agent |
server.log |
neutron-server |
[1] | The neutron-lbaas-agent service only runs when
Load-Balancer-as-a-Service is enabled. |
The Networking service implements automatic generation of configuration
files. This guide contains a snapshot of common configuration files for
convenience. However, consider generating the latest configuration files
by cloning the neutron repository and running the
tools/generate_config_file_samples.sh
script. Distribution packages
should include sample configuration files for a particular release.
Generally, these files reside in the /etc/neutron
directory structure.
The neutron.conf
file contains the majority of Networking service
options common to all components.
[DEFAULT]
#
# From neutron
#
# Where to store Neutron state files. This directory must be writable by the
# agent. (string value)
#state_path = /var/lib/neutron
# The host IP to bind to (string value)
#bind_host = 0.0.0.0
# The port to bind to (port value)
# Minimum value: 0
# Maximum value: 65535
#bind_port = 9696
# The path for API extensions. Note that this can be a colon-separated list of
# paths. For example: api_extensions_path =
# extensions:/path/to/more/exts:/even/more/exts. The __path__ of
# neutron.extensions is appended to this, so if your extensions are in there
# you don't need to specify them here. (string value)
#api_extensions_path =
# The type of authentication to use (string value)
#auth_strategy = keystone
# The core plugin Neutron will use (string value)
#core_plugin = <None>
# The service plugins Neutron will use (list value)
#service_plugins =
# The base MAC address Neutron will use for VIFs. The first 3 octets will
# remain unchanged. If the 4th octet is not 00, it will also be used. The
# others will be randomly generated. (string value)
#base_mac = fa:16:3e:00:00:00
# DEPRECATED: How many times Neutron will retry MAC generation. This option is
# now obsolete and so is deprecated to be removed in the Ocata release.
# (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#mac_generation_retries = 16
# Allow the usage of the bulk API (boolean value)
#allow_bulk = true
# DEPRECATED: Allow the usage of the pagination. This option has been
# deprecated and will now be enabled unconditionally. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#allow_pagination = true
# DEPRECATED: Allow the usage of the sorting. This option has been deprecated
# and will now be enabled unconditionally. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#allow_sorting = true
# The maximum number of items returned in a single response. A value of
# 'infinite' or a negative integer means no limit. (string value)
#pagination_max_limit = -1
# Default value of availability zone hints. The availability zone aware
# schedulers use this when the resources availability_zone_hints is empty.
# Multiple availability zones can be specified by a comma separated string.
# This value can be empty. In that case, even if availability_zone_hints for a
# resource is empty, the availability zone is still considered for high
# availability while scheduling the resource. (list value)
#default_availability_zones =
# Maximum number of DNS nameservers per subnet (integer value)
#max_dns_nameservers = 5
# Maximum number of host routes per subnet (integer value)
#max_subnet_host_routes = 20
# DEPRECATED: Maximum number of fixed ips per port. This option is deprecated
# and will be removed in the N release. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#max_fixed_ips_per_port = 5
# Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to
# True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable
# environment. Users making subnet creation requests for IPv6 subnets without
# providing a CIDR or subnetpool ID will be given a CIDR via the Prefix
# Delegation mechanism. Note that enabling PD will override the behavior of the
# default IPv6 subnetpool. (boolean value)
#ipv6_pd_enabled = false
# DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite
# lease times. (integer value)
# Deprecated group/name - [DEFAULT]/dhcp_lease_time
#dhcp_lease_duration = 86400
# Domain to use for building the hostnames (string value)
#dns_domain = openstacklocal
# Driver for external DNS integration. (string value)
#external_dns_driver = <None>
# Allow sending resource operation notification to DHCP agent (boolean value)
#dhcp_agent_notification = true
# Allow overlapping IP support in Neutron. Attention: the following parameter
# MUST be set to False if Neutron is being used in conjunction with Nova
# security groups. (boolean value)
#allow_overlapping_ips = false
# Hostname to be used by the Neutron server, agents and services running on
# this machine. All the agents and services running on this machine must use
# the same host value. (string value)
#host = example.domain
# Send notification to nova when port status changes (boolean value)
#notify_nova_on_port_status_changes = true
# Send notification to nova when port data (fixed_ips/floatingip) changes so
# nova can update its cache. (boolean value)
#notify_nova_on_port_data_changes = true
# Number of seconds between sending events to nova if there are any events to
# send. (integer value)
#send_events_interval = 2
# DEPRECATED: If True, advertise network MTU values if core plugin calculates
# them. MTU is advertised to running instances via DHCP and RA MTU options.
# (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#advertise_mtu = true
# Neutron IPAM (IP address management) driver to use. By default, the reference
# implementation of the Neutron IPAM driver is used. (string value)
#ipam_driver = internal
# If True, then allow plugins that support it to create VLAN transparent
# networks. (boolean value)
#vlan_transparent = false
# This will choose the web framework in which to run the Neutron API server.
# 'pecan' is a new experimental rewrite of the API server. (string value)
# Allowed values: legacy, pecan
#web_framework = legacy
# MTU of the underlying physical network. Neutron uses this value to calculate
# MTU for all virtual network components. For flat and VLAN networks, neutron
# uses this value without modification. For overlay networks such as VXLAN,
# neutron automatically subtracts the overlay protocol overhead from this
# value. Defaults to 1500, the standard value for Ethernet. (integer value)
# Deprecated group/name - [ml2]/segment_mtu
#global_physnet_mtu = 1500
# Number of backlog requests to configure the socket with (integer value)
#backlog = 4096
# Number of seconds to keep retrying to listen (integer value)
#retry_until_window = 30
# Enable SSL on the API server (boolean value)
#use_ssl = false
# Seconds between running periodic tasks. (integer value)
#periodic_interval = 40
# Number of separate API worker processes for service. If not specified, the
# default is equal to the number of CPUs available for best performance.
# (integer value)
#api_workers = <None>
# Number of RPC worker processes for service. (integer value)
#rpc_workers = 1
# Number of RPC worker processes dedicated to state reports queue. (integer
# value)
#rpc_state_report_workers = 1
# Range of seconds to randomly delay when starting the periodic task scheduler
# to reduce stampeding. (Disable by setting to 0) (integer value)
#periodic_fuzzy_delay = 5
#
# From neutron.agent
#
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
# Location for Metadata Proxy UNIX domain socket. (string value)
#metadata_proxy_socket = $state_path/metadata_proxy
# User (uid or name) running metadata proxy after its initialization (if empty:
# agent effective user). (string value)
#metadata_proxy_user =
# Group (gid or name) running metadata proxy after its initialization (if
# empty: agent effective group). (string value)
#metadata_proxy_group =
# Enable/Disable log watch by metadata proxy. Disable it when
# metadata_proxy_user/group is not allowed to read/write its log file; the
# copytruncate logrotate option must be used if logrotate is enabled on
# metadata proxy log files. The option's default value is deduced from
# metadata_proxy_user: log watch is enabled if metadata_proxy_user is the agent
# effective user id/name. (boolean value)
#metadata_proxy_watch_log = <None>
#
# From neutron.db
#
# Seconds after which an agent is considered down; should be at least twice
# report_interval, to be sure the agent is down for good. (integer value)
#agent_down_time = 75
# The resource type whose load is being reported by the agent. This can be
# "networks", "subnets" or "ports". When specified (default is networks), the
# server extracts the particular load sent as part of the agent's
# configuration object from the agent report state, which is the number of
# resources being consumed, at every report_interval. dhcp_load_type can be
# used in combination with network_scheduler_driver =
# neutron.scheduler.dhcp_agent_scheduler.WeightScheduler. When the
# network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured
# to represent the choice for the resource being balanced. Example:
# dhcp_load_type=networks (string value)
# Allowed values: networks, subnets, ports
#dhcp_load_type = networks
# Agents start with admin_state_up=False when enable_new_agents=False. In that
# case, users' resources will not be scheduled automatically to the agent until
# the admin changes admin_state_up to True. (boolean value)
#enable_new_agents = true
# Maximum number of routes per router (integer value)
#max_routes = 30
# Define the default value of enable_snat if not provided in
# external_gateway_info. (boolean value)
#enable_snat_by_default = true
# Driver to use for scheduling network to DHCP agent (string value)
#network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
# Allow auto scheduling networks to DHCP agent. (boolean value)
#network_auto_schedule = true
# Automatically remove networks from offline DHCP agents. (boolean value)
#allow_automatic_dhcp_failover = true
# Number of DHCP agents scheduled to host a tenant network. If this number is
# greater than 1, the scheduler automatically assigns multiple DHCP agents for
# a given tenant network, providing high availability for DHCP service.
# (integer value)
#dhcp_agents_per_network = 1
# Enable services on an agent with admin_state_up False. If this option is
# False, services on an agent are disabled when its admin_state_up is turned
# False. Agents with admin_state_up False are not selected for automatic
# scheduling regardless of this option, but manual scheduling to such agents is
# available if this option is True. (boolean value)
#enable_services_on_agents_with_admin_state_down = false
# The base MAC address used for unique DVR instances by Neutron. The first 3
# octets will remain unchanged. If the 4th octet is not 00, it will also be
# used. The others will be randomly generated. The 'dvr_base_mac' *must* be
# different from 'base_mac' to avoid mixing them up with MACs allocated for
# tenant ports. A 4-octet example would be dvr_base_mac = fa:16:3f:4f:00:00.
# The default is 3 octets. (string value)
#dvr_base_mac = fa:16:3f:00:00:00
# System-wide flag to determine the type of router that tenants can create.
# Only admin can override. (boolean value)
#router_distributed = false
# Driver to use for scheduling router to a default L3 agent (string value)
#router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
# Allow auto scheduling of routers to L3 agent. (boolean value)
#router_auto_schedule = true
# Automatically reschedule routers from offline L3 agents to online L3 agents.
# (boolean value)
#allow_automatic_l3agent_failover = false
# Enable HA mode for virtual routers. (boolean value)
#l3_ha = false
# Maximum number of L3 agents on which an HA router will be scheduled. If set
# to 0, the router will be scheduled on every agent. (integer value)
#max_l3_agents_per_router = 3
# DEPRECATED: Minimum number of L3 agents that have to be available in order to
# allow a new HA router to be scheduled. This option is deprecated in the
# Newton release and will be removed for the Ocata release where the scheduling
# of new HA routers will always be allowed. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#min_l3_agents_per_router = 2
# Subnet used for the l3 HA admin network. (string value)
#l3_ha_net_cidr = 169.254.192.0/18
# The network type to use when creating the HA network for an HA router. By
# default or if empty, the first 'tenant_network_types' is used. This is
# helpful when the VRRP traffic should use a specific network which is not the
# default one. (string value)
#l3_ha_network_type =
# The physical network name with which the HA network can be created. (string
# value)
#l3_ha_network_physical_name =
#
# From neutron.extensions
#
# Maximum number of allowed address pairs (integer value)
#max_allowed_address_pair = 10
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
#
# From oslo.messaging
#
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2
# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises a timeout
# exception when the timeout expires. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about existing target
# ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means not
# keeping a queue when the server side disconnects. False means to keep the
# queue and messages even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit
# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = neutron
#
# From oslo.service.wsgi
#
# File name for the paste.deploy config for api service (string value)
#api_paste_config = api-paste.ini
# A Python format string that is used as the template to generate log lines.
# The following values can be formatted into it: client_ip, date_time,
# request_line, status_code, body_length, wall_seconds. (string value)
#wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
# Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not
# supported on OS X. (integer value)
#tcp_keepidle = 600
# Size of the pool of greenthreads used by wsgi (integer value)
#wsgi_default_pool_size = 100
# Maximum line size of message headers to be accepted. max_header_line may need
# to be increased when using large tokens (typically those generated when
# keystone is configured to use PKI tokens with big service catalogs). (integer
# value)
#max_header_line = 16384
# If False, closes the client socket connection explicitly. (boolean value)
#wsgi_keep_alive = true
# Timeout for client connections' socket operations. If an incoming connection
# is idle for this number of seconds it will be closed. A value of '0' means
# wait forever. (integer value)
#client_socket_timeout = 900
[agent]
#
# From neutron.agent
#
# Root helper application. Use 'sudo neutron-rootwrap
# /etc/neutron/rootwrap.conf' to use the real root filter facility. Change to
# 'sudo' to skip the filtering and just run the command directly. (string
# value)
#root_helper = sudo
# Use the root helper when listing the namespaces on a system. This may not be
# required depending on the security configuration. If the root helper is not
# required, set this to False for a performance improvement. (boolean value)
#use_helper_for_ns_read = true
# Root helper daemon application to use when possible. (string value)
#root_helper_daemon = <None>
# Seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
#report_interval = 30
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false
# Add comments to iptables rules. Set to false to disallow the addition of
# comments to generated iptables rules that describe each rule's purpose.
# System must support the iptables comments module for addition of comments.
# (boolean value)
#comment_iptables_rules = true
# Duplicate every iptables difference calculation to ensure the format being
# generated matches the format of iptables-save. This option should not be
# turned on for production systems because it imposes a performance penalty.
# (boolean value)
#debug_iptables_rules = false
# Action to be executed when a child process dies (string value)
# Allowed values: respawn, exit
#check_child_processes_action = respawn
# Interval between checks of child process liveness (seconds), use 0 to disable
# (integer value)
#check_child_processes_interval = 60
# Availability zone of this node (string value)
#availability_zone = nova
[cors]
#
# From oslo.middleware.cors
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
[cors.subdomain]
#
# From oslo.middleware.cors
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no
# trailing slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-OpenStack-Request-ID,OpenStack-Volume-microversion
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-OpenStack-Request-ID
[database]
#
# From neutron.db
#
# Database engine for which script will be generated when using offline
# migration. (string value)
#engine =
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. (string
# value)
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# How many times to retry reconnecting when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3
# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if identity server requires client certificate (string value)
#certfile = <None>
# Required if identity server requires client certificate (string value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300
# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance. Only
# valid for PKI tokens. (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it
# if not. "strict" like "permissive" but if the bind type is unknown the token
# will be rejected. "required" to require any form of token binding. Finally,
# the name of a binding method that must be present in tokens. (string value)
#enforce_token_bind = permissive
# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false
# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
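As an illustration of the caching options above, the following fragment enables a shared memcached backend with authenticated and encrypted token data. The server address and secret key are placeholders; substitute values for your deployment.

```
[keystone_authtoken]
# Cache validated tokens in memcached rather than in-process
memcached_servers = controller:11211
token_cache_time = 300
# Authenticate and encrypt cached token data
memcache_security_strategy = ENCRYPT
memcache_secret_key = SECRET_KEY
```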
[matchmaker_redis]
#
# From oslo.messaging
#
# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1
# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379
# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =
# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000
[nova]
#
# From neutron
#
# Name of nova region to use. Useful if keystone manages more than one region.
# (string value)
#region_name = <None>
# Type of the nova endpoint to use. This endpoint will be looked up in the
# keystone catalog and should be one of public, internal or admin. (string
# value)
# Allowed values: public, admin, internal
#endpoint_type = public
#
# From nova.auth
#
# Authentication URL (string value)
#auth_url = <None>
# Authentication type to load (string value)
# Deprecated group/name - [nova]/auth_plugin
#auth_type = <None>
# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# (string value)
#cafile = <None>
# PEM encoded client certificate cert file (string value)
#certfile = <None>
# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>
# Optional domain name to use with v3 API and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>
# Domain ID to scope to (string value)
#domain_id = <None>
# Domain name to scope to (string value)
#domain_name = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# PEM encoded client certificate key file (string value)
#keyfile = <None>
# User's password (string value)
#password = <None>
# Domain ID containing project (string value)
#project_domain_id = <None>
# Domain name containing project (string value)
#project_domain_name = <None>
# Project ID to scope to (string value)
# Deprecated group/name - [nova]/tenant-id
#project_id = <None>
# Project name to scope to (string value)
# Deprecated group/name - [nova]/tenant-name
#project_name = <None>
# Tenant ID (string value)
#tenant_id = <None>
# Tenant Name (string value)
#tenant_name = <None>
# Timeout value for http requests (integer value)
#timeout = <None>
# Trust ID (string value)
#trust_id = <None>
# User's domain id (string value)
#user_domain_id = <None>
# User's domain name (string value)
#user_domain_name = <None>
# User id (string value)
#user_id = <None>
# Username (string value)
# Deprecated group/name - [nova]/user-name
#username = <None>
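A typical way to combine the credential options above is password authentication against the Identity service. This is a sketch only; the URL, project, user, and password values are placeholders for your environment.

```
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
```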
[oslo_concurrency]
#
# From oslo.concurrency
#
# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false
# Directory to use for lock files. For security, the specified directory
# should only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>
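When external (inter-process) locks are used, a lock path must be set explicitly. A minimal fragment, with an illustrative directory that should be writable only by the service user:

```
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
```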
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Name for the AMQP container. Must be globally unique. Defaults to a generated
# UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1
# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2
# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30
# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10
# The deadline for an rpc reply message delivery. Only used when caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30
# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30
# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30
# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy' - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic' - use legacy addresses if the message bus does not support routing
# otherwise use routable addressing (string value)
#addressing_mode = dynamic
# Address prefix used when sending to a specific server. (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers. (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# Address prefix used when sending to any server in a group. (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc
# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify
# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast
# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast
# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast
# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>
# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200
# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100
# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
[oslo_messaging_notifications]
#
# From oslo.messaging
#
# The driver(s) to handle sending notifications. Possible values are
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
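A common configuration of the options above enables the `messagingv2` driver on the default topic, so notifications are emitted over the same messaging transport as RPC. Values shown are illustrative:

```
[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
```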
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
# be used. This option may not be available in future versions. (string value)
#kombu_compression = <None>
# How long to wait for a missing client before abandoning sending it its
# replies. This value should not be longer than rpc_response_timeout. (integer
# value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost
# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672
# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest
# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# How long to backoff for between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30
# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0
# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800
# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0
# Number of seconds after which the RabbitMQ broker is considered down if the
# heartbeat's keep-alive fails (0 disables the heartbeat). EXPERIMENTAL
# (integer value)
#heartbeat_timeout_threshold = 60
# How many times during the heartbeat_timeout_threshold to check the
# heartbeat. (integer value)
#heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
# Maximum number of channels to allow (integer value)
#channel_max = <None>
# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>
# How often to send heartbeats for consumer's connections (integer value)
#heartbeat_interval = 3
# Enable SSL (boolean value)
#ssl = <None>
# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>
# Set socket timeout in seconds for connection's socket (floating point value)
#socket_timeout = 0.25
# Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point
# value)
#tcp_user_timeout = 0.25
# Set the delay for reconnecting to a host that has a connection error.
# (floating point value)
#host_connection_reconnect_delay = 0.25
# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single
# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30
# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0
# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30
# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600
# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60
# Persist notification messages. (boolean value)
#notification_persistence = false
# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification
# Maximum number of unacknowledged messages that RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending
# notification, -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending
# notification message (floating point value)
#notification_retry_delay = 0.25
# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60
# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc
# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# listener. (integer value)
#rpc_listener_prefetch_count = 100
# Maximum number of unacknowledged messages that RabbitMQ can send to the RPC
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problem during sending
# reply. -1 means infinite retry during rpc_timeout (integer value)
#rpc_reply_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25
# Reconnecting retry count in case of connectivity problem during sending RPC
# message, -1 means infinite retry. If the actual number of retry attempts is
# not 0, the RPC request could be processed more than once. (integer value)
#default_rpc_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problem during sending RPC
# message (floating point value)
#rpc_retry_delay = 0.25
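Several of the single-node options above (`rabbit_host`, `rabbit_port`, `rabbit_userid`, `rabbit_password`, `rabbit_virtual_host`) are deprecated in favor of a single `[DEFAULT]/transport_url` setting. As a sketch, the deprecated defaults map to a URL of the form `rabbit://user:password@host:port/virtual_host`; the credentials and host below are placeholders:

```
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
```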
[oslo_messaging_zmq]
#
# From oslo.messaging
#
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about existing target
# ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find a free port number before failing with
# ZMQBindError. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue and
# messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
[qos]
#
# From neutron.qos
#
# Drivers list to use to send the update notification (list value)
#notification_drivers = message_queue
[quotas]
#
# From neutron
#
# Default number of resources allowed per tenant. A negative value means
# unlimited. (integer value)
#default_quota = -1
# Number of networks allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_network = 10
# Number of subnets allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_subnet = 10
# Number of ports allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_port = 50
# Default driver to use for quota checks. (string value)
#quota_driver = neutron.db.quota.driver.DbQuotaDriver
# Keep track of current resource quota usage in the database. Plugins which do
# not leverage the neutron database should set this flag to False. (boolean
# value)
#track_quota_usage = true
#
# From neutron.extensions
#
# Number of routers allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_router = 10
# Number of floating IPs allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_floatingip = 50
# Number of security groups allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_security_group = 10
# Number of security rules allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_security_group_rule = 100
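To raise a per-tenant limit, uncomment the relevant option and set a new value; a negative value removes the limit entirely. The numbers below are illustrative only:

```
[quotas]
# Allow 20 networks and unlimited ports per tenant
quota_network = 20
quota_port = -1
```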
[ssl]
#
# From oslo.service.sslutils
#
# CA certificate file to use to verify connecting clients. (string value)
# Deprecated group/name - [DEFAULT]/ssl_ca_file
#ca_file = <None>
# Certificate file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_cert_file
#cert_file = <None>
# Private key file to use when starting the server securely. (string value)
# Deprecated group/name - [DEFAULT]/ssl_key_file
#key_file = <None>
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
#version = <None>
# Sets the list of available ciphers. The value should be a string in the
# OpenSSL cipher list format. (string value)
#ciphers = <None>
The api-paste.ini
file contains the configuration for the Web Server
Gateway Interface (WSGI) pipeline.
[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions_composite
/v2.0: neutronapi_v2_0
[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = cors request_id catch_errors extensions neutronapiapp_v2_0
keystone = cors request_id catch_errors authtoken keystonecontext extensions neutronapiapp_v2_0
[composite:neutronversions_composite]
use = call:neutron.auth:pipeline_factory
noauth = cors neutronversions
keystone = cors neutronversions
[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory
[filter:catch_errors]
paste.filter_factory = oslo_middleware:CatchErrors.factory
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = neutron
[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory
[app:neutronversions]
paste.app_factory = neutron.api.versions:Versions.factory
[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory
[filter:osprofiler]
paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
The policy.json
file defines the API access policy.
{
"context_is_admin": "role:admin",
"owner": "tenant_id:%(tenant_id)s",
"admin_or_owner": "rule:context_is_admin or rule:owner",
"context_is_advsvc": "role:advsvc",
"admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s",
"admin_owner_or_network_owner": "rule:owner or rule:admin_or_network_owner",
"admin_only": "rule:context_is_admin",
"regular_user": "",
"shared": "field:networks:shared=True",
"shared_subnetpools": "field:subnetpools:shared=True",
"shared_address_scopes": "field:address_scopes:shared=True",
"external": "field:networks:router:external=True",
"default": "rule:admin_or_owner",
"create_subnet": "rule:admin_or_network_owner",
"create_subnet:segment_id": "rule:admin_only",
"create_subnet:service_types": "rule:admin_only",
"get_subnet": "rule:admin_or_owner or rule:shared",
"get_subnet:segment_id": "rule:admin_only",
"update_subnet": "rule:admin_or_network_owner",
"update_subnet:service_types": "rule:admin_only",
"delete_subnet": "rule:admin_or_network_owner",
"create_subnetpool": "",
"create_subnetpool:shared": "rule:admin_only",
"create_subnetpool:is_default": "rule:admin_only",
"get_subnetpool": "rule:admin_or_owner or rule:shared_subnetpools",
"update_subnetpool": "rule:admin_or_owner",
"update_subnetpool:is_default": "rule:admin_only",
"delete_subnetpool": "rule:admin_or_owner",
"create_address_scope": "",
"create_address_scope:shared": "rule:admin_only",
"get_address_scope": "rule:admin_or_owner or rule:shared_address_scopes",
"update_address_scope": "rule:admin_or_owner",
"update_address_scope:shared": "rule:admin_only",
"delete_address_scope": "rule:admin_or_owner",
"create_network": "",
"get_network": "rule:admin_or_owner or rule:shared or rule:external or rule:context_is_advsvc",
"get_network:router:external": "rule:regular_user",
"get_network:segments": "rule:admin_only",
"get_network:provider:network_type": "rule:admin_only",
"get_network:provider:physical_network": "rule:admin_only",
"get_network:provider:segmentation_id": "rule:admin_only",
"get_network:queue_id": "rule:admin_only",
"get_network_ip_availabilities": "rule:admin_only",
"get_network_ip_availability": "rule:admin_only",
"create_network:shared": "rule:admin_only",
"create_network:router:external": "rule:admin_only",
"create_network:is_default": "rule:admin_only",
"create_network:segments": "rule:admin_only",
"create_network:provider:network_type": "rule:admin_only",
"create_network:provider:physical_network": "rule:admin_only",
"create_network:provider:segmentation_id": "rule:admin_only",
"update_network": "rule:admin_or_owner",
"update_network:segments": "rule:admin_only",
"update_network:shared": "rule:admin_only",
"update_network:provider:network_type": "rule:admin_only",
"update_network:provider:physical_network": "rule:admin_only",
"update_network:provider:segmentation_id": "rule:admin_only",
"update_network:router:external": "rule:admin_only",
"delete_network": "rule:admin_or_owner",
"create_segment": "rule:admin_only",
"get_segment": "rule:admin_only",
"update_segment": "rule:admin_only",
"delete_segment": "rule:admin_only",
"network_device": "field:port:device_owner=~^network:",
"create_port": "",
"create_port:device_owner": "not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner",
"create_port:mac_address": "rule:context_is_advsvc or rule:admin_or_network_owner",
"create_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",
"create_port:port_security_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
"create_port:binding:host_id": "rule:admin_only",
"create_port:binding:profile": "rule:admin_only",
"create_port:mac_learning_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
"create_port:allowed_address_pairs": "rule:admin_or_network_owner",
"get_port": "rule:context_is_advsvc or rule:admin_owner_or_network_owner",
"get_port:queue_id": "rule:admin_only",
"get_port:binding:vif_type": "rule:admin_only",
"get_port:binding:vif_details": "rule:admin_only",
"get_port:binding:host_id": "rule:admin_only",
"get_port:binding:profile": "rule:admin_only",
"update_port": "rule:admin_or_owner or rule:context_is_advsvc",
"update_port:device_owner": "not rule:network_device or rule:context_is_advsvc or rule:admin_or_network_owner",
"update_port:mac_address": "rule:admin_only or rule:context_is_advsvc",
"update_port:fixed_ips": "rule:context_is_advsvc or rule:admin_or_network_owner",
"update_port:port_security_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
"update_port:binding:host_id": "rule:admin_only",
"update_port:binding:profile": "rule:admin_only",
"update_port:mac_learning_enabled": "rule:context_is_advsvc or rule:admin_or_network_owner",
"update_port:allowed_address_pairs": "rule:admin_or_network_owner",
"delete_port": "rule:context_is_advsvc or rule:admin_owner_or_network_owner",
"get_router:ha": "rule:admin_only",
"create_router": "rule:regular_user",
"create_router:external_gateway_info:enable_snat": "rule:admin_only",
"create_router:distributed": "rule:admin_only",
"create_router:ha": "rule:admin_only",
"get_router": "rule:admin_or_owner",
"get_router:distributed": "rule:admin_only",
"update_router:external_gateway_info:enable_snat": "rule:admin_only",
"update_router:distributed": "rule:admin_only",
"update_router:ha": "rule:admin_only",
"delete_router": "rule:admin_or_owner",
"add_router_interface": "rule:admin_or_owner",
"remove_router_interface": "rule:admin_or_owner",
"create_router:external_gateway_info:external_fixed_ips": "rule:admin_only",
"update_router:external_gateway_info:external_fixed_ips": "rule:admin_only",
"insert_rule": "rule:admin_or_owner",
"remove_rule": "rule:admin_or_owner",
"create_qos_queue": "rule:admin_only",
"get_qos_queue": "rule:admin_only",
"update_agent": "rule:admin_only",
"delete_agent": "rule:admin_only",
"get_agent": "rule:admin_only",
"create_dhcp-network": "rule:admin_only",
"delete_dhcp-network": "rule:admin_only",
"get_dhcp-networks": "rule:admin_only",
"create_l3-router": "rule:admin_only",
"delete_l3-router": "rule:admin_only",
"get_l3-routers": "rule:admin_only",
"get_dhcp-agents": "rule:admin_only",
"get_l3-agents": "rule:admin_only",
"get_loadbalancer-agent": "rule:admin_only",
"get_loadbalancer-pools": "rule:admin_only",
"get_agent-loadbalancers": "rule:admin_only",
"get_loadbalancer-hosting-agent": "rule:admin_only",
"create_floatingip": "rule:regular_user",
"create_floatingip:floating_ip_address": "rule:admin_only",
"update_floatingip": "rule:admin_or_owner",
"delete_floatingip": "rule:admin_or_owner",
"get_floatingip": "rule:admin_or_owner",
"create_network_profile": "rule:admin_only",
"update_network_profile": "rule:admin_only",
"delete_network_profile": "rule:admin_only",
"get_network_profiles": "",
"get_network_profile": "",
"update_policy_profiles": "rule:admin_only",
"get_policy_profiles": "",
"get_policy_profile": "",
"create_metering_label": "rule:admin_only",
"delete_metering_label": "rule:admin_only",
"get_metering_label": "rule:admin_only",
"create_metering_label_rule": "rule:admin_only",
"delete_metering_label_rule": "rule:admin_only",
"get_metering_label_rule": "rule:admin_only",
"get_service_provider": "rule:regular_user",
"get_lsn": "rule:admin_only",
"create_lsn": "rule:admin_only",
"create_flavor": "rule:admin_only",
"update_flavor": "rule:admin_only",
"delete_flavor": "rule:admin_only",
"get_flavors": "rule:regular_user",
"get_flavor": "rule:regular_user",
"create_service_profile": "rule:admin_only",
"update_service_profile": "rule:admin_only",
"delete_service_profile": "rule:admin_only",
"get_service_profiles": "rule:admin_only",
"get_service_profile": "rule:admin_only",
"get_policy": "rule:regular_user",
"create_policy": "rule:admin_only",
"update_policy": "rule:admin_only",
"delete_policy": "rule:admin_only",
"get_policy_bandwidth_limit_rule": "rule:regular_user",
"create_policy_bandwidth_limit_rule": "rule:admin_only",
"delete_policy_bandwidth_limit_rule": "rule:admin_only",
"update_policy_bandwidth_limit_rule": "rule:admin_only",
"get_policy_dscp_marking_rule": "rule:regular_user",
"create_policy_dscp_marking_rule": "rule:admin_only",
"delete_policy_dscp_marking_rule": "rule:admin_only",
"update_policy_dscp_marking_rule": "rule:admin_only",
"get_rule_type": "rule:regular_user",
"get_policy_minimum_bandwidth_rule": "rule:regular_user",
"create_policy_minimum_bandwidth_rule": "rule:admin_only",
"delete_policy_minimum_bandwidth_rule": "rule:admin_only",
"update_policy_minimum_bandwidth_rule": "rule:admin_only",
"restrict_wildcard": "(not field:rbac_policy:target_tenant=*) or rule:admin_only",
"create_rbac_policy": "",
"create_rbac_policy:target_tenant": "rule:restrict_wildcard",
"update_rbac_policy": "rule:admin_or_owner",
"update_rbac_policy:target_tenant": "rule:restrict_wildcard and rule:admin_or_owner",
"get_rbac_policy": "rule:admin_or_owner",
"delete_rbac_policy": "rule:admin_or_owner",
"create_flavor_service_profile": "rule:admin_only",
"delete_flavor_service_profile": "rule:admin_only",
"get_flavor_service_profile": "rule:regular_user",
"get_auto_allocated_topology": "rule:admin_or_owner",
"create_trunk": "rule:regular_user",
"get_trunk": "rule:admin_or_owner",
"delete_trunk": "rule:admin_or_owner",
"get_subports": "",
"add_subports": "rule:admin_or_owner",
"remove_subports": "rule:admin_or_owner"
}
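Each entry maps an API operation (or an attribute of one) to a rule expression. As an illustrative local customization (not a default), you could tighten network creation to administrators by overriding the empty `create_network` rule shown above:

```json
{
    "create_network": "rule:admin_only"
}
```

Rules can combine named rules with `and`, `or`, and `not`, match roles (`role:admin`), or compare request attributes against the target (`tenant_id:%(tenant_id)s`), as in the default entries above.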
The rootwrap.conf
file contains configuration for system utilities
that require privilege escalation to execute.
# Configuration for neutron-rootwrap
# This file should be owned by (and only-writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
[xenapi]
# XenAPI configuration is only required by the L2 agent if it is to
# target a XenServer/XCP compute host's dom0.
xenapi_connection_url=<None>
xenapi_connection_username=root
xenapi_connection_password=<None>
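The directories listed in `filters_path` contain filter definition files that declare exactly which commands rootwrap may execute as root. A minimal illustrative filter file might look like this (the file name is arbitrary; real filter files ship with each Neutron agent package):

```ini
# /etc/neutron/rootwrap.d/example.filters
# Each entry is name: FilterClass, executable, run-as-user
[Filters]
# Allow the agent to run sysctl as root via a generic CommandFilter
sysctl: CommandFilter, sysctl, root
```

Only commands matching a filter entry are escalated; anything else is rejected, which is why these files, like the directories in `filters_path`, must be writable only by root.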
Although the Networking service supports other plug-ins and agents, this guide contains configuration files only for the following reference architecture components:
The plugins/ml2/ml2_conf.ini
file contains configuration for the ML2
plug-in.
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[ml2]
#
# From neutron.ml2
#
# List of network type driver entrypoints to be loaded from the
# neutron.ml2.type_drivers namespace. (list value)
#type_drivers = local,flat,vlan,gre,vxlan,geneve
# Ordered list of network_types to allocate as tenant networks. The default
# value 'local' is useful for single-box testing but provides no connectivity
# between hosts. (list value)
#tenant_network_types = local
# An ordered list of networking mechanism driver entrypoints to be loaded from
# the neutron.ml2.mechanism_drivers namespace. (list value)
#mechanism_drivers =
# An ordered list of extension driver entrypoints to be loaded from the
# neutron.ml2.extension_drivers namespace. For example: extension_drivers =
# port_security,qos (list value)
#extension_drivers =
# Maximum size of an IP packet (MTU) that can traverse the underlying physical
# network infrastructure without fragmentation when using an overlay/tunnel
# protocol. This option allows specifying a physical network MTU value that
# differs from the default global_physnet_mtu value. (integer value)
#path_mtu = 0
# A list of mappings of physical networks to MTU values. The format of the
# mapping is <physnet>:<mtu val>. This mapping allows specifying a physical
# network MTU value that differs from the default global_physnet_mtu value.
# (list value)
#physical_network_mtus =
# Default network type for external networks when no provider attributes are
# specified. By default it is None, which means that if provider attributes are
# not specified while creating external networks then they will have the same
# type as tenant networks. Allowed values for external_network_type config
# option depend on the network type values configured in type_drivers config
# option. (string value)
#external_network_type = <None>
# IP version of all overlay (tunnel) network endpoints. Use a value of 4 for
# IPv4 or 6 for IPv6. (integer value)
#overlay_ip_version = 4
[ml2_type_flat]
#
# From neutron.ml2
#
# List of physical_network names with which flat networks can be created. Use
# default '*' to allow flat networks with arbitrary physical_network names. Use
# an empty list to disable flat networks. (list value)
#flat_networks = *
[ml2_type_geneve]
#
# From neutron.ml2
#
# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of
# Geneve VNI IDs that are available for tenant network allocation (list value)
#vni_ranges =
# Geneve encapsulation header size is dynamic, this value is used to calculate
# the maximum MTU for the driver. This is the sum of the sizes of the outer ETH
# + IP + UDP + GENEVE header sizes. The default size for this field is 50,
# which is the size of the Geneve header without any additional option headers.
# (integer value)
#max_header_size = 30
[ml2_type_gre]
#
# From neutron.ml2
#
# Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE
# tunnel IDs that are available for tenant network allocation (list value)
#tunnel_id_ranges =
[ml2_type_vlan]
#
# From neutron.ml2
#
# List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network>
# specifying physical_network names usable for VLAN provider and tenant
# networks, as well as ranges of VLAN tags on each available for allocation to
# tenant networks. (list value)
#network_vlan_ranges =
[ml2_type_vxlan]
#
# From neutron.ml2
#
# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of
# VXLAN VNI IDs that are available for tenant network allocation (list value)
#vni_ranges =
# Multicast group for VXLAN. When configured, will enable sending all broadcast
# traffic to this multicast group. When left unconfigured, will disable
# multicast VXLAN mode. (string value)
#vxlan_group = <None>
[securitygroup]
#
# From neutron.ml2
#
# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>
# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true
# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true
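As a minimal sketch of how these options fit together, the following configures VXLAN self-service networks on top of a flat provider network. The physical network name `provider` and the VNI range are illustrative values; adjust them to your deployment:

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
```

Note that `type_drivers` must include every network type you intend to use for both provider and tenant networks, while `tenant_network_types` lists only the types allocated automatically for tenant networks.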
The plugins/ml2/ml2_conf_sriov.ini
file contains configuration for the
ML2 plug-in specific to SR-IOV.
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[ml2_sriov]
#
# From neutron.ml2.sriov
#
# DEPRECATED: Comma-separated list of supported PCI vendor devices, as defined
# by vendor_id:product_id according to the PCI ID Repository. The default value
# None accepts all PCI vendor devices. This option is deprecated in the Newton
# release and will be removed in the Ocata release. Starting from Ocata the
# mechanism driver will accept all PCI vendor devices. (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#supported_pci_vendor_devs = <None>
The plugins/ml2/linuxbridge_agent.ini
file contains configuration for the
Linux bridge layer-2 agent.
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[agent]
#
# From neutron.ml2.linuxbridge.agent
#
# The number of seconds the agent will wait between polling for local device
# changes. (integer value)
#polling_interval = 2
# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
# value is set to 0, rpc timeout won't be changed (integer value)
#quitting_rpc_timeout = 10
# DEPRECATED: Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate. Note: This
# prevents the VMs attached to this agent from spoofing, it doesn't protect
# them from other devices which have the capability to spoof (e.g. bare metal
# or VMs attached to agents without this flag set to True). Spoofing rules will
# not be added to any ports that have port security disabled. For LinuxBridge,
# this requires ebtables. For OVS, it requires a version that supports matching
# ARP headers. This option will be removed in Ocata so the only way to disable
# protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true
# Extensions list to use (list value)
#extensions =
[linux_bridge]
#
# From neutron.ml2.linuxbridge.agent
#
# Comma-separated list of <physical_network>:<physical_interface> tuples
# mapping physical network names to the agent's node-specific physical network
# interfaces to be used for flat and VLAN networks. All physical networks
# listed in network_vlan_ranges on the server should have mappings to
# appropriate interfaces on each agent. (list value)
#physical_interface_mappings =
# List of <physical_network>:<physical_bridge> (list value)
#bridge_mappings =
[securitygroup]
#
# From neutron.ml2.linuxbridge.agent
#
# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>
# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true
# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true
[vxlan]
#
# From neutron.ml2.linuxbridge.agent
#
# Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin
# using linuxbridge mechanism driver (boolean value)
#enable_vxlan = true
# TTL for vxlan interface protocol packets. (integer value)
#ttl = <None>
# TOS for vxlan interface protocol packets. (integer value)
#tos = <None>
# Multicast group(s) for vxlan interface. A range of group addresses may be
# specified by using CIDR notation. Specifying a range allows different VNIs to
# use different group addresses, reducing or eliminating spurious broadcast
# traffic to the tunnel endpoints. To reserve a unique group for each possible
# (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on
# all the agents. (string value)
#vxlan_group = 224.0.0.1
# IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or
# IPv6 address that resides on one of the host network interfaces. The IP
# version of this value must match the value of the 'overlay_ip_version' option
# in the ML2 plug-in configuration file on the neutron server node(s). (IP
# address value)
#local_ip = <None>
# Extension to use alongside ml2 plugin's l2population mechanism driver. It
# enables the plugin to populate VXLAN forwarding table. (boolean value)
#l2_population = false
# Enable local ARP responder which provides local responses instead of
# performing ARP broadcast into the overlay. Enabling local ARP responder is
# not fully compatible with the allowed-address-pairs extension. (boolean
# value)
#arp_responder = false
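A minimal agent configuration matching a VXLAN-over-flat-provider layout might look like the following sketch. `PROVIDER_INTERFACE_NAME` and `OVERLAY_INTERFACE_IP_ADDRESS` are placeholders for the node's physical interface and tunnel endpoint address, and the physical network name `provider` must match the server-side ML2 configuration:

```ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```

Because `local_ip` is node-specific, this file differs per host even when the rest of the agent configuration is identical across the deployment.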
The plugins/ml2/sriov_agent.ini
file contains configuration for the
SR-IOV layer-2 agent.
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[agent]
#
# From neutron.ml2.sriov.agent
#
# Extensions list to use (list value)
#extensions =
[sriov_nic]
#
# From neutron.ml2.sriov.agent
#
# Comma-separated list of <physical_network>:<network_device> tuples mapping
# physical network names to the agent's node-specific physical network device
# interfaces of SR-IOV physical function to be used for VLAN networks. All
# physical networks listed in network_vlan_ranges on the server should have
# mappings to appropriate interfaces on each agent. (list value)
#physical_device_mappings =
# Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping
# network_device to the agent's node-specific list of virtual functions that
# should not be used for virtual networking. vfs_to_exclude is a semicolon-
# separated list of virtual functions to exclude from network_device. The
# network_device in the mapping should appear in the physical_device_mappings
# list. (list value)
#exclude_devices =
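As an illustration of the two mapping options above, the following sketch maps a physical network to an SR-IOV physical function and reserves two of its virtual functions. The network name, interface name, and PCI addresses are examples only; the physical network name must match an entry in network_vlan_ranges on the server.

```ini
[sriov_nic]
# Map the example physical network 'physnet2' to the SR-IOV
# physical function eth3 on this node.
physical_device_mappings = physnet2:eth3
# Exclude two example virtual functions of eth3 (identified by
# their PCI addresses, semicolon-separated) from virtual networking.
exclude_devices = eth3:0000:07:00.2;0000:07:00.3
```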
The plugins/ml2/openvswitch_agent.ini file contains configuration for the Open vSwitch (OVS) layer-2 agent.
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[agent]
#
# From neutron.ml2.ovs.agent
#
# The number of seconds the agent will wait between polling for local device
# changes. (integer value)
#polling_interval = 2
# Minimize polling by monitoring ovsdb for interface changes. (boolean value)
#minimize_polling = true
# The number of seconds to wait before respawning the ovsdb monitor after
# losing communication with it. (integer value)
#ovsdb_monitor_respawn_interval = 30
# Network types supported by the agent (gre and/or vxlan). (list value)
#tunnel_types =
# The UDP port to use for VXLAN tunnels. (port value)
# Minimum value: 0
# Maximum value: 65535
#vxlan_udp_port = 4789
# MTU size of veth interfaces (integer value)
#veth_mtu = 9000
# Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve
# tunnel scalability. (boolean value)
#l2_population = false
# Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2
# l2population driver. Allows the switch (when supporting an overlay) to
# respond to an ARP request locally without performing a costly ARP broadcast
# into the overlay. (boolean value)
#arp_responder = false
# DEPRECATED: Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate. Note: This
# prevents the VMs attached to this agent from spoofing, it doesn't protect
# them from other devices which have the capability to spoof (e.g. bare metal
# or VMs attached to agents without this flag set to True). Spoofing rules will
# not be added to any ports that have port security disabled. For LinuxBridge,
# this requires ebtables. For OVS, it requires a version that supports matching
# ARP headers. This option will be removed in Ocata so the only way to disable
# protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true
# Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying
# GRE/VXLAN tunnel. (boolean value)
#dont_fragment = true
# Make the l2 agent run in DVR mode. (boolean value)
#enable_distributed_routing = false
# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
# value is set to 0, rpc timeout won't be changed (integer value)
#quitting_rpc_timeout = 10
# Reset flow table on start. Setting this to True will cause brief traffic
# interruption. (boolean value)
#drop_flows_on_start = false
# Set or un-set the tunnel header checksum on outgoing IP packet carrying
# GRE/VXLAN tunnel. (boolean value)
#tunnel_csum = false
# DEPRECATED: Selects the Agent Type reported (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#agent_type = Open vSwitch agent
# Extensions list to use (list value)
#extensions =
[ovs]
#
# From neutron.ml2.ovs.agent
#
# Integration bridge to use. Do not change this parameter unless you have a
# good reason to. This is the name of the OVS integration bridge. There is one
# per hypervisor. The integration bridge acts as a virtual 'patch bay'. All VM
# VIFs are attached to this bridge and then 'patched' according to their
# network connectivity. (string value)
#integration_bridge = br-int
# Tunnel bridge to use. (string value)
#tunnel_bridge = br-tun
# Peer patch port in integration bridge for tunnel bridge. (string value)
#int_peer_patch_port = patch-tun
# Peer patch port in tunnel bridge for integration bridge. (string value)
#tun_peer_patch_port = patch-int
# IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or
# IPv6 address that resides on one of the host network interfaces. The IP
# version of this value must match the value of the 'overlay_ip_version' option
# in the ML2 plug-in configuration file on the neutron server node(s). (IP
# address value)
#local_ip = <None>
# Comma-separated list of <physical_network>:<bridge> tuples mapping physical
# network names to the agent's node-specific Open vSwitch bridge names to be
# used for flat and VLAN networks. The length of bridge names should be no more
# than 11. Each bridge must exist, and should have a physical network interface
# configured as a port. All physical networks configured on the server should
# have mappings to appropriate bridges on each agent. Note: If you remove a
# bridge from this mapping, make sure to disconnect it from the integration
# bridge as it won't be managed by the agent anymore. (list value)
#bridge_mappings =
# Use veths instead of patch ports to interconnect the integration bridge to
# physical networks. Support kernel without Open vSwitch patch port support so
# long as it is set to True. (boolean value)
#use_veth_interconnection = false
# OpenFlow interface to use. (string value)
# Allowed values: ovs-ofctl, native
#of_interface = native
# OVS datapath to use. 'system' is the default value and corresponds to the
# kernel datapath. To enable the userspace datapath set this value to 'netdev'.
# (string value)
# Allowed values: system, netdev
#datapath_type = system
# OVS vhost-user socket directory. (string value)
#vhostuser_socket_dir = /var/run/openvswitch
# Address to listen on for OpenFlow connections. Used only for 'native' driver.
# (IP address value)
#of_listen_address = 127.0.0.1
# Port to listen on for OpenFlow connections. Used only for 'native' driver.
# (port value)
# Minimum value: 0
# Maximum value: 65535
#of_listen_port = 6633
# Timeout in seconds to wait for the local switch connecting the controller.
# Used only for 'native' driver. (integer value)
#of_connect_timeout = 30
# Timeout in seconds to wait for a single OpenFlow request. Used only for
# 'native' driver. (integer value)
#of_request_timeout = 10
# The interface for interacting with the OVSDB (string value)
# Allowed values: native, vsctl
#ovsdb_interface = native
# The connection string for the native OVSDB backend. Requires the native
# ovsdb_interface to be enabled. (string value)
#ovsdb_connection = tcp:127.0.0.1:6640
[securitygroup]
#
# From neutron.ml2.ovs.agent
#
# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>
# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true
# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true
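A minimal openvswitch_agent.ini sketch for a compute node using VXLAN tunnels and the iptables-based hybrid firewall might combine the sections above as follows. The tunnel endpoint address, provider network name, and bridge name are placeholders to adapt to the deployment.

```ini
[agent]
# Build VXLAN tunnels to other agents and learn remote MACs/IPs
# through the ML2 l2population mechanism driver.
tunnel_types = vxlan
l2_population = true

[ovs]
# Hypothetical overlay endpoint address on this host.
local_ip = 10.0.1.21
# Example mapping of the 'provider' physical network to a
# node-local bridge used for flat/VLAN networks.
bridge_mappings = provider:br-provider

[securitygroup]
firewall_driver = iptables_hybrid
enable_security_group = true
```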
The dhcp_agent.ini file contains configuration for the DHCP agent.
[DEFAULT]
#
# From neutron.base.agent
#
# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int
# Uses veth for an OVS interface or not. Support kernels with limited namespace
# support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. (boolean
# value)
#ovs_use_veth = false
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs
# commands will fail with ALARMCLOCK error. (integer value)
#ovs_vsctl_timeout = 10
#
# From neutron.dhcp.agent
#
# The DHCP agent will resync its state with Neutron to recover from any
# transient notification or RPC errors. The interval is number of seconds
# between attempts. (integer value)
#resync_interval = 5
# The driver used to manage the DHCP server. (string value)
#dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# The DHCP server can assist with providing metadata support on isolated
# networks. Setting this value to True will cause the DHCP server to append
# specific host routes to the DHCP request. The metadata service will only be
# activated when the subnet does not contain any router port. The guest
# instance must be configured to request host routes via DHCP (Option 121).
# This option doesn't have any effect when force_metadata is set to True.
# (boolean value)
#enable_isolated_metadata = false
# In some cases the Neutron router is not present to provide the metadata IP
# but the DHCP server can be used to provide this info. Setting this value will
# force the DHCP server to append specific host routes to the DHCP request. If
# this option is set, then the metadata service will be activated for all the
# networks. (boolean value)
#force_metadata = false
# Allows for serving metadata requests coming from a dedicated metadata access
# network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected
# to a Neutron router from which the VMs send metadata:1 request. In this case
# DHCP Option 121 will not be injected in VMs, as they will be able to reach
# 169.254.169.254 through a router. This option requires
# enable_isolated_metadata = True. (boolean value)
#enable_metadata_network = false
# Number of threads to use during sync process. Should not exceed connection
# pool size configured on server. (integer value)
#num_sync_threads = 4
# Location to store DHCP server config files. (string value)
#dhcp_confs = $state_path/dhcp
# DEPRECATED: Domain to use for building the hostnames. This option is
# deprecated. It has been moved to neutron.conf as dns_domain. It will be
# removed in a future release. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#dhcp_domain = openstacklocal
# Override the default dnsmasq settings with this file. (string value)
#dnsmasq_config_file =
# Comma-separated list of the DNS servers which will be used as forwarders.
# (list value)
#dnsmasq_dns_servers =
# Base log dir for dnsmasq logging. The log contains DHCP and DNS log
# information and is useful for debugging issues with either DHCP or DNS. If
# this section is null, disable dnsmasq log. (string value)
#dnsmasq_base_log_dir = <None>
# Enables the dnsmasq service to provide name resolution for instances via DNS
# resolvers on the host running the DHCP agent. Effectively removes the '--no-
# resolv' option from the dnsmasq process arguments. Adding custom DNS
# resolvers to the 'dnsmasq_dns_servers' option disables this feature. (boolean
# value)
#dnsmasq_local_resolv = false
# Limit number of leases to prevent a denial-of-service. (integer value)
#dnsmasq_lease_max = 16777216
# Use broadcast in DHCP replies. (boolean value)
#dhcp_broadcast_reply = false
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[AGENT]
#
# From neutron.base.agent
#
# Seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
#report_interval = 30
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false
# Availability zone of this node (string value)
#availability_zone = nova
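A minimal dhcp_agent.ini sketch, assuming an Open vSwitch based deployment, might set only the interface driver, the dnsmasq backend, and isolated-network metadata support:

```ini
[DEFAULT]
# Plug DHCP ports into Open vSwitch.
interface_driver = openvswitch
# Use dnsmasq as the DHCP server backend (the default driver).
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Serve metadata on networks that have no router port, via
# DHCP host routes (option 121).
enable_isolated_metadata = true
```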
The l3_agent.ini file contains configuration for the Layer-3 (routing) agent.
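Before the full option listing, a minimal l3_agent.ini sketch for a centralized network node (no DVR) could be as small as the following; the interface driver value assumes an Open vSwitch deployment.

```ini
[DEFAULT]
# Plug router ports into Open vSwitch.
interface_driver = openvswitch
# Centralized routing; use 'dvr' on compute hosts and 'dvr_snat'
# on network nodes when adopting distributed virtual routing.
agent_mode = legacy
```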
[DEFAULT]
#
# From neutron.base.agent
#
# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int
# Uses veth for an OVS interface or not. Support kernels with limited namespace
# support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. (boolean
# value)
#ovs_use_veth = false
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs
# commands will fail with ALARMCLOCK error. (integer value)
#ovs_vsctl_timeout = 10
#
# From neutron.l3.agent
#
# The working mode for the agent. Allowed modes are: 'legacy' - this preserves
# the existing behavior where the L3 agent is deployed on a centralized
# networking node to provide L3 services like DNAT, and SNAT. Use this mode if
# you do not want to adopt DVR. 'dvr' - this mode enables DVR functionality and
# must be used for an L3 agent that runs on a compute host. 'dvr_snat' - this
# enables centralized SNAT support in conjunction with DVR. This mode must be
# used for an L3 agent running on a centralized node (or in single-host
# deployments, e.g. devstack) (string value)
# Allowed values: dvr, dvr_snat, legacy
#agent_mode = legacy
# TCP Port used by Neutron metadata namespace proxy. (port value)
# Minimum value: 0
# Maximum value: 65535
#metadata_port = 9697
# Send this many gratuitous ARPs for HA setup, if less than or equal to 0, the
# feature is disabled (integer value)
#send_arp_for_ha = 3
# Indicates that this L3 agent should also handle routers that do not have an
# external network gateway configured. This option should be True only for a
# single agent in a Neutron deployment, and may be False for all agents if all
# routers must have an external network gateway. (boolean value)
#handle_internal_only_routers = true
# When external_network_bridge is set, each L3 agent can be associated with no
# more than one external network. This value should be set to the UUID of that
# external network. To allow L3 agent support multiple external networks, both
# the external_network_bridge and gateway_external_network_id must be left
# empty. (string value)
#gateway_external_network_id =
# With IPv6, the network used for the external gateway does not need to have an
# associated subnet, since the automatically assigned link-local address (LLA)
# can be used. However, an IPv6 gateway address is needed for use as the next-
# hop for the default route. If no IPv6 gateway address is configured here,
# (and only then) the neutron router will be configured to get its default
# route from router advertisements (RAs) from the upstream router; in which
# case the upstream router must also be configured to send these RAs. The
# ipv6_gateway, when configured, should be the LLA of the interface on the
# upstream router. If a next-hop using a global unique address (GUA) is
# desired, it needs to be done via a subnet allocated to the network and not
# through this parameter. (string value)
#ipv6_gateway =
# Driver used for ipv6 prefix delegation. This needs to be an entry point
# defined in the neutron.agent.linux.pd_drivers namespace. See setup.cfg for
# entry points included with the neutron source. (string value)
#prefix_delegation_driver = dibbler
# Allow running metadata proxy. (boolean value)
#enable_metadata_proxy = true
# Iptables mangle mark used to mark metadata valid requests. This mark will be
# masked with 0xffff so that only the lower 16 bits will be used. (string
# value)
#metadata_access_mark = 0x1
# Iptables mangle mark used to mark ingress from external network. This mark
# will be masked with 0xffff so that only the lower 16 bits will be used.
# (string value)
#external_ingress_mark = 0x2
# DEPRECATED: Name of bridge used for external network traffic. When this
# parameter is set, the L3 agent will plug an interface directly into an
# external bridge which will not allow any wiring by the L2 agent. Using this
# will result in incorrect port statuses. This option is deprecated and will be
# removed in Ocata. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#external_network_bridge =
# Seconds between running periodic tasks. (integer value)
#periodic_interval = 40
# Number of separate API worker processes for service. If not specified, the
# default is equal to the number of CPUs available for best performance.
# (integer value)
#api_workers = <None>
# Number of RPC worker processes for service. (integer value)
#rpc_workers = 1
# Number of RPC worker processes dedicated to state reports queue. (integer
# value)
#rpc_state_report_workers = 1
# Range of seconds to randomly delay when starting the periodic task scheduler
# to reduce stampeding. (Disable by setting to 0) (integer value)
#periodic_fuzzy_delay = 5
# Location to store keepalived/conntrackd config files (string value)
#ha_confs_path = $state_path/ha_confs
# VRRP authentication type (string value)
# Allowed values: AH, PASS
#ha_vrrp_auth_type = PASS
# VRRP authentication password (string value)
#ha_vrrp_auth_password = <None>
# The advertisement interval in seconds (integer value)
#ha_vrrp_advert_int = 2
# Service to handle DHCPv6 Prefix delegation. (string value)
#pd_dhcp_driver = dibbler
# Location to store IPv6 RA config files (string value)
#ra_confs = $state_path/ra
# MinRtrAdvInterval setting for radvd.conf (integer value)
#min_rtr_adv_interval = 30
# MaxRtrAdvInterval setting for radvd.conf (integer value)
#max_rtr_adv_interval = 100
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[AGENT]
#
# From neutron.base.agent
#
# Seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
#report_interval = 30
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false
# Availability zone of this node (string value)
#availability_zone = nova
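As an illustration, the HA VRRP options listed near the top of this file could be combined as follows; the password shown is a placeholder, not a recommended value:

[DEFAULT]
# Authenticate VRRP advertisements between HA routers with a
# simple password (PASS) and advertise every 2 seconds.
ha_vrrp_auth_type = PASS
ha_vrrp_auth_password = VRRP_SECRET
ha_vrrp_advert_int = 2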
The macvtap_agent.ini file contains configuration for the macvtap agent.
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[agent]
#
# From neutron.ml2.macvtap.agent
#
# The number of seconds the agent will wait between polling for local device
# changes. (integer value)
#polling_interval = 2
# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If
# value is set to 0, rpc timeout won't be changed (integer value)
#quitting_rpc_timeout = 10
# DEPRECATED: Enable suppression of ARP responses that don't match an IP
# address that belongs to the port from which they originate. Note: This
# prevents the VMs attached to this agent from spoofing, it doesn't protect
# them from other devices which have the capability to spoof (e.g. bare metal
# or VMs attached to agents without this flag set to True). Spoofing rules will
# not be added to any ports that have port security disabled. For LinuxBridge,
# this requires ebtables. For OVS, it requires a version that supports matching
# ARP headers. This option will be removed in Ocata so the only way to disable
# protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true
[macvtap]
#
# From neutron.ml2.macvtap.agent
#
# Comma-separated list of <physical_network>:<physical_interface> tuples
# mapping physical network names to the agent's node-specific physical network
# interfaces to be used for flat and VLAN networks. All physical networks
# listed in network_vlan_ranges on the server should have mappings to
# appropriate interfaces on each agent. (list value)
#physical_interface_mappings =
[securitygroup]
#
# From neutron.ml2.macvtap.agent
#
# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>
# Controls whether the neutron security group API is enabled in the server. It
# should be false when using no security groups or using the nova security
# group API. (boolean value)
#enable_security_group = true
# Use ipset to speed-up the iptables based security groups. Enabling ipset
# support requires that ipset is installed on L2 agent node. (boolean value)
#enable_ipset = true
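For example, a node whose eth1 interface carries the physnet1 network and whose eth2 interface carries physnet2 could declare the mappings described above as follows; the network and interface names are illustrative and must match your server-side network_vlan_ranges and your node's actual interfaces:

[macvtap]
physical_interface_mappings = physnet1:eth1,physnet2:eth2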
The metadata_agent.ini file contains configuration for the metadata agent.
[DEFAULT]
#
# From neutron.metadata.agent
#
# Location for Metadata Proxy UNIX domain socket. (string value)
#metadata_proxy_socket = $state_path/metadata_proxy
# User (uid or name) running metadata proxy after its initialization (if empty:
# agent effective user). (string value)
#metadata_proxy_user =
# Group (gid or name) running metadata proxy after its initialization (if
# empty: agent effective group). (string value)
#metadata_proxy_group =
# Certificate Authority public key (CA cert) file for ssl (string value)
#auth_ca_cert = <None>
# IP address used by Nova metadata server. (string value)
#nova_metadata_ip = 127.0.0.1
# TCP Port used by Nova metadata server. (port value)
# Minimum value: 0
# Maximum value: 65535
#nova_metadata_port = 8775
# When proxying metadata requests, Neutron signs the Instance-ID header with a
# shared secret to prevent spoofing. You may select any string for a secret,
# but it must match here and in the configuration used by the Nova Metadata
# Server. NOTE: Nova uses the same config key, but in [neutron] section.
# (string value)
#metadata_proxy_shared_secret =
# Protocol to access nova metadata, http or https (string value)
# Allowed values: http, https
#nova_metadata_protocol = http
# Allow to perform insecure SSL (https) requests to nova metadata (boolean
# value)
#nova_metadata_insecure = false
# Client certificate for nova metadata api server. (string value)
#nova_client_cert =
# Private key of client certificate. (string value)
#nova_client_priv_key =
# Metadata Proxy UNIX domain socket mode, 4 values allowed: 'deduce': deduce
# mode from metadata_proxy_user/group values, 'user': set metadata proxy socket
# mode to 0o644, to use when metadata_proxy_user is agent effective user or
# root, 'group': set metadata proxy socket mode to 0o664, to use when
# metadata_proxy_group is agent effective group or root, 'all': set metadata
# proxy socket mode to 0o666, to use otherwise. (string value)
# Allowed values: deduce, user, group, all
#metadata_proxy_socket_mode = deduce
# Number of separate worker processes for metadata server (defaults to half of
# the number of CPUs) (integer value)
#metadata_workers = 1
# Number of backlog requests to configure the metadata server socket with
# (integer value)
#metadata_backlog = 4096
# DEPRECATED: URL to connect to the cache back end. This option is deprecated
# in the Newton release and will be removed. Please add a [cache] group for
# oslo.cache in your neutron.conf and add "enable" and "backend" options in
# this section. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#cache_url =
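As an illustration of the shared secret described above, the same value must be set both here and in nova's configuration; METADATA_SECRET is a placeholder, and the nova option names shown are the [neutron]-section equivalents used by the Nova metadata service:

# In metadata_agent.ini:
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

# In nova.conf:
[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET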
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[AGENT]
#
# From neutron.metadata.agent
#
# Seconds between nodes reporting state to server; should be less than
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
#report_interval = 30
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false
[cache]
#
# From oslo.cache
#
# Prefix for building the configuration dictionary for the cache region. This
# should not need to be changed unless there is another dogpile.cache region
# with the same configuration name. (string value)
#config_prefix = cache.oslo
# Default TTL, in seconds, for any cached item in the dogpile.cache region.
# This applies to any cached method that doesn't have an explicit cache
# expiration time defined for it. (integer value)
#expiration_time = 600
# Dogpile.cache backend module. It is recommended that Memcache or Redis
# (dogpile.cache.redis) be used in production deployments. For eventlet-based
# or highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool)
# is recommended. For low thread servers, dogpile.cache.memcached is
# recommended. Test environments with a single instance of the server can use
# the dogpile.cache.memory backend. (string value)
#backend = dogpile.cache.null
# Arguments supplied to the backend module. Specify this option once per
# argument to be passed to the dogpile.cache backend. Example format:
# "<argname>:<value>". (multi valued)
#backend_argument =
# Proxy classes to import that will affect the way the dogpile.cache backend
# functions. See the dogpile.cache documentation on changing-backend-behavior.
# (list value)
#proxies =
# Global toggle for caching. (boolean value)
#enabled = false
# Extra debugging from the cache backend (cache keys, get/set/delete/etc
# calls). This is only really useful if you need to see the specific cache-
# backend get/set/delete calls with the keys/values. Typically this should be
# left set to false. (boolean value)
#debug_cache_backend = false
# Memcache servers in the format of "host:port". (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (list value)
#memcache_servers = localhost:11211
# Number of seconds memcached server is considered dead before it is tried
# again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
# (integer value)
#memcache_dead_retry = 300
# Timeout in seconds for every call to a server. (dogpile.cache.memcache and
# oslo_cache.memcache_pool backends only). (integer value)
#memcache_socket_timeout = 3
# Max total number of open connections to every memcached server.
# (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_maxsize = 10
# Number of seconds a connection to memcached is held unused in the pool before
# it is closed. (oslo_cache.memcache_pool backend only). (integer value)
#memcache_pool_unused_timeout = 60
# Number of seconds that an operation will wait to get a memcache client
# connection. (integer value)
#memcache_pool_connection_get_timeout = 10
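For example, to move off the deprecated cache_url option in [DEFAULT], caching could instead be enabled through the [cache] section as the deprecation notice suggests; the server address is illustrative:

[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = controller:11211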
The metering_agent.ini file contains configuration for the metering agent.
[DEFAULT]
#
# From neutron.metering.agent
#
# Metering driver (string value)
#driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver
# Interval between two metering measures (integer value)
#measure_interval = 30
# Interval between two metering reports (integer value)
#report_interval = 300
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
The Networking advanced services, such as Load-Balancer-as-a-Service (LBaaS), Firewall-as-a-Service (FWaaS), and VPN-as-a-Service (VPNaaS), support automatic generation of their configuration files. The sample configuration files are shown below; you can generate up-to-date versions by running the generate_config_file_samples.sh script provided in the root directory of each of the LBaaS, FWaaS, and VPNaaS projects.
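For example, assuming a checkout of the neutron-lbaas source tree, the sample file could be regenerated like this; the exact location of the script within the tree may vary between releases:

$ cd neutron-lbaas
$ ./generate_config_file_samples.sh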
[DEFAULT]
#
# From neutron.lbaas
#
# Driver to use for scheduling to a default loadbalancer agent (string value)
#loadbalancer_scheduler_driver = neutron_lbaas.agent_scheduler.ChanceScheduler
[certificates]
#
# From neutron.lbaas
#
# Certificate Manager plugin. Defaults to barbican. (string value)
#cert_manager_type = barbican
# Name of the Barbican authentication method to use (string value)
#barbican_auth = barbican_acl_auth
# Absolute path to the certificate storage directory. Defaults to
# env[OS_LBAAS_TLS_STORAGE]. (string value)
#storage_path = /var/lib/neutron-lbaas/certificates/
[quotas]
#
# From neutron.lbaas
#
# Number of LoadBalancers allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_loadbalancer = 10
# Number of Loadbalancer Listeners allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_listener = -1
# Number of pools allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_pool = 10
# Number of pool members allowed per tenant. A negative value means unlimited.
# (integer value)
#quota_member = -1
# Number of health monitors allowed per tenant. A negative value means
# unlimited. (integer value)
#quota_healthmonitor = -1
[service_auth]
#
# From neutron.lbaas
#
# Authentication endpoint (string value)
#auth_url = http://127.0.0.1:5000/v2.0
# The service admin user name (string value)
#admin_user = admin
# The service admin tenant name (string value)
#admin_tenant_name = admin
# The service admin password (string value)
#admin_password = password
# The admin user domain name (string value)
#admin_user_domain = admin
# The admin project domain name (string value)
#admin_project_domain = admin
# The deployment region (string value)
#region = RegionOne
# The name of the service (string value)
#service_name = lbaas
# The auth version used to authenticate (string value)
#auth_version = 2
# The endpoint_type to be used (string value)
#endpoint_type = public
# Disable server certificate verification (boolean value)
#insecure = false
[service_providers]
#
# From neutron.lbaas
#
# Defines providers for advanced services using the format:
# <service_type>:<name>:<driver>[:default] (multi valued)
#service_provider =
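As an example of the provider format above, the reference HAProxy driver shipped with neutron-lbaas could be registered as the default LBaaS v2 provider as follows; adjust the driver class for other vendors' drivers:

[service_providers]
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default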
[DEFAULT]
#
# From neutron.lbaas.agent
#
# Seconds between periodic task runs (integer value)
#periodic_interval = 10
# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int
# Uses veth for an OVS interface or not. Support kernels with limited namespace
# support (e.g. RHEL 6.5) so long as ovs_use_veth is set to True. (boolean
# value)
#ovs_use_veth = false
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[DEFAULT]
[haproxy]
#
# From neutron.lbaas.service
#
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
# Seconds between periodic task runs (integer value)
#periodic_interval = 10
[octavia]
#
# From neutron.lbaas.service
#
# URL of Octavia controller root (string value)
#base_url = http://127.0.0.1:9876
# Interval in seconds to poll octavia when an entity is created, updated, or
# deleted. (integer value)
#request_poll_interval = 3
# Time to stop polling octavia when a status of an entity does not change.
# (integer value)
#request_poll_timeout = 100
# True if Octavia will be responsible for allocating the VIP. False if neutron-
# lbaas will allocate it and pass to Octavia. (boolean value)
#allocates_vip = false
[radwarev2]
#
# From neutron.lbaas.service
#
# IP address of vDirect server. (string value)
#vdirect_address = <None>
# IP address of secondary vDirect server. (string value)
#ha_secondary_address = <None>
# vDirect user name. (string value)
#vdirect_user = vDirect
# vDirect user password. (string value)
#vdirect_password = radware
# Service ADC type. Default: VA. (string value)
#service_adc_type = VA
# Service ADC version. (string value)
#service_adc_version =
# Enables or disables the Service HA pair. Default: False. (boolean value)
#service_ha_pair = false
# Service throughput. Default: 1000. (integer value)
#service_throughput = 1000
# Service SSL throughput. Default: 100. (integer value)
#service_ssl_throughput = 100
# Service compression throughput. Default: 100. (integer value)
#service_compression_throughput = 100
# Size of service cache. Default: 20. (integer value)
#service_cache = 20
# Resource pool IDs. (list value)
#service_resource_pool_ids =
# A required VLAN for the interswitch link to use. (integer value)
#service_isl_vlan = -1
# Enable or disable Alteon interswitch link for stateful session failover.
# Default: False. (boolean value)
#service_session_mirroring_enabled = false
# Name of the workflow template. Default: os_lb_v2. (string value)
#workflow_template_name = os_lb_v2
# Name of child workflow templates used. Default: manage_l3 (list value)
#child_workflow_template_names = manage_l3
# Parameter for l2_l3 workflow constructor. (dict value)
#workflow_params = allocate_ha_ips:True,allocate_ha_vrrp:True,data_ip_address:192.168.200.99,data_ip_mask:255.255.255.0,data_port:1,gateway:192.168.200.1,ha_ip_pool_name:default,ha_network_name:HA-Network,ha_port:2,twoleg_enabled:_REPLACE_
# Name of the workflow action. Default: apply. (string value)
#workflow_action_name = apply
# Name of the workflow action for statistics. Default: stats. (string value)
#stats_action_name = stats
[radwarev2_debug]
#
# From neutron.lbaas.service
#
# Provision ADC service? (boolean value)
#provision_service = true
# Configure ADC with L3 parameters? (boolean value)
#configure_l3 = true
# Configure ADC with L4 parameters? (boolean value)
#configure_l4 = true
[DEFAULT]
[service_providers]
#
# From neutron.vpnaas
#
# Defines providers for advanced services using the format:
# <service_type>:<name>:<driver>[:default] (multi valued)
#service_provider =
[DEFAULT]
[ipsec]
#
# From neutron.vpnaas.agent
#
# Location to store ipsec server config files (string value)
#config_base_dir = $state_path/ipsec
# Interval for checking ipsec status (integer value)
#ipsec_status_check_interval = 60
# Enable detailed logging for the ipsec pluto process. If the flag is set to
# True, the detailed logging will be written into config_base_dir/<pid>/log.
# Note: This setting applies to OpenSwan and LibreSwan only. StrongSwan logs to syslog.
# (boolean value)
#enable_detailed_logging = false
[pluto]
#
# From neutron.vpnaas.agent
#
# Initial interval in seconds for checking if pluto daemon is shutdown (integer
# value)
# Deprecated group/name - [libreswan]/shutdown_check_timeout
#shutdown_check_timeout = 1
# The maximum number of retries for checking for pluto daemon shutdown (integer
# value)
# Deprecated group/name - [libreswan]/shutdown_check_retries
#shutdown_check_retries = 5
# A factor to increase the retry interval for each retry (floating point value)
# Deprecated group/name - [libreswan]/shutdown_check_back_off
#shutdown_check_back_off = 1.5
# Enable this flag to avoid unnecessary restarts (boolean value)
# Deprecated group/name - [libreswan]/restart_check_config
#restart_check_config = false
[strongswan]
#
# From neutron.vpnaas.agent
#
# Template file for ipsec configuration. (string value)
#ipsec_config_template = /home/openstack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.conf.template
# Template file for strongswan configuration. (string value)
#strongswan_config_template = /home/openstack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/strongswan.conf.template
# Template file for ipsec secret configuration. (string value)
#ipsec_secret_template = /home/openstack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.secret.template
# The area where default StrongSwan configuration files are located. (string
# value)
#default_config_area = /etc/strongswan.d
[vpnagent]
#
# From neutron.vpnaas.agent
#
# The vpn device drivers Neutron will use (multi valued)
#vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.ipsec.OpenSwanDriver, neutron_vpnaas.services.vpn.device_drivers.cisco_ipsec.CiscoCsrIPsecDriver, neutron_vpnaas.services.vpn.device_drivers.vyatta_ipsec.VyattaIPSecDriver, neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.fedora_strongswan_ipsec.FedoraStrongSwanDriver, neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver
Option = default value | (Type) Help string |
---|---|
[DEFAULT] cache_url = |
(StrOpt) URL to connect to the cache back end. This option is deprecated in the Newton release and will be removed. Please add a [cache] group for oslo.cache in your neutron.conf and add “enable” and “backend” options in this section. |
[AGENT] debug_iptables_rules = False |
(BoolOpt) Duplicate every iptables difference calculation to ensure the format being generated matches the format of iptables-save. This option should not be turned on for production systems because it imposes a performance penalty. |
[FDB] shared_physical_device_mappings = |
(ListOpt) Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent’s node-specific shared physical network device between SR-IOV and OVS or SR-IOV and linux bridge |
[cache] backend = dogpile.cache.null |
(StrOpt) Dogpile.cache backend module. It is recommended that Memcache or Redis (dogpile.cache.redis) be used in production deployments. For eventlet-based or highly threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For low thread servers, dogpile.cache.memcached is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. |
[cache] backend_argument = [] |
(MultiStrOpt) Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: “<argname>:<value>”. |
[cache] config_prefix = cache.oslo |
(StrOpt) Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. |
[cache] debug_cache_backend = False |
(BoolOpt) Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. |
[cache] enabled = False |
(BoolOpt) Global toggle for caching. |
[cache] expiration_time = 600 |
(IntOpt) Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn’t have an explicit cache expiration time defined for it. |
[cache] memcache_dead_retry = 300 |
(IntOpt) Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). |
[cache] memcache_pool_connection_get_timeout = 10 |
(IntOpt) Number of seconds that an operation will wait to get a memcache client connection. |
[cache] memcache_pool_maxsize = 10 |
(IntOpt) Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). |
[cache] memcache_pool_unused_timeout = 60 |
(IntOpt) Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). |
[cache] memcache_servers = localhost:11211 |
(ListOpt) Memcache servers in the format of “host:port”. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). |
[cache] memcache_socket_timeout = 3 |
(IntOpt) Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). |
[cache] proxies = |
(ListOpt) Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. |
[ml2] overlay_ip_version = 4 |
(IntOpt) IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6. |
[profiler] connection_string = messaging:// |
(StrOpt) Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: * messaging://: use oslo_messaging driver for sending notifications. |
[profiler] enabled = False |
(BoolOpt) Enables the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: * True: Enables the feature * False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. |
[profiler] hmac_keys = SECRET_KEY |
(StrOpt) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
[profiler] trace_sqlalchemy = False |
(BoolOpt) Enables SQL requests profiling in services. Default value is False (SQL requests won’t be traced). Possible values: * True: Enables SQL requests profiling. Each SQL query will be part of the trace and can the be analyzed by how much time was spent for that. * False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. |
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] allow_pagination |
False |
True |
[DEFAULT] allow_sorting |
False |
True |
[DEFAULT] dnsmasq_dns_servers |
None |
|
[DEFAULT] external_network_bridge |
br-ex |
|
[DEFAULT] ipam_driver |
None |
internal |
[OVS] of_interface |
ovs-ofctl |
native |
[OVS] ovsdb_interface |
vsctl |
native |
[ml2] path_mtu |
1500 |
0 |
[ml2_sriov] supported_pci_vendor_devs |
15b3:1004, 8086:10ca |
None |
[ml2_type_geneve] max_header_size |
50 |
30 |
Deprecated option | New Option |
---|---|
[DEFAULT] min_l3_agents_per_router |
None |
[DEFAULT] use_syslog |
None |
[ml2_sriov] supported_pci_vendor_devs |
None |
This chapter explains the Networking service configuration options. For installation prerequisites, steps, and use cases, see the Installation Tutorials and Guides for your distribution (docs.openstack.org) and the OpenStack Administrator Guide.
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
Object Storage (swift) is a robust, highly scalable and fault tolerant storage platform for unstructured data such as objects. Objects are stored bits, accessed through a RESTful, HTTP-based interface. You cannot access data at the block or file level. Object Storage is commonly used to archive and back up data, with use cases in virtual machine image, photo, video, and music storage.
Object Storage provides a high degree of availability, throughput, and performance with its scale out architecture. Each object is replicated across multiple servers, residing within the same data center or across data centers, which mitigates the risk of network and hardware failure. In the event of hardware failure, Object Storage will automatically copy objects to a new location to ensure that your chosen number of copies are always available.
Object Storage also employs erasure coding. Erasure coding is a set of algorithms that allows the reconstruction of missing data from a set of original data. In theory, erasure coding uses less storage capacity with similar durability characteristics as replicas. From an application perspective, erasure coding support is transparent. Object Storage implements erasure coding as a Storage Policy.
Object Storage is an eventually consistent distributed storage platform; it sacrifices consistency for maximum availability and partition tolerance. Object Storage enables you to create a reliable platform by using commodity hardware and inexpensive storage.
For more information, review the key concepts in the developer documentation at docs.openstack.org/developer/swift/.
Object Storage service uses multiple configuration files for multiple services
and background daemons, and paste.deploy
to manage server configurations.
For more information about paste.deploy
, see: http://pythonpaste.org/deploy/.
Default configuration options are set in the [DEFAULT]
section, and any
options specified there can be overridden in any of the other sections by using
the set option_name = value
syntax.
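For example, a logging option defined in [DEFAULT] can be overridden for a single app section. The following is an illustrative sketch (the option and section names mirror the object server example in this chapter; the DEBUG override itself is hypothetical):

```ini
[DEFAULT]
# Cluster-wide default log level
log_level = INFO

[app:object-server]
use = egg:swift#object
# Overrides the [DEFAULT] value for this app only
set log_level = DEBUG
```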
Configuration for servers and daemons can be expressed together in the same file for each type of server, or separately. If a required section for the service trying to start is missing, there will be an error. Sections not used by the service are ignored.
Consider the example of an Object Storage node. By convention configuration for
the object-server
, object-updater
, object-replicator
, and
object-auditor
exist in a single file /etc/swift/object-server.conf
:
[DEFAULT]
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
reclaim_age = 259200
[object-updater]
[object-auditor]
Note
Default constraints can be overridden in swift.conf
. For example,
you can change the maximum object size and other variables.
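As a sketch of such an override, a [swift-constraints] section in swift.conf might raise the maximum object size (the value shown is illustrative; check the swift.conf sample for the defaults that apply to your release):

```ini
[swift-constraints]
# Maximum allowed object size, in bytes (example value only)
max_file_size = 5368709122
```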
Object Storage services expect a configuration path as the first argument:
$ swift-object-auditor
Usage: swift-object-auditor CONFIG [options]
Error: missing config path argument
If you omit the object-auditor section, this file cannot be used as the
configuration path when starting the swift-object-auditor
daemon:
$ swift-object-auditor /etc/swift/object-server.conf
Unable to find object-auditor config section in /etc/swift/object-server.conf
If the configuration path is a directory instead of a file, all of the files in
the directory with the file extension .conf
will be combined to generate
the configuration object which is delivered to the Object Storage service. This
is referred to generally as directory-based configuration.
Directory-based configuration leverages ConfigParser
‘s native multi-file
support. Files ending in .conf
in the given directory are parsed in
lexicographical order. File names starting with .
are ignored. A mixture of
file and directory configuration paths is not supported. If the configuration
path is a file, only that file will be parsed.
The Object Storage service management tool swift-init
has adopted the
convention of looking for /etc/swift/{type}-server.conf.d/
if the file
/etc/swift/{type}-server.conf
does not exist.
When using directory-based configuration, if the same option under the same section appears more than once in different files, the last value parsed overrides any previous occurrence. You can ensure proper override precedence by prefixing the files in the configuration directory with numerical values, as in the following example file layout:
/etc/swift/
default.base
object-server.conf.d/
000_default.conf -> ../default.base
001_default-override.conf
010_server.conf
020_replicator.conf
030_updater.conf
040_auditor.conf
You can inspect the resulting combined configuration object using the
swift-config
command-line tool.
All the services of an Object Store deployment share a common configuration in
the [swift-hash]
section of the /etc/swift/swift.conf
file. The
swift_hash_path_suffix
and swift_hash_path_prefix
values must be
identical on all the nodes.
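A minimal sketch of that section follows; the secret values are placeholders, and you should generate your own random strings and keep them private:

```ini
[swift-hash]
# Both values must be identical on every node in the cluster and must
# not be changed after initial deployment
swift_hash_path_prefix = RANDOM_PREFIX_SECRET
swift_hash_path_suffix = RANDOM_SUFFIX_SECRET
```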
Configuration option = Default value | Description |
---|---|
swift_hash_path_prefix = changeme |
A prefix used by hash_path to offer a bit more security when generating hashes for paths. It simply prepends this value to all paths; if someone knows this prefix, it is easier for them to guess the hash a path will end up with. New installations are advised to set this parameter to a random secret, which should not be disclosed outside the organization. The same secret needs to be used by all swift servers of the same cluster. Existing installations should set this parameter to an empty string. |
swift_hash_path_suffix = changeme |
A suffix used by hash_path to offer a bit more security when generating hashes for paths. It simply appends this value to all paths; if someone knows this suffix, it is easier for them to guess the hash a path will end up with. New installations are advised to set this parameter to a random secret, which should not be disclosed outside the organization. The same secret needs to be used by all swift servers of the same cluster. Existing installations should set this parameter to an empty string. |
Find an example object server configuration at
etc/object-server.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description |
---|---|
backlog = 4096 |
Maximum number of allowed pending TCP connections |
bind_ip = 0.0.0.0 |
IP Address for server to bind to |
bind_port = 6000 |
Port for server to bind to |
bind_timeout = 30 |
Seconds to attempt bind before giving up |
client_timeout = 60 |
Timeout to read one chunk from a client |
conn_timeout = 0.5 |
Connection timeout to external services |
container_update_timeout = 1.0 |
Time to wait while sending a container update on object update |
devices = /srv/node |
Parent directory of where devices are mounted |
disable_fallocate = false |
Disable “fast fail” fallocate checks if the underlying filesystem does not support it. |
disk_chunk_size = 65536 |
Size of chunks to read/write to disk |
eventlet_debug = false |
If true, turn on debug logging for eventlet |
expiring_objects_account_name = expiring_objects |
Account name for the expiring objects |
expiring_objects_container_divisor = 86400 |
Divisor for the expiring objects container |
fallocate_reserve = 0 |
You can set fallocate_reserve to the number of bytes you’d like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they’re out of space early. |
log_address = /dev/log |
Location where syslog sends the logs to |
log_custom_handlers = |
Comma-separated list of functions to call to setup custom log handlers. |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_max_line_length = 0 |
Caps the length of log lines to the value given; no limit if set to 0, the default. |
log_name = swift |
Label used when logging |
log_statsd_default_sample_rate = 1.0 |
Defines the probability of sending a sample for any given event or timing measurement. |
log_statsd_host = localhost |
If not set, the StatsD feature is disabled. |
log_statsd_metric_prefix = |
Value will be prepended to every metric sent to the StatsD server. |
log_statsd_port = 8125 |
Port value for the StatsD server. |
log_statsd_sample_rate_factor = 1.0 |
It is not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune log_statsd_default_sample_rate instead. |
log_udp_host = |
If not set, the UDP receiver for syslog is disabled. |
log_udp_port = 514 |
Port value for UDP receiver, if enabled. |
max_clients = 1024 |
Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, a given worker will not accept another request while processing, allowing other workers a chance to process it. |
mount_check = true |
Whether or not to check if the devices are mounted to prevent accidentally writing to the root device |
network_chunk_size = 65536 |
Size of chunks to read/write over the network |
node_timeout = 3 |
Request timeout to external services |
servers_per_port = 0 |
If each disk in each storage policy ring has unique port numbers for its “ip” value, you can use this setting to have each object-server worker only service requests for the single disk matching the port in the ring. The value of this setting determines how many worker processes run for each port (disk) in the ring. |
swift_dir = /etc/swift |
Swift configuration directory |
user = swift |
User to run as |
workers = auto |
Override the number of pre-forked workers that will accept connections. By increasing the number of workers to a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests. |
Configuration option = Default value | Description |
---|---|
allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object |
Comma-separated list of headers that can be set in metadata of an object |
auto_create_account_prefix = . |
Prefix to use when automatically creating accounts |
keep_cache_private = false |
Allow non-public objects to stay in kernel’s buffer cache |
keep_cache_size = 5242880 |
Largest object size to keep in buffer cache |
max_upload_time = 86400 |
Maximum time allowed to upload an object |
mb_per_sync = 512 |
On PUT requests, sync file every n MB |
replication_concurrency = 4 |
Set to restrict the number of concurrent incoming REPLICATION requests; set to 0 for unlimited |
replication_failure_ratio = 1.0 |
If the value of failures / successes of REPLICATION subrequests exceeds this ratio, the overall REPLICATION request will be aborted |
replication_failure_threshold = 100 |
The number of subrequest failures before the replication_failure_ratio is checked |
replication_lock_timeout = 15 |
Number of seconds to wait for an existing replication device lock before giving up. |
replication_one_per_device = True |
Restricts incoming REPLICATION requests to one per device, replication_concurrency above allowing. This can help control I/O to each device, but you may wish to set this to False to allow multiple REPLICATION requests (up to the above replication_concurrency setting) per device. |
replication_server = false |
Tells the server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, the server will accept any verb in the request. |
set log_address = /dev/log |
Location where syslog sends the logs to |
set log_facility = LOG_LOCAL0 |
Syslog log facility |
set log_level = INFO |
Log level |
set log_name = object-server |
Label to use when logging |
set log_requests = true |
Whether or not to log requests |
slow = 0 |
If > 0, minimum time in seconds for a PUT or DELETE request to complete |
splice = no |
Use splice() for zero-copy object GETs. This requires Linux kernel version 3.0 or greater. When you set “splice = yes” but the kernel does not support it, error messages will appear in the object server logs at startup, but your object servers should continue to function. |
threads_per_disk = 0 |
Size of the per-disk thread pool used for performing disk I/O. The default of 0 means to not use a per-disk thread pool. It is recommended to keep this value small, as large values can result in high read latencies due to large queue depths. A good starting point is 4 threads per disk. |
use = egg:swift#object |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
pipeline = healthcheck recon object-server |
Pipeline to use for processing operations. |
Configuration option = Default value | Description |
---|---|
concurrency = 1 |
Number of replication workers to spawn |
daemonize = on |
Whether or not to run replication as a daemon |
handoff_delete = auto |
By default, handoff partitions are removed when they have been successfully replicated to all the canonical nodes. If set to an integer n, the partition is removed if it is successfully replicated to n nodes. The default setting should not be changed, except for extreme situations. |
handoffs_first = False |
If set to True, partitions that are not supposed to be on the node will be replicated first. The default setting should not be changed, except for extreme situations. |
http_timeout = 60 |
Maximum duration for an HTTP request |
interval = 30 |
Minimum time for a pass to take |
lockup_timeout = 1800 |
Attempts to kill all workers if nothing replicates for lockup_timeout seconds |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = object-replicator |
Label used when logging |
node_timeout = <whatever's in the DEFAULT section or 10> |
Request timeout to external services |
reclaim_age = 604800 |
Time elapsed in seconds before an object can be reclaimed |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
ring_check_interval = 15 |
How often (in seconds) to check the ring |
rsync_bwlimit = 0 |
Bandwidth limit for rsync in kB/s; 0 means unlimited |
rsync_compress = no |
Allows rsync to compress data which is transmitted to the destination node during sync. However, this applies only when the destination node is in a different region than the local one. Note Objects that are already compressed (for example: .tar.gz, .mp3) might slow down the syncing process. |
rsync_error_log_line_length = 0 |
Limits the length of the rsync error log lines. 0 will log the entire line. |
rsync_io_timeout = 30 |
Passed to rsync for a max duration (seconds) of an I/O op |
rsync_module = {replication_ip}::object |
Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples. |
rsync_timeout = 900 |
Max duration (seconds) of a partition rsync |
run_pause = 30 |
Time in seconds to wait between replication passes |
stats_interval = 300 |
Interval in seconds between logging replication statistics |
sync_method = rsync |
Default is rsync; alternative is ssync |
Configuration option = Default value | Description |
---|---|
concurrency = 1 |
Number of replication workers to spawn |
interval = 300 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = object-updater |
Label used when logging |
node_timeout = <whatever's in the DEFAULT section or 10> |
Request timeout to external services |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
slowdown = 0.01 |
Time in seconds to wait between objects |
Configuration option = Default value | Description |
---|---|
bytes_per_second = 10000000 |
Maximum bytes audited per second. Should be tuned according to individual system specs. 0 is unlimited. |
concurrency = 1 |
Number of replication workers to spawn |
disk_chunk_size = 65536 |
Size of chunks to read/write to disk |
files_per_second = 20 |
Maximum files audited per second. Should be tuned according to individual system specs. 0 is unlimited. |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = object-auditor |
Label used when logging |
log_time = 3600 |
Frequency of status logs in seconds. |
object_size_stats = |
Takes a comma-separated list of ints. When set, the object auditor will increment a counter for every object whose size is greater than or equal to the given breakpoints and reports the result after a full scan. |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
zero_byte_files_per_second = 50 |
Maximum zero byte files audited per second. |
Configuration option = Default value | Description |
---|---|
disable_path = |
An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE” |
use = egg:swift#healthcheck |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
recon_lock_path = /var/lock |
Directory where lock files will be stored |
use = egg:swift#recon |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
dump_interval = 5.0 |
The profile data will be dumped to local disk based on the above naming rule at this interval (seconds). |
dump_timestamp = false |
Be careful, this option will enable the profiler to dump data into the file with a time stamp which means that there will be lots of files piled up in the directory. |
flush_at_shutdown = false |
Clears the data when the wsgi server shutdowns. |
log_filename_prefix = /tmp/log/swift/profile/default.profile |
This prefix is used to combine the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like: /var/log/swift/profile/object.profile |
path = /__profile__ |
This is the path of the URL to access the mini web UI. |
profile_module = eventlet.green.profile |
This option enables you to switch profilers which inherit from the Python standard profiler. Currently, the supported value can be ‘cProfile’, ‘eventlet.green.profile’, etc. |
unwind = false |
unwind the iterator of applications |
use = egg:swift#xprofile |
Entry point of paste.deploy in the server |
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6200
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# Use an integer to override the number of pre-forked processes that will
# accept connections. NOTE: if servers_per_port is set, this setting is
# ignored.
# workers = auto
#
# Make object-server run this many worker processes per unique port of "local"
# ring devices across all storage policies. The default value of 0 disables this
# feature.
# servers_per_port = 0
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# Time to wait while attempting to connect to another backend node.
# conn_timeout = 0.5
# Time to wait while sending each chunk of data to another backend node.
# node_timeout = 3
# Time to wait while sending a container update on object update.
# container_update_timeout = 1.0
# Time to wait while receiving each chunk of data from a client or another
# backend node.
# client_timeout = 60
#
# network_chunk_size = 65536
# disk_chunk_size = 65536
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = healthcheck recon object-server
[app:object-server]
use = egg:swift#object
# You can override the default log routing for this app here:
# set log_name = object-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# max_upload_time = 86400
#
# slow is the minimum total time in seconds an object PUT/DELETE request must
# take. If the request is faster, the object server will sleep for this amount
# of time minus the transaction time that has already passed. This is only
# useful for simulating slow devices on storage nodes during testing and
# development.
# slow = 0
#
# Objects smaller than this are not evicted from the buffercache once read
# keep_cache_size = 5242880
#
# If true, objects for authenticated GET requests may be kept in buffer cache
# if small enough
# keep_cache_private = false
#
# on PUTs, sync data every n MB
# mb_per_sync = 512
#
# Comma separated list of headers that can be set in metadata on an object.
# This list is in addition to X-Object-Meta-* headers and cannot include
# Content-Type, etag, Content-Length, or deleted
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object
#
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
#
# Set to restrict the number of concurrent incoming SSYNC requests
# Set to 0 for unlimited
# Note that SSYNC requests are only used by the object reconstructor or the
# object replicator when configured to use ssync.
# replication_concurrency = 4
#
# Restricts incoming SSYNC requests to one per device, subject to the
# replication_concurrency setting above. This can help control I/O to each
# device, but you may wish to set this to False to allow multiple SSYNC
# requests (up to the above replication_concurrency setting) per device.
# replication_one_per_device = True
#
# Number of seconds to wait for an existing replication device lock before
# giving up.
# replication_lock_timeout = 15
#
# These next two settings control when the SSYNC subrequest handler will
# abort an incoming SSYNC attempt. An abort will occur if there are at
# least threshold number of failures and the value of failures / successes
# exceeds the ratio. The defaults of 100 and 1.0 mean that at least 100
# failures have to occur and there have to be more failures than successes for
# an abort to occur.
# replication_failure_threshold = 100
# replication_failure_ratio = 1.0
#
# Use splice() for zero-copy object GETs. This requires Linux kernel
# version 3.0 or greater. If you set "splice = yes" but the kernel
# does not support it, error messages will appear in the object server
# logs at startup, but your object servers should continue to function.
#
# splice = no
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =
[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift
#recon_lock_path = /var/lock
[object-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# concurrency = 1
# stats_interval = 300
#
# default is rsync, alternative is ssync
# sync_method = rsync
#
# max duration of a partition rsync
# rsync_timeout = 900
#
# bandwidth limit for rsync in kB/s. 0 means unlimited
# rsync_bwlimit = 0
#
# passed to rsync for io op timeout
# rsync_io_timeout = 30
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# NOTE: Objects that are already compressed (for example: .tar.gz, .mp3) might
# slow down the syncing process.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::object
#
# node_timeout = <whatever's in the DEFAULT section or 10>
# max duration of an http request; this is for REPLICATE finalization calls and
# so should be longer than node_timeout
# http_timeout = 60
#
# attempts to kill all workers if nothing replicates for lockup_timeout seconds
# lockup_timeout = 1800
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
#
# limits how long rsync error log lines are
# 0 means to log the entire line
# rsync_error_log_line_length = 0
#
# handoffs_first and handoff_delete are options for a special case
# such as disks filling up in the cluster. These two options SHOULD NOT BE
# CHANGED, except in such extreme situations (e.g. disks filled up or
# about to fill up; in any case, DO NOT let your drives fill up).
# handoffs_first is the flag to replicate handoffs prior to canonical
# partitions. It allows you to force syncing and deleting of handoffs quickly.
# If set to a True value (e.g. "True" or "1"), partitions
# that are not supposed to be on the node will be replicated first.
# handoffs_first = False
#
# handoff_delete is the number of replicas which are ensured in swift.
# If this is set to a number less than the number of replicas, the
# object-replicator may delete local handoffs even though not all replicas
# are ensured in the cluster. The object-replicator removes a local handoff
# partition directory after syncing the partition when the number of
# successful responses is greater than or equal to this number. By default
# (auto), handoff partitions are removed only when they have successfully
# replicated to all the canonical nodes.
# handoff_delete = auto
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[object-reconstructor]
# You can override the default log routing for this app here (don't use set!):
# Unless otherwise noted, each setting below has the same meaning as described
# in the [object-replicator] section, however these settings apply to the EC
# reconstructor
#
# log_name = object-reconstructor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# daemonize = on
#
# Time in seconds to wait between reconstruction passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# concurrency = 1
# stats_interval = 300
# node_timeout = 10
# http_timeout = 60
# lockup_timeout = 1800
# reclaim_age = 604800
# ring_check_interval = 15
# recon_cache_path = /var/cache/swift
# handoffs_first = False
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[object-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300
# concurrency = 1
# node_timeout = <whatever's in the DEFAULT section or 10>
# slowdown will sleep that amount between objects
# slowdown = 0.01
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[object-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Time in seconds to wait between auditor passes
# interval = 30
#
# You can set the disk chunk size that the auditor uses making it larger if
# you like for more efficient local auditing of larger objects
# disk_chunk_size = 65536
# files_per_second = 20
# concurrency = 1
# bytes_per_second = 10000000
# log_time = 3600
# zero_byte_files_per_second = 50
# recon_cache_path = /var/cache/swift
# Takes a comma separated list of ints. If set, the object auditor will
# increment a counter for every object whose size is <= to the given break
# points and report the result after a full scan.
# object_size_stats =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
# The auditor will clean up old rsync tempfiles after they are "old
# enough" to delete. You can configure the time elapsed in seconds
# before rsync tempfiles will be unlinked; the default value of
# "auto" tries to use object-replicator's rsync_timeout + 900 and falls
# back to 86400 (1 day).
# rsync_tempfile_timeout = auto
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently supported values include 'cProfile' and
# 'eventlet.green.profile'.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file. Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with an unique value like: /var/log/swift/profile/object.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful: this option will enable the profiler to dump data into files
# named with a timestamp, which means many files will pile up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clears the data when the WSGI server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
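The SSYNC abort rule described above for replication_failure_threshold and replication_failure_ratio can be sketched as follows; this is an illustrative reimplementation of the stated rule, not Swift's actual handler:

```python
# Hedged sketch of the SSYNC abort condition: abort when at least
# `threshold` failures have occurred AND failures / successes exceeds
# `ratio`. With the defaults (100 and 1.0), at least 100 failures must
# occur and failures must outnumber successes.
def should_abort(failures, successes, threshold=100, ratio=1.0):
    if failures < threshold:
        return False
    if successes == 0:
        # No successes at all: treat the ratio as infinitely bad.
        return True
    return failures / successes > ratio

print(should_abort(99, 0))     # False: below the failure threshold
print(should_abort(100, 100))  # False: ratio is exactly 1.0, not above it
print(should_abort(150, 100))  # True: 1.5 > 1.0
```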
Find an example object expirer configuration at
etc/object-expirer.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description |
---|---|
log_address = /dev/log |
Location where syslog sends the logs to |
log_custom_handlers = |
Comma-separated list of functions to call to setup custom log handlers. |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_max_line_length = 0 |
Caps the length of log lines to the value given; no limit if set to 0, the default. |
log_name = swift |
Label used when logging |
log_statsd_default_sample_rate = 1.0 |
Defines the probability of sending a sample for any given event or timing measurement. |
log_statsd_host = localhost |
If not set, the StatsD feature is disabled. |
log_statsd_metric_prefix = |
Value will be prepended to every metric sent to the StatsD server. |
log_statsd_port = 8125 |
Port value for the StatsD server. |
log_statsd_sample_rate_factor = 1.0 |
It is not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune the log_statsd_default_sample_rate instead. |
log_udp_host = |
If not set, the UDP receiver for syslog is disabled. |
log_udp_port = 514 |
Port value for UDP receiver, if enabled. |
swift_dir = /etc/swift |
Swift configuration directory |
user = swift |
User to run as |
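The two StatsD sampling knobs above can be pictured as a single emission probability. The sketch below assumes the effective rate is the product of log_statsd_default_sample_rate and log_statsd_sample_rate_factor; treat that combination as an assumption for illustration, not a statement of Swift's exact client code:

```python
import random

# Hypothetical illustration: a metric sample is emitted with probability
# log_statsd_default_sample_rate * log_statsd_sample_rate_factor
# (the product is an assumption made for this sketch).
def should_emit(default_sample_rate=1.0, sample_rate_factor=1.0,
                rng=random.random):
    return rng() < default_sample_rate * sample_rate_factor

# With both knobs at 1.0 every event is sampled.
print(should_emit(1.0, 1.0, rng=lambda: 0.999999))  # True
```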
Configuration option = Default value | Description |
---|---|
use = egg:swift#proxy |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
use = egg:swift#memcache |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
use = egg:swift#catch_errors |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
access_log_address = /dev/log |
Location where syslog sends the logs to. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_facility = LOG_LOCAL0 |
Syslog facility to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_headers = false |
If True, the headers of each request are also logged. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_headers_only = |
If access_log_headers is True and access_log_headers_only is set, only these headers are logged. Multiple headers can be defined as a comma-separated list like this: access_log_headers_only = Host, X-Object-Meta-Mtime |
access_log_level = INFO |
Syslog logging level to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_name = swift |
Label used when logging. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_default_sample_rate = 1.0 |
Defines the probability of sending a sample for any given event or timing measurement. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_host = localhost |
You can use log_statsd_* from [DEFAULT], or override them here to point at the StatsD server. IPv4/IPv6 addresses and hostnames are supported. If a hostname resolves to both an IPv4 and an IPv6 address, the IPv4 address will be used. |
access_log_statsd_metric_prefix = |
Value will be prepended to every metric sent to the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_port = 8125 |
Port value for the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_sample_rate_factor = 1.0 |
It is not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune the log_statsd_default_sample_rate instead. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_udp_host = |
If not set, the UDP receiver for syslog is disabled. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_udp_port = 514 |
Port value for UDP receiver, if enabled. If not set, logging directives from [DEFAULT] without “access_” will be used. |
log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS |
HTTP methods allowed for StatsD logging (comma-separated). Request methods not in this list will have “BAD_METHOD” for the <verb> portion of the metric. |
reveal_sensitive_prefix = 16 |
By default, the X-Auth-Token is logged. To obscure the value, set reveal_sensitive_prefix to the number of characters to log. For example, if set to 12, only the first 12 characters of the token appear in the log. Unauthorized access to the log file will not allow unauthorized usage of the token; however, the first 12 or so characters are unique enough that you can trace/debug token usage. Set to 0 to suppress the token completely (replaced by ‘...’ in the log). Note: reveal_sensitive_prefix will not affect the value logged with access_log_headers=True. |
use = egg:swift#proxy_logging |
Entry point of paste.deploy in the server |
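The reveal_sensitive_prefix behavior described above can be illustrated with a small hypothetical helper (this is not Swift's code, just a sketch of the documented rule):

```python
# Hypothetical helper mirroring reveal_sensitive_prefix: keep the first
# N characters of the token and replace the rest with '...'; a value of
# 0 suppresses the token entirely, logging only '...'.
def obscure_token(token, reveal_sensitive_prefix=16):
    if reveal_sensitive_prefix == 0:
        return '...'
    if len(token) <= reveal_sensitive_prefix:
        return token
    return token[:reveal_sensitive_prefix] + '...'

print(obscure_token('AUTH_tk0123456789abcdef0123', 12))
# AUTH_tk01234...
```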
Configuration option = Default value | Description |
---|---|
auto_create_account_prefix = . |
Prefix to use when automatically creating accounts |
concurrency = 1 |
Number of replication workers to spawn |
expiring_objects_account_name = expiring_objects |
Account name for expiring objects. |
interval = 300 |
Minimum time for a pass to take |
process = 0 |
Which of the parts a particular process will work on. Zero-based: to use 3 processes, run them with process set to 0, 1, and 2. Can also be specified on the command line, which overrides the config value. |
processes = 0 |
How many parts to divide the work into; one part per process. Set to 0 to have a single process do all the work. Can also be specified on the command line, which overrides the config value. |
reclaim_age = 604800 |
Time elapsed in seconds before an object can be reclaimed |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
report_interval = 300 |
Interval in seconds between reports. |
Configuration option = Default value | Description |
---|---|
pipeline = catch_errors proxy-logging cache proxy-server |
Pipeline to use for processing operations. |
[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are realtime, best-effort and idle. I/O niceness
# priority is a number which goes from 0 to 7. The higher the value, the lower
# the I/O priority of the process. Works only with ionice_class.
# ionice_class =
# ionice_priority =
[object-expirer]
# interval = 300
# auto_create_account_prefix = .
# expiring_objects_account_name = expiring_objects
# report_interval = 300
# concurrency is the level of concurrency to use to do the work; this value
# must be set to at least 1
# concurrency = 1
# processes is how many parts to divide the work into, one part per process
# that will be doing the work
# processes set to 0 means that a single process will be doing all the work
# processes can also be specified on the command line and will override the
# config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the
# config value
# process is "zero based"; if you want to use 3 processes, you should run
# processes with process set to 0, 1, and 2
# process = 0
# The expirer will re-attempt expiring if the source object is not available,
# up to reclaim_age seconds, before it gives up and deletes the entry in the
# queue.
# reclaim_age = 604800
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are realtime, best-effort and idle. I/O niceness
# priority is a number which goes from 0 to 7. The higher the value, the lower
# the I/O priority of the process. Works only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options
[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options
[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port = 514
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host =
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set only
# these headers are logged. Multiple headers can be defined as comma separated
# list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to 12, only the first 12 characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# 12 or so characters is unique enough that you can trace/debug
# token usage. Set to 0 to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix = 16
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
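The processes/process division of labor described in the expirer comments above can be sketched as follows. Swift actually distributes work by hashing task entries, so the index-based split here is purely illustrative of the zero-based partitioning:

```python
# Illustrative sketch of dividing expiry work into `processes` parts,
# where each daemon handles the part matching its zero-based `process`
# index. (Swift hashes task container names; this uses list indices
# only to illustrate the partitioning scheme.)
def my_share(tasks, process, processes):
    if processes <= 0:
        return list(tasks)  # a single process does all the work
    return [t for i, t in enumerate(tasks) if i % processes == process]

tasks = ['obj-%d' % i for i in range(7)]
print(my_share(tasks, 0, 3))  # ['obj-0', 'obj-3', 'obj-6']
print(my_share(tasks, 2, 3))  # ['obj-2', 'obj-5']
```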
Find an example container server configuration at
etc/container-server.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description |
---|---|
allowed_sync_hosts = 127.0.0.1 |
The list of hosts that are allowed to send syncs to. |
backlog = 4096 |
Maximum number of allowed pending TCP connections |
bind_ip = 0.0.0.0 |
IP Address for server to bind to |
bind_port = 6001 |
Port for server to bind to |
bind_timeout = 30 |
Seconds to attempt bind before giving up |
db_preallocation = off |
If you don’t mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation. |
devices = /srv/node |
Parent directory of where devices are mounted |
disable_fallocate = false |
Disable “fast fail” fallocate checks if the underlying filesystem does not support it. |
eventlet_debug = false |
If true, turn on debug logging for eventlet |
fallocate_reserve = 0 |
You can set fallocate_reserve to the number of bytes you’d like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they’re out of space early. |
log_address = /dev/log |
Location where syslog sends the logs to |
log_custom_handlers = |
Comma-separated list of functions to call to setup custom log handlers. |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_max_line_length = 0 |
Caps the length of log lines to the value given; no limit if set to 0, the default. |
log_name = swift |
Label used when logging |
log_statsd_default_sample_rate = 1.0 |
Defines the probability of sending a sample for any given event or timing measurement. |
log_statsd_host = localhost |
If not set, the StatsD feature is disabled. |
log_statsd_metric_prefix = |
Value will be prepended to every metric sent to the StatsD server. |
log_statsd_port = 8125 |
Port value for the StatsD server. |
log_statsd_sample_rate_factor = 1.0 |
It is not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune the log_statsd_default_sample_rate instead. |
log_udp_host = |
If not set, the UDP receiver for syslog is disabled. |
log_udp_port = 514 |
Port value for UDP receiver, if enabled. |
max_clients = 1024 |
Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker and raising the number of workers can lessen the impact that a CPU-intensive or blocking request can have on other requests served by the same worker. If the maximum number of clients is set to one, then a given worker will not perform another call while processing, allowing other workers a chance to process it. |
mount_check = true |
Whether or not to check if the devices are mounted to prevent accidentally writing to the root device |
swift_dir = /etc/swift |
Swift configuration directory |
user = swift |
User to run as |
workers = auto |
Override the number of pre-forked processes that will accept connections. By increasing the number of workers to a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests. |
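As the object-server sample above notes, fallocate_reserve accepts either a byte count or a percentage of disk space when the value ends with a ‘%’. A hedged sketch of interpreting such a value (a hypothetical helper, not Swift's parser):

```python
# Hypothetical sketch of interpreting a fallocate_reserve value:
# a trailing '%' means a percentage of the disk, otherwise the value
# is an absolute number of bytes.
def parse_fallocate_reserve(value):
    value = value.strip()
    if value.endswith('%'):
        return float(value[:-1]), True   # (amount, is_percent)
    return float(value), False

print(parse_fallocate_reserve('1%'))           # (1.0, True)
print(parse_fallocate_reserve('10737418240'))  # (10737418240.0, False)
```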
Configuration option = Default value | Description |
---|---|
allow_versions = false |
Enable/Disable object versioning feature |
auto_create_account_prefix = . |
Prefix to use when automatically creating accounts |
conn_timeout = 0.5 |
Connection timeout to external services |
node_timeout = 3 |
Request timeout to external services |
replication_server = false |
If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request. |
set log_address = /dev/log |
Location where syslog sends the logs to |
set log_facility = LOG_LOCAL0 |
Syslog log facility |
set log_level = INFO |
Log level |
set log_name = container-server |
Label to use when logging |
set log_requests = true |
Whether or not to log requests |
use = egg:swift#container |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
pipeline = healthcheck recon container-server |
Pipeline to use for processing operations. |
Configuration option = Default value | Description |
---|---|
concurrency = 8 |
Number of replication workers to spawn |
conn_timeout = 0.5 |
Connection timeout to external services |
interval = 30 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = container-replicator |
Label used when logging |
max_diffs = 100 |
Caps how long the replicator spends trying to sync a database per pass |
node_timeout = 10 |
Request timeout to external services |
per_diff = 1000 |
Limit number of items to get per diff |
reclaim_age = 604800 |
Time elapsed in seconds before an object can be reclaimed |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
rsync_compress = no |
Allow rsync to compress data which is transmitted to the destination node during sync. However, this is applicable only when the destination node is in a different region than the local one. |
rsync_module = {replication_ip}::container |
Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples. |
run_pause = 30 |
Time in seconds to wait between replication passes (deprecated; use interval instead) |
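The {NAME} placeholders accepted by rsync_module above are filled per target node from ring data. A hedged sketch of that substitution using Python's `str.format_map` (the helper name and node dict are illustrative, not Swift's actual code):

```python
def expand_rsync_module(template, node):
    """Expand {NAME} placeholders in an rsync_module template from a
    ring node dict (ip, port, replication_ip, replication_port, ...)."""
    return template.format_map(node)

# Illustrative ring data for one container node.
node = {"replication_ip": "10.0.0.5", "replication_port": 6201,
        "ip": "10.0.0.5", "port": 6201, "region": 1, "zone": 2,
        "device": "sdb1", "meta": ""}

print(expand_rsync_module("{replication_ip}::container", node))
# -> 10.0.0.5::container
```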
Configuration option = Default value | Description |
---|---|
account_suppression_time = 60 |
Seconds to suppress updating an account that has generated an error (timeout, not yet found, etc.) |
concurrency = 4 |
Number of replication workers to spawn |
conn_timeout = 0.5 |
Connection timeout to external services |
interval = 300 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = container-updater |
Label used when logging |
node_timeout = 3 |
Request timeout to external services |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
slowdown = 0.01 |
Time in seconds to wait between objects |
Configuration option = Default value | Description |
---|---|
containers_per_second = 200 |
Maximum containers audited per second. Should be tuned according to individual system specs. 0 is unlimited. |
interval = 1800 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = container-auditor |
Label used when logging |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
Configuration option = Default value | Description |
---|---|
conn_timeout = 5 |
Connection timeout to external services |
container_time = 60 |
Maximum amount of time to spend syncing each container |
internal_client_conf_path = /etc/swift/internal-client.conf |
Internal client config file path |
interval = 300 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = container-sync |
Label used when logging |
request_tries = 3 |
Server errors from requests will be retried by default |
sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888 |
If you need to use an HTTP proxy, set it here. Defaults to no proxy. |
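When sync_proxy lists several proxies, one is chosen at random per use for simple load balancing, as the sample configuration file notes. A minimal sketch of that selection (the helper name is hypothetical):

```python
import random

def pick_sync_proxy(value):
    """Pick one proxy at random from a comma-separated sync_proxy value,
    or return None when the option is unset (meaning: no proxy)."""
    if not value:
        return None
    return random.choice([p.strip() for p in value.split(",")])

proxies = "http://10.1.1.1:8888,http://10.1.1.2:8888"
print(pick_sync_proxy(proxies))  # one of the two proxies, chosen at random
```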
Configuration option = Default value | Description |
---|---|
disable_path = |
An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE” |
use = egg:swift#healthcheck |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
use = egg:swift#recon |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
dump_interval = 5.0 |
the profile data will be dumped to local disk based on above naming rule in this interval (seconds). |
dump_timestamp = false |
Be careful, this option will enable the profiler to dump data into the file with a time stamp which means that there will be lots of files piled up in the directory. |
flush_at_shutdown = false |
Clears the data when the WSGI server shuts down. |
log_filename_prefix = /tmp/log/swift/profile/default.profile |
This prefix is used to combine the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like: /var/log/swift/profile/container.profile |
path = /__profile__ |
This is the path of the URL to access the mini web UI. |
profile_module = eventlet.green.profile |
This option enables you to switch profilers which inherit from the Python standard profiler. Currently, the supported value can be ‘cProfile’, ‘eventlet.green.profile’, etc. |
unwind = false |
Unwind the iterator of applications |
use = egg:swift#xprofile |
Entry point of paste.deploy in the server |
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6201
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# This is a comma separated list of hosts allowed in the X-Container-Sync-To
# field for containers. This is the old-style of using container sync. It is
# strongly recommended to use the new style of a separate
# container-sync-realms.conf -- see container-sync-realms.conf-sample
# allowed_sync_hosts = 127.0.0.1
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = healthcheck recon container-server
[app:container-server]
use = egg:swift#container
# You can override the default log routing for this app here:
# set log_name = container-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# node_timeout = 3
# conn_timeout = 0.5
# allow_versions = false
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =
[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift
[container-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::container
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[container-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300
# concurrency = 4
# node_timeout = 3
# conn_timeout = 0.5
#
# slowdown will sleep that amount between containers
# slowdown = 0.01
#
# Seconds to suppress updating an account that has generated an error
# account_suppression_time = 60
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[container-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each container at most once per interval
# interval = 1800
#
# containers_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[container-sync]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-sync
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
# You can also set this to a comma separated list of HTTP Proxies and they will
# be randomly used (simple load balancing).
# sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888
#
# Will sync each container at most once per interval
# interval = 300
#
# Maximum amount of time to spend syncing each container per pass
# container_time = 60
#
# Maximum amount of time in seconds for the connection attempt
# conn_timeout = 5
# Server errors from requests will be retried by default
# request_tries = 3
#
# Internal client config file path
# internal_client_conf_path = /etc/swift/internal-client.conf
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers which should inherit from the
# Python standard profiler. Currently the supported values are 'cProfile',
# 'eventlet.green.profile', etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file. Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/container.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
Find an example container sync realms configuration at
etc/container-sync-realms.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description |
---|---|
mtime_check_interval = 300 |
The number of seconds between checking the modified time of this config file for changes and therefore reloading it. |
Configuration option = Default value | Description |
---|---|
cluster_clustername1 = https://host1/v1/ |
Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive. |
cluster_clustername2 = https://host2/v1/ |
Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive. |
key = realm1key |
The key is the overall cluster-to-cluster key used in combination with the external users’ key that they set on their containers’ X-Container-Sync-Key metadata header values. These keys will be used to sign each request the container sync daemon makes and used to validate each incoming container sync request. |
key2 = realm1key2 |
The key2 is optional and is an additional key incoming requests will be checked against. This is so you can rotate keys if you wish; you move the existing key to key2 and make a new key value. |
Configuration option = Default value | Description |
---|---|
cluster_clustername3 = https://host3/v1/ |
Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive. |
cluster_clustername4 = https://host4/v1/ |
Any values in the realm section whose names begin with cluster_ will indicate the name and endpoint of a cluster and will be used by external users in their containers’ X-Container-Sync-To metadata header values with the format “realm_name/cluster_name/container_name”. Realm and cluster names are considered case insensitive. |
key = realm2key |
The key is the overall cluster-to-cluster key used in combination with the external users’ key that they set on their containers’ X-Container-Sync-Key metadata header values. These keys will be used to sign each request the container sync daemon makes and used to validate each incoming container sync request. |
key2 = realm2key2 |
The key2 is optional and is an additional key incoming requests will be checked against. This is so you can rotate keys if you wish; you move the existing key to key2 and make a new key value. |
# [DEFAULT]
# The number of seconds between checking the modified time of this config file
# for changes and therefore reloading it.
# mtime_check_interval = 300
# [realm1]
# key = realm1key
# key2 = realm1key2
# cluster_clustername1 = https://host1/v1/
# cluster_clustername2 = https://host2/v1/
#
# [realm2]
# key = realm2key
# key2 = realm2key2
# cluster_clustername3 = https://host3/v1/
# cluster_clustername4 = https://host4/v1/
# Each section name is the name of a sync realm. A sync realm is a set of
# clusters that have agreed to allow container syncing with each other. Realm
# names will be considered case insensitive.
#
# The key is the overall cluster-to-cluster key used in combination with the
# external users' key that they set on their containers' X-Container-Sync-Key
# metadata header values. These keys will be used to sign each request the
# container sync daemon makes and used to validate each incoming container sync
# request.
#
# The key2 is optional and is an additional key incoming requests will be
# checked against. This is so you can rotate keys if you wish; you move the
# existing key to key2 and make a new key value.
#
# Any values in the realm section whose names begin with cluster_ will indicate
# the name and endpoint of a cluster and will be used by external users in
# their containers' X-Container-Sync-To metadata header values with the format
# "realm_name/cluster_name/container_name". Realm and cluster names are
# considered case insensitive.
#
# The endpoint is what the container sync daemon will use when sending out
# requests to that cluster. Keep in mind this endpoint must be reachable by all
# container servers, since that is where the container sync daemon runs. Note
# that the endpoint ends with /v1/ and that the container sync daemon will then
# add the account/container/obj name after that.
#
# Distribute this container-sync-realms.conf file to all your proxy servers
# and container servers.
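Reading the realms file above with Python's `configparser` shows how the cluster_ naming convention maps cluster names to endpoints. This is an illustrative sketch; Swift's own parsing (its ContainerSyncRealms class) additionally handles case-insensitivity of realm names and error logging:

```python
import configparser

# A fragment of the sample realms file shown above.
SAMPLE = """\
[realm1]
key = realm1key
key2 = realm1key2
cluster_clustername1 = https://host1/v1/
cluster_clustername2 = https://host2/v1/
"""

def realm_clusters(cp, realm):
    """Map each cluster name to its endpoint for one realm section.
    configparser lowercases option names, matching the rule that
    cluster names are case insensitive."""
    return {opt[len("cluster_"):]: cp.get(realm, opt)
            for opt in cp.options(realm)
            if opt.startswith("cluster_")}

cp = configparser.ConfigParser()
cp.read_string(SAMPLE)
print(realm_clusters(cp, "realm1"))
# -> {'clustername1': 'https://host1/v1/', 'clustername2': 'https://host2/v1/'}
```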
Find an example container reconciler configuration at
etc/container-reconciler.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description |
---|---|
log_address = /dev/log |
Location where syslog sends the logs to |
log_custom_handlers = |
Comma-separated list of functions to call to setup custom log handlers. |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = swift |
Label used when logging |
log_statsd_default_sample_rate = 1.0 |
Defines the probability of sending a sample for any given event or timing measurement. |
log_statsd_host = localhost |
If not set, the StatsD feature is disabled. |
log_statsd_metric_prefix = |
Value will be prepended to every metric sent to the StatsD server. |
log_statsd_port = 8125 |
Port value for the StatsD server. |
log_statsd_sample_rate_factor = 1.0 |
Not recommended to set this to a value less than 1.0. If the frequency of logging is too high, tune log_statsd_default_sample_rate instead. |
log_udp_host = |
If not set, the UDP receiver for syslog is disabled. |
log_udp_port = 514 |
Port value for UDP receiver, if enabled. |
swift_dir = /etc/swift |
Swift configuration directory |
user = swift |
User to run as |
Configuration option = Default value | Description |
---|---|
use = egg:swift#proxy |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
interval = 30 |
Minimum time for a pass to take |
reclaim_age = 604800 |
Time elapsed in seconds before an object can be reclaimed |
request_tries = 3 |
Server errors from requests will be retried by default |
Configuration option = Default value | Description |
---|---|
use = egg:swift#memcache |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
use = egg:swift#catch_errors |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
use = egg:swift#proxy_logging |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
pipeline = catch_errors proxy-logging cache proxy-server |
Pipeline to use for processing operations. |
[DEFAULT]
# swift_dir = /etc/swift
# user = swift
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[container-reconciler]
# The reconciler will re-attempt reconciliation if the source object is not
# available up to reclaim_age seconds before it gives up and deletes the entry
# in the queue.
# reclaim_age = 604800
# The cycle time of the daemon
# interval = 30
# Server errors from requests will be retried by default
# request_tries = 3
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server
[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options
[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options
[filter:proxy-logging]
use = egg:swift#proxy_logging
[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
Find an example account server configuration at
etc/account-server.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description |
---|---|
backlog = 4096 |
Maximum number of allowed pending TCP connections |
bind_ip = 0.0.0.0 |
IP Address for server to bind to |
bind_port = 6002 |
Port for server to bind to |
bind_timeout = 30 |
Seconds to attempt bind before giving up |
db_preallocation = off |
If you don’t mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation. |
devices = /srv/node |
Parent directory of where devices are mounted |
disable_fallocate = false |
Disable “fast fail” fallocate checks if the underlying filesystem does not support it. |
eventlet_debug = false |
If true, turn on debug logging for eventlet |
fallocate_reserve = 0 |
You can set fallocate_reserve to the number of bytes you’d like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they’re out of space early. |
log_address = /dev/log |
Location where syslog sends the logs to |
log_custom_handlers = |
Comma-separated list of functions to call to setup custom log handlers. |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_max_line_length = 0 |
Caps the length of log lines to the value given; no limit if set to 0, the default. |
log_name = swift |
Label used when logging |
log_statsd_default_sample_rate = 1.0 |
Defines the probability of sending a sample for any given event or timing measurement. |
log_statsd_host = localhost |
If not set, the StatsD feature is disabled. |
log_statsd_metric_prefix = |
Value will be prepended to every metric sent to the StatsD server. |
log_statsd_port = 8125 |
Port value for the StatsD server. |
log_statsd_sample_rate_factor = 1.0 |
Not recommended to set this to a value less than 1.0. If the frequency of logging is too high, tune log_statsd_default_sample_rate instead. |
log_udp_host = |
If not set, the UDP receiver for syslog is disabled. |
log_udp_port = 514 |
Port value for UDP receiver, if enabled. |
max_clients = 1024 |
Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, a given worker will not accept another request while processing, giving other workers a chance to handle it. |
mount_check = true |
Whether or not to check if the devices are mounted to prevent accidentally writing to the root device |
swift_dir = /etc/swift |
Swift configuration directory |
user = swift |
User to run as |
workers = auto |
Override the number of pre-forked workers that will accept connections. By increasing the number of workers to a much higher value, one can reduce the impact of slow file system operations in one request on other requests. |
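fallocate_reserve, listed above, accepts either an absolute byte count or a percentage marked by a trailing `%`. A hedged sketch of that parse (the function name is hypothetical; Swift's real parser lives in its common utilities):

```python
def parse_fallocate_reserve(value):
    """Parse a fallocate_reserve setting: a trailing '%' means a
    percentage of the disk, otherwise an absolute number of bytes.
    Returns (amount, is_percent)."""
    value = value.strip()
    if value.endswith("%"):
        return float(value[:-1]), True
    return float(value), False

print(parse_fallocate_reserve("1%"))  # -> (1.0, True)
print(parse_fallocate_reserve("0"))   # -> (0.0, False)
```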
Configuration option = Default value | Description |
---|---|
auto_create_account_prefix = . |
Prefix to use when automatically creating accounts |
replication_server = false |
If defined, tells server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, server will accept any verb in the request. |
set log_address = /dev/log |
Location where syslog sends the logs to |
set log_facility = LOG_LOCAL0 |
Syslog log facility |
set log_level = INFO |
Log level |
set log_name = account-server |
Label to use when logging |
set log_requests = true |
Whether or not to log requests |
use = egg:swift#account |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
pipeline = healthcheck recon account-server |
Pipeline to use for processing operations. |
Configuration option = Default value | Description |
---|---|
concurrency = 8 |
Number of replication workers to spawn |
conn_timeout = 0.5 |
Connection timeout to external services |
interval = 30 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = account-replicator |
Label used when logging |
max_diffs = 100 |
Caps how long the replicator spends trying to sync a database per pass |
node_timeout = 10 |
Request timeout to external services |
per_diff = 1000 |
Limit number of items to get per diff |
reclaim_age = 604800 |
Time elapsed in seconds before an object can be reclaimed |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
rsync_compress = no |
Allow rsync to compress data which is transmitted to destination node during sync. However, this is applicable only when destination node is in a different region than the local one. |
rsync_module = {replication_ip}::account |
Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples. |
run_pause = 30 |
Time in seconds to wait between replication passes. run_pause is deprecated; use interval instead. |
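As a sketch, the rsync-related replicator options above might be combined as follows. The per-device module suffix and the decision to compress are illustrative assumptions, not defaults:

```ini
[account-replicator]
# {replication_ip} and {device} are substituted from the ring.
rsync_module = {replication_ip}::account_{device}
# Compress rsync traffic; only applies when the destination node is in
# a different region than the local one.
rsync_compress = yes
# Minimum time for a replication pass to take.
interval = 30
```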
Configuration option = Default value | Description |
---|---|
accounts_per_second = 200 |
Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited. |
interval = 1800 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = account-auditor |
Label used when logging |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
Configuration option = Default value | Description |
---|---|
concurrency = 25 |
Number of replication workers to spawn |
conn_timeout = 0.5 |
Connection timeout to external services |
delay_reaping = 0 |
Normally, the reaper begins deleting account information for deleted accounts immediately; you can set this to delay its work. The value is in seconds; 2592000 = 30 days, for example. |
interval = 3600 |
Minimum time for a pass to take |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_level = INFO |
Logging level |
log_name = account-reaper |
Label used when logging |
node_timeout = 10 |
Request timeout to external services |
reap_warn_after = 2592000 |
If the account fails to be reaped due to a persistent error, the account reaper will log a message such as: “Account <name> has not been reaped since <date>”. You can search logs for this message if space is not being reclaimed after you delete account(s). This is in addition to any time requested by delay_reaping. |
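For example, the reaper delay options could be tuned like this (the values are illustrative; 604800 seconds = 7 days):

```ini
[account-reaper]
# Wait 7 days before reaping a deleted account's data.
delay_reaping = 604800
# Log "Account <name> has not been reaped since <date>" if reaping has
# not succeeded 30 days after delay_reaping has elapsed.
reap_warn_after = 2592000
```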
Configuration option = Default value | Description |
---|---|
disable_path = |
An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE” |
use = egg:swift#healthcheck |
Entry point of paste.deploy in the server |
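A sketch of using disable_path to take a node out of a load-balancer pool; the path itself is an arbitrary choice:

```ini
[filter:healthcheck]
use = egg:swift#healthcheck
# While this file exists, the healthcheck URL returns
# "503 Service Unavailable" with a body of "DISABLED BY FILE".
disable_path = /etc/swift/disable_healthcheck
```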
Configuration option = Default value | Description |
---|---|
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
use = egg:swift#recon |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
dump_interval = 5.0 |
The profile data will be dumped to local disk based on the above naming rule in this interval (seconds). |
dump_timestamp = false |
Be careful, this option will enable the profiler to dump data into the file with a time stamp which means that there will be lots of files piled up in the directory. |
flush_at_shutdown = false |
Clears the data when the WSGI server shuts down. |
log_filename_prefix = /tmp/log/swift/profile/default.profile |
This prefix is used to combine the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like: /var/log/swift/profile/account.profile |
path = /__profile__ |
This is the path of the URL to access the mini web UI. |
profile_module = eventlet.green.profile |
This option enables you to switch profilers which inherit from the Python standard profiler. Currently, the supported value can be ‘cProfile’, ‘eventlet.green.profile’, etc. |
unwind = false |
unwind the iterator of applications |
use = egg:swift#xprofile |
Entry point of paste.deploy in the server |
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6202
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = healthcheck recon account-server
[app:account-server]
use = egg:swift#account
# You can override the default log routing for this app here:
# set log_name = account-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server". Default is empty.
# replication_server = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =
[filter:recon]
use = egg:swift#recon
# recon_cache_path = /var/cache/swift
[account-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::account
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[account-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each account at most once per interval
# interval = 1800
#
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[account-reaper]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-reaper
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# concurrency = 25
# interval = 3600
# node_timeout = 10
# conn_timeout = 0.5
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example.
# delay_reaping = 0
#
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
# Account <name> has not been reaped since <date>
# You can search logs for this message if space is not being reclaimed
# after you delete account(s).
# Default is 2592000 seconds (30 days). This is in addition to any time
# requested by delay_reaping.
# reap_warn_after = 2592000
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers which should inherit from python
# standard profiler. Currently the supported value can be 'cProfile',
# 'eventlet.green.profile' etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file. Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/account.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clears the data when the WSGI server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
Find an example proxy server configuration at
etc/proxy-server.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description |
---|---|
account_autocreate = false |
If set to ‘true’, authorized accounts that do not yet exist within the Swift cluster will be automatically created. |
allow_account_management = false |
Whether account PUTs and DELETEs are even callable. |
auto_create_account_prefix = . |
Prefix to use when automatically creating accounts. |
client_chunk_size = 65536 |
Chunk size to read from clients. |
conn_timeout = 0.5 |
Connection timeout to external services. |
deny_host_headers = |
Comma separated list of Host headers to which the proxy will deny requests. |
error_suppression_interval = 60 |
Time in seconds that must elapse since the last error for a node to be considered no longer error limited. |
error_suppression_limit = 10 |
Error count to consider a node error limited. |
log_handoffs = true |
Log handoff requests if handoff logging is enabled and the handoff was not expected. We only log handoffs when we’ve pushed the handoff count further than we would normally have expected under normal circumstances, that is (request_node_count - num_primaries); when handoffs go higher than that, it means one of the primaries must have been skipped because of error limiting before we consumed all of our nodes_left. |
max_containers_per_account = 0 |
If set to a positive value, trying to create a container when the account already has at least this many containers will result in a 403 Forbidden. Note: This is a soft limit, meaning a user might exceed the cap for recheck_account_existence before the 403s kick in. |
max_containers_whitelist = |
Comma-separated list of account names that ignore the max_containers_per_account cap. |
node_timeout = 10 |
Request timeout to external services. |
object_chunk_size = 65536 |
Chunk size to read from object servers. |
object_post_as_copy = true |
Set object_post_as_copy = false to turn on fast posts where only the metadata changes are stored anew and the original data file is kept in place. This makes for quicker posts; but since the container metadata isn’t updated in this mode, features like container sync won’t be able to sync posts. |
post_quorum_timeout = 0.5 |
How long to wait for requests to finish after a quorum has been established. |
put_queue_depth = 10 |
Depth of the proxy put queue. |
read_affinity = r1z1=100, r1z2=200, r2=300 |
Which backend servers to prefer on reads. Format is r<N> for region N or r<N>z<M> for region N, zone M. The value after the equals is the priority; lower numbers are higher priority. Example: first read from region 1 zone 1, then region 1 zone 2, then anything in region 2, then everything else: read_affinity = r1z1=100, r1z2=200, r2=300 Default is empty, meaning no preference. |
recheck_account_existence = 60 |
Cache timeout in seconds to send memcached for account existence. |
recheck_container_existence = 60 |
Cache timeout in seconds to send memcached for container existence. |
recoverable_node_timeout = node_timeout |
Request timeout to external services for requests that, on failure, can be recovered from. For example, object GET. |
request_node_count = 2 * replicas |
Set to the number of nodes to contact for a normal request. You can use ‘* replicas’ at the end to have it use the number given times the number of replicas for the ring being used for the request. |
set log_address = /dev/log |
Location where syslog sends the logs to. |
set log_facility = LOG_LOCAL0 |
Syslog log facility. |
set log_level = INFO |
Log level. |
set log_name = proxy-server |
Label to use when logging. |
sorting_method = shuffle |
Storage nodes can be chosen at random (shuffle), by using timing measurements (timing), or by using an explicit match (affinity). Using timing measurements may allow for lower overall latency, while using affinity allows for finer control. In both the timing and affinity cases, equally-sorting nodes are still randomly chosen to spread load. The valid values for sorting_method are “affinity”, “shuffle”, or “timing”. |
swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control |
These are the headers whose values will only be shown to the list of swift_owners. The exact definition of a swift_owner is up to the auth system in use, but usually indicates administrative responsibilities. |
timing_expiry = 300 |
If the “timing” sorting_method is used, the timings will only be valid for the number of seconds configured by timing_expiry. |
use = egg:swift#proxy |
Entry point of paste.deploy in the server. |
write_affinity = r1, r2 |
This setting lets you trade data distribution for throughput. It makes the proxy server prefer local back-end servers for object PUT requests over non-local ones. Note that only object PUT requests are affected by the write_affinity setting; POST, GET, HEAD, DELETE, OPTIONS, and account/container PUT requests are not affected. The format is r<N> for region N or r<N>z<M> for region N, zone M. If this is set, then when handling an object PUT request, some number (see the write_affinity_node_count setting) of local backend servers will be tried before any nonlocal ones. Example: try to write to regions 1 and 2 before writing to any other nodes: write_affinity = r1, r2 |
write_affinity_node_count = 2 * replicas |
This setting is only useful in conjunction with write_affinity; it governs how many local object servers will be tried before falling back to non-local ones. You can use ‘* replicas’ at the end to have it use the number given times the number of replicas for the ring being used for the request: write_affinity_node_count = 2 * replicas |
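The affinity options above can be combined, for example, in a hypothetical two-region cluster where region 1 is local. This is a sketch, not a recommended layout:

```ini
[app:proxy-server]
use = egg:swift#proxy
# Sort backend nodes by affinity rather than shuffling.
sorting_method = affinity
# Read from region 1 zone 1 first, then region 1 zone 2,
# then anything in region 2, then everything else.
read_affinity = r1z1=100, r1z2=200, r2=300
# For object PUTs, try regions 1 and 2 before any other nodes.
write_affinity = r1, r2
write_affinity_node_count = 2 * replicas
```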
Configuration option = Default value | Description |
---|---|
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server |
Pipeline to use for processing operations. |
Configuration option = Default value | Description |
---|---|
use = egg:swift#account_quotas |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
auth_plugin = password |
Authentication module to use. |
auth_uri = http://keystonehost:5000 |
auth_uri should point to a Keystone service from which users may retrieve tokens. This value is used in the WWW-Authenticate header that auth_token sends with any denial response. |
auth_url = http://keystonehost:35357 |
auth_url points to the Keystone Admin service. This information is used by the middleware to actually query Keystone about the validity of the authentication tokens. It is not necessary to append any Keystone API version number to this URI. |
cache = swift.cache |
cache is set to swift.cache. This means that the middleware will get the Swift memcache from the request environment. |
delay_auth_decision = False |
delay_auth_decision defaults to False, but leaving it as false will prevent other auth systems, staticweb, tempurl, formpost, and ACLs from working. This value must be explicitly set to True. |
include_service_catalog = False |
include_service_catalog defaults to True if not set. This means that when validating a token, the service catalog is retrieved and stored in the X-Service-Catalog header. Since Swift does not use the X-Service-Catalog header, there is no point in getting the service catalog. We recommend you set include_service_catalog to False. |
password = password |
Password for service user. |
paste.filter_factory = keystonemiddleware.auth_token:filter_factory |
Entry point of paste.filter_factory in the server. |
project_domain_id = default |
Service project domain. |
project_name = service |
Service project name. |
user_domain_id = default |
Service user domain. |
username = swift |
Service user name. |
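Putting the options above together, a minimal Identity authentication section might look like this. The keystonehost endpoints and credentials are the placeholder values from the table, not working values:

```ini
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://keystonehost:5000
auth_url = http://keystonehost:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = password
# Reuse the Swift memcache from the request environment.
cache = swift.cache
# Swift does not use X-Service-Catalog, so skip retrieving it.
include_service_catalog = False
# Required for staticweb, tempurl, formpost, and ACLs to work.
delay_auth_decision = True
```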
Configuration option = Default value | Description |
---|---|
memcache_max_connections = 2 |
Max number of connections to each memcached server per worker |
memcache_serialization_support = 2 |
Sets how memcache values are serialized and deserialized |
memcache_servers = 127.0.0.1:11211 |
Comma-separated list of memcached servers ip:port |
set log_address = /dev/log |
Location where syslog sends the logs to |
set log_facility = LOG_LOCAL0 |
Syslog log facility |
set log_headers = false |
If True, log headers in each request |
set log_level = INFO |
Log level |
set log_name = cache |
Label to use when logging |
use = egg:swift#memcache |
Entry point of paste.deploy in the server |
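For example, a cache filter pointing at two memcached nodes (the addresses are placeholders):

```ini
[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.0.0.1:11211,10.0.0.2:11211
# Connections to each memcached server, per worker.
memcache_max_connections = 2
```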
Configuration option = Default value | Description |
---|---|
set log_address = /dev/log |
Location where syslog sends the logs to |
set log_facility = LOG_LOCAL0 |
Syslog log facility |
set log_headers = false |
If True, log headers in each request |
set log_level = INFO |
Log level |
set log_name = catch_errors |
Label to use when logging |
use = egg:swift#catch_errors |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
allow_full_urls = true |
Set this to false if you want to disallow any full URL values to be set for any new X-Container-Sync-To headers. This will keep any new full URLs from coming in, but won’t change any existing values already in the cluster. Updating those will have to be done manually, as knowing what the true realm endpoint should be cannot always be guessed. |
current = //REALM/CLUSTER |
Set this to specify this cluster //realm/cluster as “current” in /info. |
use = egg:swift#container_sync |
Entry point of paste.deploy in the server. |
Configuration option = Default value | Description |
---|---|
max_get_time = 86400 |
Time limit on GET requests (seconds). |
rate_limit_after_segment = 10 |
Rate limit the download of large object segments after this segment is downloaded. |
rate_limit_segments_per_sec = 1 |
Rate limit large object downloads at this rate. |
use = egg:swift#dlo |
Entry point of paste.deploy in the server. |
Configuration option = Default value | Description |
---|---|
allow_versioned_writes = false |
Enables using versioned writes middleware and exposing configuration settings via HTTP GET /info. Warning Setting this option bypasses the |
use = egg:swift#versioned_writes |
Entry point of paste.deploy in the server. |
Configuration option = Default value | Description |
---|---|
set log_address = /dev/log |
Location where syslog sends the logs to |
set log_facility = LOG_LOCAL0 |
Syslog log facility |
set log_headers = false |
If True, log headers in each request |
set log_level = INFO |
Log level |
set log_name = gatekeeper |
Label to use when logging |
use = egg:swift#gatekeeper |
Entry point of paste.deploy in the server |
Configuration option = Default value | Description |
---|---|
disable_path = |
An optional filesystem path, which if present, will cause the healthcheck URL to return “503 Service Unavailable” with a body of “DISABLED BY FILE”. |
use = egg:swift#healthcheck |
Entry point of paste.deploy in the server. |
Configuration option = Default value | Description |
---|---|
allow_names_in_acls = true |
The backwards compatible behavior can be disabled by setting this option to False. |
allow_overrides = true |
This option allows middleware higher in the WSGI pipeline to override auth processing, useful for middleware such as tempurl and formpost. If you know you are not going to use such middleware and you want a bit of extra security, you can set this to False. |
default_domain_id = default |
Name of the default domain. It is identified by its UUID, which by default has the value “default”. |
is_admin = false |
If this option is set to True, a user whose username is the same as the project name and who has any role in the project is granted access rights elevated to be the same as if the user had one of the operator_roles. Note that the condition compares names rather than UUIDs. This option is deprecated and is False by default. |
operator_roles = admin, swiftoperator |
Operator roles define the users who are allowed to manage a tenant and create containers or give ACLs to others. This parameter may be prefixed with an appropriate prefix. |
reseller_admin_role = ResellerAdmin |
The reseller admin role gives the ability to create and delete accounts. |
reseller_prefix = AUTH |
The naming scope for the auth service. |
service_roles = |
When present, this option requires that the X-Service-Token header supplies a token from a user who has a role listed in service_roles. This parameter may be prefixed with an appropriate prefix. |
use = egg:swift#keystoneauth |
Entry point of paste.deploy in the server. |
Configuration option = Default value | Description |
---|---|
list_endpoints_path = /endpoints/ |
Path to list endpoints for an object, account or container. |
use = egg:swift#list_endpoints |
Entry point of paste.deploy in the server. |
Configuration option = Default value | Description |
---|---|
access_log_address = /dev/log |
Location where syslog sends the logs to. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_facility = LOG_LOCAL0 |
Syslog facility to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_headers = false |
If True, log headers in each request. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_headers_only = |
If access_log_headers is True and access_log_headers_only is set only these headers are logged. Multiple headers can be defined as comma separated list like this: access_log_headers_only = Host, X-Object-Meta-Mtime. |
access_log_level = INFO |
Syslog logging level to receive log lines. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_name = swift |
Label used when logging. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_default_sample_rate = 1.0 |
Defines the probability of sending a sample for any given event or timing measurement. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_host = localhost |
You can use log_statsd_* from [DEFAULT], or override them here. StatsD server. IPv4/IPv6 addresses and hostnames are supported. If a hostname resolves to an IPv4 and IPv6 address, the IPv4 address will be used. |
access_log_statsd_metric_prefix = |
Value will be prepended to every metric sent to the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_port = 8125 |
Port value for the StatsD server. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_statsd_sample_rate_factor = 1.0 |
Not recommended to set this to a value less than 1.0, if frequency of logging is too high, tune the log_statsd_default_sample_rate instead. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_udp_host = |
If not set, the UDP receiver for syslog is disabled. If not set, logging directives from [DEFAULT] without “access_” will be used. |
access_log_udp_port = 514 |
Port value for UDP receiver, if enabled. If not set, logging directives from [DEFAULT] without “access_” will be used. |
log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS |
Comma-separated list of HTTP methods allowed for StatsD logging. Request methods not in this list will have “BAD_METHOD” for the <verb> portion of the metric. |
reveal_sensitive_prefix = 16 |
The X-Auth-Token is sensitive data. If revealed to an unauthorised person, they can make requests against an account until the token expires. Set reveal_sensitive_prefix to the number of characters of the token that are logged. For example, with reveal_sensitive_prefix = 12, only the first 12 characters of the token are logged. Or, set to 0 to completely remove the token. Note that reveal_sensitive_prefix will not affect the value logged with access_log_headers=True. |
use = egg:swift#proxy_logging |
Entry point of paste.deploy in the server. |
Configuration option = Default value | Description |
---|---|
allow_overrides = true |
This option allows middleware higher in the WSGI pipeline to override auth processing, useful for middleware such as tempurl and formpost. If you know you are not going to use such middleware and you want a bit of extra security, you can set this to False. |
auth_prefix = /auth/ |
The HTTP request path prefix for the auth service. Swift itself reserves anything beginning with the letter v. |
require_group = |
The require_group parameter names a group that must be presented by either X-Auth-Token or X-Service-Token. Usually this parameter is used only with multiple reseller prefixes (for example, SERVICE_require_group=blah). By default, no group is needed. Do not use .admin. |
reseller_prefix = AUTH |
The naming scope for the auth service. |
set log_address = /dev/log |
Location where syslog sends the logs to. |
set log_facility = LOG_LOCAL0 |
Syslog log facility. |
set log_headers = false |
If True, log headers in each request. |
set log_level = INFO |
Log level. |
set log_name = tempauth |
Label to use when logging. |
storage_url_scheme = default |
Scheme to return with storage urls: http, https, or default (chooses based on what the server is running as) This can be useful with an SSL load balancer in front of a non-SSL server. |
token_life = 86400 |
The number of seconds a token is valid. |
use = egg:swift#tempauth |
Entry point of paste.deploy in the server. |
user_<account>_<user> = <key> [group] [group] [...] [storage_url] |
List of all the accounts and user you want. The following are example entries required for running the tests:
|
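Putting the tempauth options above together, a minimal [filter:tempauth] section might look like the following sketch; the account, user, and key names are placeholders, not values from this document.

```ini
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH
auth_prefix = /auth/
token_life = 86400
storage_url_scheme = default
# user_<account>_<user> = <key> [group] [group] [...] [storage_url]
user_myaccount_myuser = mykey .admin
```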
Configuration option = Default value | Description |
---|---|
dump_interval = 5.0 | The profile data will be dumped to local disk, based on the naming rule described under log_filename_prefix, at this interval (seconds). |
dump_timestamp = false | Be careful: this option makes the profiler dump data into files with a timestamp, which means many files will pile up in the directory. |
flush_at_shutdown = false | Clears the data when the WSGI server shuts down. |
log_filename_prefix = /tmp/log/swift/profile/default.profile | This prefix is combined with the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value like /var/log/swift/profile/account.profile. |
path = /__profile__ | The path of the URL to access the mini web UI. |
profile_module = eventlet.green.profile | This option enables you to switch profilers, which must inherit from the Python standard profiler. Currently the supported values are 'cProfile', 'eventlet.green.profile', etc. |
unwind = false | Unwind the iterator of applications. |
use = egg:swift#xprofile | Entry point of paste.deploy in the server. |
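The naming rule that log_filename_prefix and dump_timestamp describe can be sketched as follows. This is an assumed illustration of the documented rule (prefix combined with process ID, optionally a timestamp), not Swift's actual code; the function name is hypothetical.

```python
import os
import time

def profile_dump_path(log_filename_prefix, dump_timestamp=False):
    # Illustrative sketch: combine the configured prefix with the
    # process ID, and optionally a timestamp, as the option
    # descriptions above explain.
    path = '%s.%d' % (log_filename_prefix, os.getpid())
    if dump_timestamp:
        path = '%s.%d' % (path, int(time.time()))
    return path

print(profile_dump_path('/tmp/log/swift/profile/default.profile'))
```

This also shows why each daemon type needs its own prefix: with a shared prefix, only the process ID distinguishes the files.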
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 8080
# bind_timeout = 30
# backlog = 4096
# swift_dir = /etc/swift
# user = swift
# Enables exposing configuration settings via HTTP GET /info.
# expose_info = true
# Key to use for admin calls that are HMAC signed. Default is empty,
# which will disable admin calls to /info.
# admin_key = secret_admin_key
#
# Allows you to withhold sections from showing up in the public calls
# to /info. You can withhold subsections by separating the dict level with a
# ".". The following would cause the sections 'container_quotas' and 'tempurl'
# to not be listed, and the key max_failed_deletes would be removed from
# bulk_delete. Default value is 'swift.valid_api_versions' which allows all
# registered features to be listed via HTTP GET /info except
# swift.valid_api_versions information
# disallowed_sections = swift.valid_api_versions, container_quotas, tempurl
# Use an integer to override the number of pre-forked processes that will
# accept connections. Should default to the number of effective cpu
# cores in the system. It's worth noting that individual workers will
# use many eventlet co-routines to service multiple concurrent requests.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# Set the following two lines to enable SSL. This is for testing only.
# cert_file = /etc/swift/proxy.crt
# key_file = /etc/swift/proxy.key
#
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_headers = false
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# This optional suffix (default is empty), appended to the swift transaction
# id, allows one to easily figure out which cluster an X-Trans-Id belongs to.
# This is very useful when one is managing more than one swift cluster.
# trans_id_suffix =
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# Use a comma separated list of full url (http://foo.bar:1234,https://foo.bar)
# cors_allow_origin =
# strict_cors_mode = True
#
# client_timeout = 60
# eventlet_debug = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
# This sample pipeline uses tempauth and is used for SAIO dev work and
# testing. See below for a pipeline using keystone.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
# The following pipeline shows keystone integration. Comment out the one
# above and uncomment this one. Additional steps for integrating keystone are
# covered further below in the filter sections for authtoken and keystoneauth.
#pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystoneauth copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
# You can override the default log routing for this app here:
# set log_name = proxy-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_address = /dev/log
#
# log_handoffs = true
# recheck_account_existence = 60
# recheck_container_existence = 60
# object_chunk_size = 65536
# client_chunk_size = 65536
#
# How long the proxy server will wait on responses from the a/c/o servers.
# node_timeout = 10
#
# How long the proxy server will wait for an initial response and to read a
# chunk of data from the object servers while serving GET / HEAD requests.
# Timeouts from these requests can be recovered from so setting this to
# something lower than node_timeout would provide quicker error recovery
# while allowing for a longer timeout for non-recoverable requests (PUTs).
# Defaults to node_timeout; should be overridden if node_timeout is set to a
# high number to prevent client timeouts from firing before the proxy server
# has a chance to retry.
# recoverable_node_timeout = node_timeout
#
# conn_timeout = 0.5
#
# How long to wait for requests to finish after a quorum has been established.
# post_quorum_timeout = 0.5
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval = 60
#
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit = 10
#
# If set to 'true' any authorized user may create and delete accounts; if
# 'false' no one, even authorized, can.
# allow_account_management = false
#
# If set to 'true' authorized accounts that do not yet exist within the Swift
# cluster will be automatically created.
# account_autocreate = false
#
# If set to a positive value, trying to create a container when the account
# already has at least this many containers will result in a 403 Forbidden.
# Note: This is a soft limit, meaning a user might exceed the cap for
# recheck_account_existence before the 403s kick in.
# max_containers_per_account = 0
#
# This is a comma separated list of account hashes that ignore the
# max_containers_per_account cap.
# max_containers_whitelist =
#
# Comma separated list of Host headers to which the proxy will deny requests.
# deny_host_headers =
#
# Prefix used when automatically creating accounts.
# auto_create_account_prefix = .
#
# Depth of the proxy put queue.
# put_queue_depth = 10
#
# Storage nodes can be chosen at random (shuffle), by using timing
# measurements (timing), or by using an explicit match (affinity).
# Using timing measurements may allow for lower overall latency, while
# using affinity allows for finer control. In both the timing and
# affinity cases, equally-sorting nodes are still randomly chosen to
# spread load.
# The valid values for sorting_method are "affinity", "shuffle", or "timing".
# sorting_method = shuffle
#
# If the "timing" sorting_method is used, the timings will only be valid for
# the number of seconds configured by timing_expiry.
# timing_expiry = 300
#
# By default on a GET/HEAD swift will connect to a storage node one at a time
# in a single thread, though there are smarts in the order they are hit. If you
# turn on concurrent_gets below, then replica count threads will be used.
# With addition of the concurrency_timeout option this will allow swift to send
# out GET/HEAD requests to the storage nodes concurrently and answer with the
# first to respond. With an EC policy the parameter only affects HEAD requests.
# concurrent_gets = off
#
# This parameter controls how long to wait before firing off the next
# concurrent_get thread. A value of 0 would be fully concurrent, any other
# number will stagger the firing of the threads. This number should be
# between 0 and node_timeout. The default is whatever you set for the
# conn_timeout parameter.
# concurrency_timeout = 0.5
#
# Set to the number of nodes to contact for a normal request. You can use
# '* replicas' at the end to have it use the number given times the number of
# replicas for the ring being used for the request.
# request_node_count = 2 * replicas
#
# Which backend servers to prefer on reads. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. The value after the equals is
# the priority; lower numbers are higher priority.
#
# Example: first read from region 1 zone 1, then region 1 zone 2, then
# anything in region 2, then everything else:
# read_affinity = r1z1=100, r1z2=200, r2=300
# Default is empty, meaning no preference.
# read_affinity =
#
# Which backend servers to prefer on writes. Format is r<N> for region
# N or r<N>z<M> for region N, zone M. If this is set, then when
# handling an object PUT request, some number (see setting
# write_affinity_node_count) of local backend servers will be tried
# before any nonlocal ones.
#
# Example: try to write to regions 1 and 2 before writing to any other
# nodes:
# write_affinity = r1, r2
# Default is empty, meaning no preference.
# write_affinity =
#
# The number of local (as governed by the write_affinity setting)
# nodes to attempt to contact first, before any non-local ones. You
# can use '* replicas' at the end to have it use the number given
# times the number of replicas for the ring being used for the
# request.
# write_affinity_node_count = 2 * replicas
#
# These are the headers whose values will only be shown to swift_owners. The
# exact definition of a swift_owner is up to the auth system in use, but
# usually indicates administrative responsibilities.
# swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Works only with ionice_class.
# ionice_class =
# ionice_priority =
[filter:tempauth]
use = egg:swift#tempauth
# You can override the default log routing for this filter here:
# set log_name = tempauth
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# The reseller prefix will verify a token begins with this prefix before even
# attempting to validate it. Also, with authorization, only Swift storage
# accounts with this prefix will be authorized by this middleware. Useful if
# multiple auth systems are in use for one Swift cluster.
# The reseller_prefix may contain a comma separated list of items. The first
# item is used for the token as mentioned above. If second and subsequent
# items exist, the middleware will handle authorization for an account with
# that prefix. For example, for prefixes "AUTH, SERVICE", a path of
# /v1/SERVICE_account is handled the same as /v1/AUTH_account. If an empty
# (blank) reseller prefix is required, it must be first in the list. Two
# single quote characters indicates an empty (blank) reseller prefix.
# reseller_prefix = AUTH
#
# The require_group parameter names a group that must be presented by
# either X-Auth-Token or X-Service-Token. Usually this parameter is
# used only with multiple reseller prefixes (e.g., SERVICE_require_group=blah).
# By default, no group is needed. Do not use .admin.
# require_group =
# The auth prefix will cause requests beginning with this prefix to be routed
# to the auth subsystem, for granting tokens, etc.
# auth_prefix = /auth/
# token_life = 86400
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# This specifies what scheme to return with storage urls:
# http, https, or default (chooses based on what the server is running as)
# This can be useful with an SSL load balancer in front of a non-SSL server.
# storage_url_scheme = default
#
# Lastly, you need to list all the accounts/users you want here. The format is:
# user_<account>_<user> = <key> [group] [group] [...] [storage_url]
# or if you want underscores in <account> or <user>, you can base64 encode them
# (with no equal signs) and use this format:
# user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]
# There are special groups of:
# .reseller_admin = can do anything to any account for this auth
# .admin = can do anything within the account
# If neither of these groups are specified, the user can only access containers
# that have been explicitly allowed for them by a .admin or .reseller_admin.
# The trailing optional storage_url allows you to specify an alternate url to
# hand back to the user upon authentication. If not specified, this defaults to
# $HOST/v1/<reseller_prefix>_<account> where $HOST will do its best to resolve
# to what the requester would need to use to reach this host.
# Here are example entries, required for running the tests:
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service
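The user64_ form mentioned earlier, for account or user names containing underscores, can be generated as sketched below: standard base64 with the '=' padding stripped, as the comments above describe. The helper function name is made up for illustration.

```python
import base64

def user64_option_name(account, user):
    # Base64-encode each name, stripping the '=' padding, to build the
    # user64_<account_b64>_<user_b64> option name that tempauth expects
    # when <account> or <user> contain underscores.
    enc = lambda s: base64.b64encode(s.encode()).decode().rstrip('=')
    return 'user64_%s_%s' % (enc(account), enc(user))

print(user64_option_name('my_account', 'my_user'))
# -> user64_bXlfYWNjb3VudA_bXlfdXNlcg
```

Encoding removes the ambiguity that literal underscores in the names would otherwise create in the `user_<account>_<user>` form.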
# To enable Keystone authentication, you need to have the auth token
# middleware configured first. An example is shown below; refer to
# keystone's documentation for details about the different settings.
#
# You'll also need to have the keystoneauth middleware enabled and have it in
# your main pipeline, as shown in the sample pipeline at the top of this file.
#
# The following parameters are known to work with keystonemiddleware v2.3.0
# (above v2.0.0), but checking the latest information in the wiki page[1]
# is recommended.
# 1. http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html#configuration
#
# [filter:authtoken]
# paste.filter_factory = keystonemiddleware.auth_token:filter_factory
# auth_uri = http://keystonehost:5000
# auth_url = http://keystonehost:35357
# auth_plugin = password
# project_domain_id = default
# user_domain_id = default
# project_name = service
# username = swift
# password = password
#
# delay_auth_decision defaults to False, but leaving it as false will
# prevent other auth systems, staticweb, tempurl, formpost, and ACLs from
# working. This value must be explicitly set to True.
# delay_auth_decision = False
#
# cache = swift.cache
# include_service_catalog = False
#
# [filter:keystoneauth]
# use = egg:swift#keystoneauth
# The reseller_prefix option lists account namespaces that this middleware is
# responsible for. The prefix is placed before the Keystone project id.
# For example, for project 12345678, and prefix AUTH, the account is
# named AUTH_12345678 (i.e., path is /v1/AUTH_12345678/...).
# Several prefixes are allowed by specifying a comma-separated list
# as in: "reseller_prefix = AUTH, SERVICE". The empty string indicates a
# single blank/empty prefix. If an empty prefix is required in a list of
# prefixes, a value of '' (two single quote characters) indicates a
# blank/empty prefix. Except for the blank/empty prefix, an underscore ('_')
# character is appended to the value unless already present.
# reseller_prefix = AUTH
#
# The user must have at least one role named by operator_roles on a
# project in order to create, delete and modify containers and objects
# and to set and read privileged headers such as ACLs.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_operator_roles applies to the /v1/SERVICE_<project>
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
# operator_roles = admin, swiftoperator
#
# The reseller admin role has the ability to create and delete accounts
# reseller_admin_role = ResellerAdmin
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra security,
# you can set this to false.
# allow_overrides = true
#
# If the service_roles parameter is present, an X-Service-Token must be
# present in the request that when validated, grants at least one role listed
# in the parameter. The X-Service-Token may be scoped to any project.
# If there are several reseller prefix items, you can prefix the
# parameter so it applies only to those accounts (for example
# the parameter SERVICE_service_roles applies to the /v1/SERVICE_<project>
# path). If you omit the prefix, the option applies to all reseller
# prefix items. For the blank/empty prefix, prefix with '' (do not put
# underscore after the two single quote characters).
# By default, no service_roles are required.
# service_roles =
#
# For backwards compatibility, keystoneauth will match names in cross-tenant
# access control lists (ACLs) when both the requesting user and the tenant
# are in the default domain i.e the domain to which existing tenants are
# migrated. The default_domain_id value configured here should be the same as
# the value used during migration of tenants to keystone domains.
# default_domain_id = default
#
# For a new installation, or an installation in which keystone projects may
# move between domains, you should disable backwards compatible name matching
# in ACLs by setting allow_names_in_acls to false:
# allow_names_in_acls = true
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
# This facility may be used to temporarily remove a Swift node from a load
# balancer pool during maintenance or upgrade (remove the file to allow the
# node back into the load balancer pool).
# disable_path =
[filter:cache]
use = egg:swift#memcache
# You can override the default log routing for this filter here:
# set log_name = cache
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# If not set here, the value for memcache_servers will be read from
# memcache.conf (see memcache.conf-sample) or lacking that file, it will
# default to the value below. You can specify multiple servers separated with
# commas, as in: 10.1.2.3:11211,10.1.2.4:11211 (IPv6 addresses must
# follow rfc3986 section-3.2.2, i.e. [::1]:11211)
# memcache_servers = 127.0.0.1:11211
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
# If not set here, the value for memcache_serialization_support will be read
# from /etc/swift/memcache.conf (see memcache.conf-sample).
# To avoid an instant full cache flush, existing installations should
# upgrade with 0, then set to 1 and reload, then after some time (24 hours)
# set to 2 and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support = 2
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections = 2
#
# More options documented in memcache.conf-sample
[filter:ratelimit]
use = egg:swift#ratelimit
# You can override the default log routing for this filter here:
# set log_name = ratelimit
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# clock_accuracy should represent how accurate the proxy servers' system clocks
# are with each other. 1000 means that all the proxies' clock are accurate to
# each other within 1 millisecond. No ratelimit should be higher than the
# clock accuracy.
# clock_accuracy = 1000
#
# max_sleep_time_seconds = 60
#
# log_sleep_time_seconds of 0 means disabled
# log_sleep_time_seconds = 0
#
# allows for slow rates (e.g. running up to 5 seconds behind) to catch up.
# rate_buffer_seconds = 5
#
# account_ratelimit of 0 means disabled
# account_ratelimit = 0
# DEPRECATED- these will continue to work but will be replaced
# by the X-Account-Sysmeta-Global-Write-Ratelimit flag.
# Please see ratelimiting docs for details.
# these are comma separated lists of account names
# account_whitelist = a,b
# account_blacklist = c,d
# with container_ratelimit_x = r
# for containers of size x limit write requests per second to r. The container
# rate will be linearly interpolated from the values given. With the values
# below, a container of size 5 will get a rate of 75.
# container_ratelimit_0 = 100
# container_ratelimit_10 = 50
# container_ratelimit_50 = 20
# Similarly to the above container-level write limits, the following will limit
# container GET (listing) requests.
# container_listing_ratelimit_0 = 100
# container_listing_ratelimit_10 = 50
# container_listing_ratelimit_50 = 20
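The linear interpolation described in the comments above can be sketched as follows; this is an illustrative helper, not Swift's actual code. With the sample values, a container of size 5 gets a rate of 75.

```python
def container_ratelimit(size, limits):
    # Illustrative sketch of the documented behavior: limits is a sorted
    # list of (container_size, max_rate) pairs taken from the
    # container_ratelimit_N settings; rates between configured sizes are
    # linearly interpolated.
    if size <= limits[0][0]:
        return limits[0][1]
    for (s0, r0), (s1, r1) in zip(limits, limits[1:]):
        if size <= s1:
            return r0 + (r1 - r0) * (size - s0) / (s1 - s0)
    return limits[-1][1]

limits = [(0, 100), (10, 50), (50, 20)]
print(container_ratelimit(5, limits))  # -> 75.0
```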
[filter:domain_remap]
use = egg:swift#domain_remap
# You can override the default log routing for this filter here:
# set log_name = domain_remap
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# storage_domain = example.com
# path_root = v1
# Browsers can convert a host header to lowercase, so check that reseller
# prefix on the account is the correct case. This is done by comparing the
# items in the reseller_prefixes config option to the found prefix. If they
# match except for case, the item from reseller_prefixes will be used
# instead of the found reseller prefix. When none match, the default reseller
# prefix is used. When no default reseller prefix is configured, any request
# with an account prefix not in that list will be ignored by this middleware.
# reseller_prefixes = AUTH
# default_reseller_prefix =
[filter:catch_errors]
use = egg:swift#catch_errors
# You can override the default log routing for this filter here:
# set log_name = catch_errors
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
[filter:cname_lookup]
# Note: this middleware requires python-dnspython
use = egg:swift#cname_lookup
# You can override the default log routing for this filter here:
# set log_name = cname_lookup
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# Specify the storage_domain that match your cloud, multiple domains
# can be specified separated by a comma
# storage_domain = example.com
#
# lookup_depth = 1
# Note: Put staticweb just after your auth filter(s) in the pipeline
[filter:staticweb]
use = egg:swift#staticweb
# You can override the default log routing for this filter here:
# set log_name = staticweb
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
# Note: Put tempurl before dlo, slo and your auth filter(s) in the pipeline
[filter:tempurl]
use = egg:swift#tempurl
# The methods allowed with Temp URLs.
# methods = GET HEAD PUT POST DELETE
#
# The headers to remove from incoming requests. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. incoming_allow_headers is a list of exceptions to these
# removals.
# incoming_remove_headers = x-timestamp
#
# The headers allowed as exceptions to incoming_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# incoming_allow_headers =
#
# The headers to remove from outgoing responses. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. outgoing_allow_headers is a list of exceptions to these
# removals.
# outgoing_remove_headers = x-object-meta-*
#
# The headers allowed as exceptions to outgoing_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# outgoing_allow_headers = x-object-meta-public-*
# Note: Put formpost just before your auth filter(s) in the pipeline
[filter:formpost]
use = egg:swift#formpost
# Note: Just needs to be placed before the proxy-server in the pipeline.
[filter:name_check]
use = egg:swift#name_check
# forbidden_chars = '"`<>
# maximum_length = 255
# forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$
[filter:list-endpoints]
use = egg:swift#list_endpoints
# list_endpoints_path = /endpoints/
[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port = 514
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host =
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set, only
# these headers are logged. Multiple headers can be defined as comma separated
# list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to 12, only the first 12 characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# 12 or so characters are unique enough that you can trace/debug
# token usage. Set to 0 to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix = 16
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request methods
# not in this list will have "BAD_METHOD" for the <verb> portion of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
#
# Note: The double proxy-logging in the pipeline is not a mistake. The
# left-most proxy-logging is there to log requests that were handled in
# middleware and never made it through to the right-most middleware (and
# proxy server). Double logging is prevented for normal requests. See
# proxy-logging docs.
# Note: Put before both ratelimit and auth in the pipeline.
[filter:bulk]
use = egg:swift#bulk
# max_containers_per_extraction = 10000
# max_failed_extractions = 1000
# max_deletes_per_request = 10000
# max_failed_deletes = 1000
#
# In order to keep a connection active during a potentially long bulk request,
# Swift may return whitespace prepended to the actual response body. This
# whitespace will be yielded no more than every yield_frequency seconds.
# yield_frequency = 10
#
# Note: The following parameter is used during a bulk delete of objects and
# their container. Such a delete would frequently fail because it is very
# likely that not all replicated objects have been deleted by the time the
# middleware got a successful response. The number of retries can be
# configured; the number of seconds to wait between each retry is 1.5**retry.
# delete_container_retry_count = 0
#
# To speed up the bulk delete process, multiple deletes may be executed in
# parallel. Avoid setting this too high, as it gives clients a force multiplier
# which may be used in DoS attacks. The suggested range is between 2 and 10.
# delete_concurrency = 2
# Note: Put after auth and staticweb in the pipeline.
[filter:slo]
use = egg:swift#slo
# max_manifest_segments = 1000
# max_manifest_size = 2097152
#
# Rate limiting applies only to segments smaller than this size (bytes).
# rate_limit_under_size = 1048576
#
# Start rate-limiting SLO segment serving after the Nth small segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. 0 means no rate-limiting.
# rate_limit_segments_per_sec = 1
#
# Time limit on GET requests (seconds)
# max_get_time = 86400
#
# When deleting with ?multipart-manifest=delete, multiple deletes may be
# executed in parallel. Avoid setting this too high, as it gives clients a
# force multiplier which may be used in DoS attacks. The suggested range is
# between 2 and 10.
# delete_concurrency = 2
# Note: Put after auth and staticweb in the pipeline.
# If you don't put it in the pipeline, it will be inserted for you.
[filter:dlo]
use = egg:swift#dlo
# Start rate-limiting DLO segment serving after the Nth segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. 0 means no rate-limiting.
# rate_limit_segments_per_sec = 1
#
# Time limit on GET requests (seconds)
# max_get_time = 86400
# Note: Put after auth in the pipeline.
[filter:container-quotas]
use = egg:swift#container_quotas
# Note: Put after auth in the pipeline.
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:gatekeeper]
use = egg:swift#gatekeeper
# Set this to false if you want to allow clients to set arbitrary X-Timestamps
# on uploaded objects. This may be used to preserve timestamps when migrating
# from a previous storage system, but risks allowing users to upload
# difficult-to-delete data.
# shunt_inbound_x_timestamp = true
#
# You can override the default log routing for this filter here:
# set log_name = gatekeeper
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
[filter:container_sync]
use = egg:swift#container_sync
# Set this to false to disallow full URL values in any new X-Container-Sync-To
# headers. This keeps new full URLs from coming in, but does not change any
# existing values already in the cluster. Those must be updated manually,
# because the true realm endpoint cannot always be guessed.
# allow_full_urls = true
# Set this to specify this cluster's //realm/cluster as "current" in /info
# current = //REALM/CLUSTER
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after catch_errors, gatekeeper and healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers, which should inherit from the
# Python standard profiler. Currently the supported values include 'cProfile'
# and 'eventlet.green.profile'.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file. Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/proxy.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# The profile data will be dumped to local disk based on the above naming rule
# at this interval (seconds).
# dump_interval = 5.0
#
# Be careful: this option makes the profiler dump data into timestamped files,
# which means many files will pile up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shuts down.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false
# Note: Put after slo, dlo in the pipeline.
# If you don't put it in the pipeline, it will be inserted automatically.
[filter:versioned_writes]
use = egg:swift#versioned_writes
# Enables using versioned writes middleware and exposing configuration
# settings via HTTP GET /info.
# WARNING: Setting this option bypasses the "allow_versions" option
# in the container configuration file, which will be eventually
# deprecated. See documentation for more details.
# allow_versioned_writes = false
# Note: Put after auth and before dlo and slo middlewares.
# If you don't put it in the pipeline, it will be inserted for you.
[filter:copy]
use = egg:swift#copy
# Set object_post_as_copy = false to turn on fast posts where only the metadata
# changes are stored anew and the original data file is kept in place. This
# makes for quicker posts.
# When object_post_as_copy is set to True, a POST request will be transformed
# into a COPY request where source and destination objects are the same.
# object_post_as_copy = true
# Note: To enable encryption, add the following 2 dependent pieces of crypto
# middleware to the proxy-server pipeline. They should be to the right of all
# other middleware apart from the final proxy-logging middleware, and in the
# order shown in this example:
# <other middleware> keymaster encryption proxy-logging proxy-server
[filter:keymaster]
use = egg:swift#keymaster
# Sets the root secret from which encryption keys are derived. This must be set
# before first use to a value that is a base64 encoding of at least 32 bytes.
# The security of all encrypted data critically depends on this key, therefore
# it should be set to a high-entropy value. For example, a suitable value may
# be obtained by base-64 encoding a 32 byte (or longer) value generated by a
# cryptographically secure random number generator. Changing the root secret is
# likely to result in data loss.
encryption_root_secret = changeme
# Sets the path from which the keymaster config options should be read. This
# allows multiple processes which need to be encryption-aware (for example,
# proxy-server and container-sync) to share the same config file, ensuring
# that the encryption keys used are the same. The format expected is similar
# to other config files, with a single [keymaster] section and a single
# encryption_root_secret option. If this option is set, the root secret
# MUST NOT be set in proxy-server.conf.
# keymaster_config_path =
[filter:encryption]
use = egg:swift#encryption
# By default all PUT or POST'ed object data and/or metadata will be encrypted.
# Encryption of new data and/or metadata may be disabled by setting
# disable_encryption to True. However, all encryption middleware should remain
# in the pipeline in order for existing encrypted data to be read.
# disable_encryption = False
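The keymaster comments above note that encryption_root_secret must be the base64 encoding of at least 32 bytes produced by a cryptographically secure random number generator. One way to generate such a value (a sketch; any equivalent CSPRNG-based method works) is:

```python
# Generate a suitable encryption_root_secret: base64-encode 32 random
# bytes drawn from the operating system's CSPRNG.
import base64
import os

secret = base64.b64encode(os.urandom(32)).decode('ascii')
print(secret)  # paste this value into the [filter:keymaster] section
```

Remember that changing the root secret later is likely to result in data loss, so generate it once and keep it safe.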
You can find memcache configuration file examples for the proxy server
at etc/memcache.conf-sample
in the source code repository.
The available configuration options are:
Configuration option = Default value | Description
---|---
connect_timeout = 0.3 | Timeout in seconds (float) for connection.
io_timeout = 2.0 | Timeout in seconds (float) for read and write.
memcache_max_connections = 2 | Max number of connections to each memcached server per worker.
memcache_serialization_support = 2 | Sets how memcache values are serialized and deserialized.
memcache_servers = 127.0.0.1:11211 | Comma-separated list of memcached servers ip:port.
pool_timeout = 1.0 | Timeout in seconds (float) for pooled connection.
tries = 3 | Number of servers to retry on failures getting a pooled connection.
[memcache]
# You can use this single conf file instead of having memcache_servers set in
# several other conf files under [filter:cache] for example. You can specify
# multiple servers separated with commas, as in: 10.1.2.3:11211,10.1.2.4:11211
# (IPv6 addresses must follow rfc3986 section-3.2.2, i.e. [::1]:11211)
# memcache_servers = 127.0.0.1:11211
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
# To avoid an instant full cache flush, existing installations should
# upgrade with 0, then set to 1 and reload, then after some time (24 hours)
# set to 2 and reload.
# In the future, the ability to use pickle serialization will be removed.
# memcache_serialization_support = 2
#
# Sets the maximum number of connections to each memcached server per worker
# memcache_max_connections = 2
#
# Timeout for connection
# connect_timeout = 0.3
# Timeout for pooled connection
# pool_timeout = 1.0
# Number of servers to retry on failures getting a pooled connection
# tries = 3
# Timeout for reads and writes
# io_timeout = 2.0
Find an example rsyncd configuration at etc/rsyncd.conf-sample
in
the source code repository.
The available configuration options are:
Configuration option = Default value | Description
---|---
gid = swift | Group ID for rsyncd.
log file = /var/log/rsyncd.log | Log file for rsyncd.
pid file = /var/run/rsyncd.pid | PID file for rsyncd.
uid = swift | User ID for rsyncd.
max connections = | Maximum number of connections for rsyncd. This option should be set for each account, container, or object.
path = /srv/node | Working directory for rsyncd to use. This option should be set for each account, container, or object.
read only = false | Set read only. This option should be set for each account, container, or object.
lock file = | Lock file for rsyncd. This option should be set for each account, container, or object.
If rsync_module
includes the device, you can tune rsyncd to permit 4
connections per device instead of simply allowing 8 connections for all
devices:
rsync_module = {replication_ip}::object_{device}
If devices in your object ring are named sda, sdb, and sdc:
[object_sda]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sda.lock
[object_sdb]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sdb.lock
[object_sdc]
max connections = 4
path = /srv/node
read only = false
lock file = /var/lock/object_sdc.lock
To emulate the deprecated vm_test_mode = yes
option, set:
rsync_module = {replication_ip}::object{replication_port}
For example, on a SAIO, you would set the following rsyncd configuration:
[object6010]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6010.lock
[object6020]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6020.lock
[object6030]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6030.lock
[object6040]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6040.lock
In OpenStack Object Storage, data is placed across different tiers of failure domains. First, data is spread across regions, then zones, then servers, and finally across drives. Data is placed to get the highest failure domain isolation. If you deploy multiple regions, the Object Storage service places the data across the regions. Within a region, each replica of the data should be stored in unique zones, if possible. If there is only one zone, data should be placed on different servers. And if there is only one server, data should be placed on different drives.
Regions are widely separated installations with a high-latency or otherwise constrained network link between them. Zones are arbitrarily assigned, and it is up to the administrator of the Object Storage cluster to choose an isolation level and attempt to maintain the isolation level through appropriate zone assignment. For example, a zone may be defined as a rack with a single power source. Or a zone may be a DC room with a common utility provider. Servers are identified by a unique IP/port. Drives are locally attached storage volumes identified by mount point.
In small clusters (five nodes or fewer), everything is normally in a single zone. Larger Object Storage deployments may assign zone designations differently; for example, an entire cabinet or rack of servers may be designated as a single zone to maintain replica availability if the cabinet becomes unavailable (for example, due to failure of the top of rack switches or a dedicated circuit). In very large deployments, such as service provider level deployments, each zone might have an entirely autonomous switching and power infrastructure, so that even the loss of an electrical circuit or switching aggregator would result in the loss of a single replica at most.
For ease of maintenance on OpenStack Object Storage, Rackspace recommends that you set up at least five nodes. Each node is assigned its own zone (for a total of five zones), which gives you host level redundancy. This enables you to take down a single zone for maintenance and still guarantee object availability in the event that another zone fails during your maintenance.
You could keep each server in its own cabinet to achieve cabinet level isolation, but you may wish to wait until your Object Storage service is better established before developing cabinet-level isolation. OpenStack Object Storage is flexible; if you later decide to change the isolation level, you can take down one zone at a time and move them to appropriate new homes.
OpenStack Object Storage does not require RAID. In fact, most RAID configurations cause significant performance degradation. The main reason for using a RAID controller is the battery-backed cache. It is very important for data integrity reasons that when the operating system confirms a write has been committed that the write has actually been committed to a persistent location. Most disks lie about hardware commits by default, instead writing to a faster write cache for performance reasons. In most cases, that write cache exists only in non-persistent memory. In the case of a loss of power, this data may never actually get committed to disk, resulting in discrepancies that the underlying file system must handle.
OpenStack Object Storage works best on the XFS file system, and this document
assumes that the hardware being used is configured appropriately to be mounted
with the nobarrier
option. For more information, see the XFS FAQ.
To get the most out of your hardware, it is essential that every disk used in OpenStack Object Storage is configured as a standalone, individual RAID 0 disk; in the case of 6 disks, you would have six RAID 0s or one JBOD. Some RAID controllers do not support JBOD or do not support battery backed cache with JBOD. To ensure the integrity of your data, you must ensure that the individual drive caches are disabled and the battery backed cache in your RAID card is configured and used. Failure to configure the controller properly in this case puts data at risk in the case of sudden loss of power.
You can also use hybrid drives or similar options for battery backed up cache configurations without a RAID controller.
Rate limiting in OpenStack Object Storage is implemented as a pluggable middleware that you configure on the proxy server. Rate limiting is performed on requests that result in database writes to the account and container SQLite databases. It uses memcached and is dependent on the proxy servers having highly synchronized time. The rate limits are limited by the accuracy of the proxy server clocks.
All configuration is optional. If no account or container limits are provided, no rate limiting occurs. Available configuration options include:
Configuration option = Default value | Description
---|---
account_blacklist = c,d | Comma separated lists of account names that will not be allowed. Returns a 497 response.
account_ratelimit = 0 | If set, will limit PUT and DELETE requests to /account_name/container_name. Number is in requests per second.
account_whitelist = a,b | Comma separated lists of account names that will not be rate limited.
clock_accuracy = 1000 | Represents how accurate the proxy servers' system clocks are with each other. 1000 means that all the proxies' clocks are accurate to each other within 1 millisecond. No ratelimit should be higher than the clock accuracy.
container_listing_ratelimit_0 = 100 | With container_listing_ratelimit_x = r, for containers of size x, limit container GET (listing) requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_listing_ratelimit_10 = 50 | With container_listing_ratelimit_x = r, for containers of size x, limit container GET (listing) requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_listing_ratelimit_50 = 20 | With container_listing_ratelimit_x = r, for containers of size x, limit container GET (listing) requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_ratelimit_0 = 100 | With container_ratelimit_x = r, for containers of size x, limit write requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_ratelimit_10 = 50 | With container_ratelimit_x = r, for containers of size x, limit write requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
container_ratelimit_50 = 20 | With container_ratelimit_x = r, for containers of size x, limit write requests per second to r. The container rate will be linearly interpolated from the values given. With the default values, a container of size 5 will get a rate of 75.
log_sleep_time_seconds = 0 | To allow visibility into rate limiting, set this value > 0; all sleeps greater than the number will be logged.
max_sleep_time_seconds = 60 | App will immediately return a 498 response if the necessary sleep time ever exceeds the given max_sleep_time_seconds.
rate_buffer_seconds = 5 | Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger number will result in larger spikes in rate but better average accuracy.
set log_address = /dev/log | Location where syslog sends the logs to.
set log_facility = LOG_LOCAL0 | Syslog log facility.
set log_headers = false | If True, log headers in each request.
set log_level = INFO | Log level.
set log_name = ratelimit | Label to use when logging.
use = egg:swift#ratelimit | Entry point of paste.deploy in the server.
The container rate limits are linearly interpolated from the values given. A sample container rate limiting could be:
container_ratelimit_100 = 100
container_ratelimit_200 = 50
container_ratelimit_500 = 20
This would result in:
Container Size | Rate Limit
---|---
0-99 | No limiting
100 | 100
150 | 75
500 | 20
1000 | 20
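The linear interpolation described above can be sketched in a few lines of Python (a hypothetical helper for illustration, not the actual ratelimit middleware code):

```python
# Interpolate a per-container write limit between configured thresholds,
# mirroring how container_ratelimit_x values are applied.
def interpolated_rate(container_size, limits):
    """limits: sorted list of (size, rate) pairs from container_ratelimit_N."""
    if container_size < limits[0][0]:
        return None  # below the smallest threshold: no limiting
    for (s1, r1), (s2, r2) in zip(limits, limits[1:]):
        if s1 <= container_size < s2:
            # linear interpolation between the two surrounding thresholds
            return r1 + (r2 - r1) * (container_size - s1) / (s2 - s1)
    return limits[-1][1]  # at or beyond the largest threshold

limits = [(100, 100), (200, 50), (500, 20)]
print(interpolated_rate(150, limits))   # -> 75.0
print(interpolated_rate(1000, limits))  # -> 20
```

With the sample configuration above, a container of size 150 sits halfway between the 100 and 200 thresholds, so its rate is halfway between 100 and 50 requests per second.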
Provides an easy way to monitor whether the Object Storage proxy server is
alive. If you access the proxy with the path /healthcheck
, it responds with
OK
in the response body, which monitoring tools can use.
Configuration option = Default value | Description
---|---
disable_path = | An optional filesystem path which, if present, will cause the healthcheck URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
use = egg:swift#healthcheck | Entry point of paste.deploy in the server.
Middleware that translates container and account parts of a domain to path parameters that the proxy server understands.
Configuration option = Default value | Description
---|---
default_reseller_prefix = | If the reseller prefixes do not match, the default reseller prefix is used. When no default reseller prefix is configured, any request with an account prefix not in that list will be ignored by this middleware.
path_root = v1 | Root path.
reseller_prefixes = AUTH | Browsers can convert a host header to lowercase, so check that the reseller prefix on the account is the correct case. This is done by comparing the items in the reseller_prefixes config option to the found prefix. If they match except for case, the item from reseller_prefixes will be used instead of the found reseller prefix.
set log_address = /dev/log | Location where syslog sends the logs to.
set log_facility = LOG_LOCAL0 | Syslog log facility.
set log_headers = false | If True, log headers in each request.
set log_level = INFO | Log level.
set log_name = domain_remap | Label to use when logging.
storage_domain = example.com | Domain that matches your cloud. Multiple domains can be specified using a comma-separated list.
use = egg:swift#domain_remap | Entry point of paste.deploy in the server.
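To illustrate the translation this middleware performs, here is a rough sketch (illustrative only; the real middleware also handles reseller prefix case-checking, and the account and domain names are hypothetical):

```python
# Sketch of domain_remap's host-to-path translation:
# container.account.storage_domain/path -> /path_root/account/container/path
def remap(host, path, storage_domain='example.com', path_root='v1'):
    suffix = '.' + storage_domain
    if not host.endswith(suffix):
        return path  # not our storage domain: leave the path untouched
    parts = host[:-len(suffix)].split('.')
    if len(parts) == 2:
        container, account = parts
        return '/%s/%s/%s%s' % (path_root, account, container, path)
    if len(parts) == 1:
        return '/%s/%s%s' % (path_root, parts[0], path)
    return path

print(remap('container.AUTH_test.example.com', '/object'))
# -> /v1/AUTH_test/container/object
```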
Middleware that translates an unknown domain in the host header to
something that ends with the configured storage_domain
by looking up
the given domain’s CNAME record in DNS.
Configuration option = Default value | Description
---|---
lookup_depth = 1 | Because CNAMES can be recursive, specifies the number of levels through which to search.
set log_address = /dev/log | Location where syslog sends the logs to.
set log_facility = LOG_LOCAL0 | Syslog log facility.
set log_headers = false | If True, log headers in each request.
set log_level = INFO | Log level.
set log_name = cname_lookup | Label to use when logging.
storage_domain = example.com | Domain that matches your cloud. Multiple domains can be specified using a comma-separated list.
use = egg:swift#cname_lookup | Entry point of paste.deploy in the server.
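The bounded CNAME chase that lookup_depth controls can be sketched as follows (the DNS query itself is stubbed out with a hypothetical zone table; the domain names are invented for illustration):

```python
# Follow CNAME records until a name inside storage_domain is found,
# giving up after lookup_depth hops.
CNAME_RECORDS = {  # hypothetical zone data standing in for real DNS
    'www.customer.com': 'storage.customer.com',
    'storage.customer.com': 'cluster.example.com',
}

def chase_cname(host, storage_domain='example.com', lookup_depth=1):
    for _ in range(lookup_depth):
        if host.endswith(storage_domain):
            return host  # already inside the cloud's storage domain
        host = CNAME_RECORDS.get(host)
        if host is None:
            return None  # no CNAME record: the domain cannot be remapped
    return host if host.endswith(storage_domain) else None

print(chase_cname('www.customer.com', lookup_depth=2))
# -> cluster.example.com
```

With the default lookup_depth = 1, the two-hop chain above would not resolve; raising the depth allows deeper CNAME chains at the cost of extra DNS queries per request.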
Allows the creation of URLs to provide temporary access to objects. For example, a website may wish to provide a link to download a large object in OpenStack Object Storage, but the Object Storage account has no public access. The website can generate a URL that provides GET access for a limited time to the resource. When the web browser user clicks on the link, the browser downloads the object directly from Object Storage, eliminating the need for the website to act as a proxy for the request. If the user shares the link widely, or accidentally posts it on a forum, the direct access is still limited to the expiration time set when the website created the link.
A temporary URL is the typical URL associated with an object, with two additional query parameters:
temp_url_sig
temp_url_expires
An example of a temporary URL:
https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485
To create temporary URLs, first set the X-Account-Meta-Temp-URL-Key
header
on your Object Storage account to an arbitrary string. This string serves as a
secret key. For example, to set a key of b3968d0207b54ece87cccc06515a89d4
by using the swift
command-line tool:
$ swift post -m "Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4"
Next, generate an HMAC-SHA1 (RFC 2104) signature to specify:
GET
or PUT
).X-Account-Meta-Temp-URL-Key
.Here is code generating the signature for a GET for 24 hours on
/v1/AUTH_account/container/object
:
import hmac
from hashlib import sha1
from time import time
method = 'GET'
duration_in_seconds = 60*60*24
expires = int(time() + duration_in_seconds)
path = '/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object'
key = 'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
# hmac requires bytes; encode the key and message before signing
sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
s = 'https://{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}'
url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=expires)
Any alteration of the resource path or query arguments results in a 401 Unauthorized error. Similarly, a PUT where GET was the allowed method returns a 401 error. HEAD is allowed if GET or PUT is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Object Storage.
Note
Changing the X-Account-Meta-Temp-URL-Key
invalidates any previously
generated temporary URLs within 60 seconds, which is the memcache time for
the key. Object Storage supports up to two keys, specified by
X-Account-Meta-Temp-URL-Key
and X-Account-Meta-Temp-URL-Key-2
.
Signatures are checked against both keys, if present. This process enables
key rotation without invalidating all existing temporary URLs.
Object Storage includes the swift-temp-url
script that generates the
query parameters automatically:
$ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey\
/v1/AUTH_account/container/object?\
temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&\
temp_url_expires=1374497657
Because this command only returns the path, you must prefix the Object Storage
host name (for example, https://swift-cluster.example.com
).
With GET Temporary URLs, a Content-Disposition
header is set on the
response so that browsers interpret this as a file attachment to be saved. The
file name chosen is based on the object name, but you can override this with a
filename
query parameter. The following example specifies a filename of
My Test File.pdf
:
https://swift-cluster.example.com/v1/AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485&
filename=My+Test+File.pdf
If you do not want the object to be downloaded, you can cause
Content-Disposition: inline
to be set on the response by adding the
inline
parameter to the query string, as follows:
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=da39a3ee5e6b4b0d3255bfef95601890afd80709&
temp_url_expires=1323479485&inline
To enable Temporary URL functionality, edit /etc/swift/proxy-server.conf
to
add tempurl
to the pipeline
variable defined in the [pipeline:main]
section. The tempurl
entry should appear immediately before the
authentication filters in the pipeline, such as authtoken
, tempauth
or
keystoneauth
. For example:
[pipeline:main]
pipeline = healthcheck cache tempurl authtoken keystoneauth proxy-server
Configuration option = Default value | Description
---|---
incoming_allow_headers = | Headers allowed as exceptions to incoming_remove_headers. Simply a whitespace-delimited list of header names; names can optionally end with '*' to indicate a prefix match.
incoming_remove_headers = x-timestamp | Headers to remove from incoming requests. Simply a whitespace-delimited list of header names; names can optionally end with '*' to indicate a prefix match.
methods = GET HEAD PUT POST DELETE | HTTP methods allowed with Temporary URLs.
outgoing_allow_headers = x-object-meta-public-* | Headers allowed as exceptions to outgoing_remove_headers. Simply a whitespace-delimited list of header names; names can optionally end with '*' to indicate a prefix match.
outgoing_remove_headers = x-object-meta-* | Headers to remove from outgoing responses. Simply a whitespace-delimited list of header names; names can optionally end with '*' to indicate a prefix match.
use = egg:swift#tempurl | Entry point of paste.deploy in the server.
Name Check is a filter that disallows any paths that contain defined forbidden characters or that exceed a defined length.
Configuration option = Default value | Description
---|---
forbidden_chars = '"`<> | Characters that are not allowed in a name.
forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$ | Substrings to forbid, using regular expression syntax.
maximum_length = 255 | Maximum length of a name.
use = egg:swift#name_check | Entry point of paste.deploy in the server.
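The three checks in the table can be sketched as follows (an illustrative mock-up using the default values, not the actual name_check middleware code):

```python
import re

# Mirror name_check's three constraints: maximum length,
# forbidden characters, and the forbidden regular expression.
FORBIDDEN_CHARS = "'\"`<>"
FORBIDDEN_REGEXP = r'/\./|/\.\./|/\.$|/\.\.$'
MAXIMUM_LENGTH = 255

def name_allowed(path):
    if len(path) > MAXIMUM_LENGTH:
        return False  # name too long
    if any(c in path for c in FORBIDDEN_CHARS):
        return False  # contains a forbidden character
    if re.search(FORBIDDEN_REGEXP, path):
        return False  # contains a forbidden substring such as /../
    return True

print(name_allowed('/v1/AUTH_test/container/object'))      # -> True
print(name_allowed('/v1/AUTH_test/container/../object'))   # -> False
```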
To change the OpenStack Object Storage internal limits, update the values in
the swift-constraints
section in the swift.conf
file. Use caution when
you update these values because they affect the performance in the entire
cluster.
Configuration option = Default value | Description
---|---
account_listing_limit = 10000 | The default (and maximum) number of items returned for an account listing request.
container_listing_limit = 10000 | The default (and maximum) number of items returned for a container listing request.
extra_header_count = 0 | By default the maximum number of allowed headers depends on the number of max allowed metadata settings plus a default value of 32 for regular http headers. If for some reason this is not enough (custom middleware for example) it can be increased with the extra_header_count constraint.
max_account_name_length = 256 | The maximum number of bytes in the utf8 encoding of an account name.
max_container_name_length = 256 | The maximum number of bytes in the utf8 encoding of a container name.
max_file_size = 5368709122 | The largest normal object that can be saved in the cluster. This is also the limit on the size of each segment of a large object when using the large object manifest support. This value is set in bytes. Setting it to lower than 1MiB will cause some tests to fail. It is STRONGLY recommended to leave this value at the default (5 * 2**30 + 2).
max_header_size = 8192 | The max number of bytes in the utf8 encoding of each header. 8192 is used as the default because eventlet uses 8192 as the maximum size of a header line. You may need to increase this value when using identity v3 API tokens including more than 7 catalog entries. See also include_service_catalog in proxy-server.conf-sample (documented in overview_auth.rst).
max_meta_count = 90 | The max number of metadata keys that can be stored on a single account, container, or object.
max_meta_name_length = 128 | The max number of bytes in the utf8 encoding of the name portion of a metadata header.
max_meta_overall_size = 4096 | The max number of bytes in the utf8 encoding of the metadata (keys + values).
max_meta_value_length = 256 | The max number of bytes in the utf8 encoding of a metadata value.
max_object_name_length = 1024 | The max number of bytes in the utf8 encoding of an object name.
valid_api_versions = v0,v1,v2 | No help text available for this option.
Use the swift-dispersion-report
tool to measure overall cluster health.
This tool checks if a set of deliberately distributed containers and objects
are currently in their proper places within the cluster. For instance, a common
deployment has three replicas of each object. The health of that object can be
measured by checking if each replica is in its proper place. If only 2 of the 3
are in place, the object's health can be said to be at 66.66%, where 100% would
be perfect. A single object's health, especially an older object's, usually
reflects the health of the entire partition the object is in. If you make
enough objects on a distinct percentage of the partitions in the cluster, you
get a good estimate of the overall cluster health.
In practice, about 1% partition coverage seems to balance well between accuracy
and the amount of time it takes to gather results. To provide this health
value, you must create an account solely for this usage. Next, you must place
the containers and objects throughout the system so that they are on distinct
partitions. Use the swift-dispersion-populate
tool to create random
container and object names until they fall on distinct partitions.
Last, and repeatedly for the life of the cluster, you must run the
swift-dispersion-report
tool to check the health of each container and
object.
These tools must have direct access to the entire cluster and ring files. Installing them on a proxy server suffices.
The swift-dispersion-populate
and swift-dispersion-report
commands both
use the same /etc/swift/dispersion.conf
configuration file. Example
dispersion.conf
file:
[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing
You can use configuration options to specify the dispersion coverage, which
defaults to 1%, retries, concurrency, and so on. However, the defaults are
usually fine. After the configuration is in place, run the
swift-dispersion-populate
tool to populate the containers and objects
throughout the cluster. Now that those containers and objects are in place, you
can run the swift-dispersion-report
tool to get a dispersion report or view
the overall health of the cluster. Here is an example of a cluster in perfect
health:
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 19s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
Now, deliberately double the weight of a device in the object ring (with replication turned off) and re-run the dispersion report to show what impact that has:
$ swift-ring-builder object.builder set_weight d0 200
$ swift-ring-builder object.builder rebalance
...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 8s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
There were 1763 partitions missing one copy.
77.56% of object copies found (6094 of 7857)
Sample represents 1.00% of the object partition space
You can see that the health of the objects in the cluster has gone down significantly. Of course, this test environment has just four devices; in a production environment with many devices, the impact of one device change is much smaller. Next, run the replicators to get everything put back into place, and then rerun the dispersion report:
# start object replicators and monitor logs until they're caught up ...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 17s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
The dispersion report can also be output in JSON format, which allows it to be consumed more easily by third-party utilities:
$ swift-dispersion-report -j
{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container":
{"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected":
12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}
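Because the JSON form is stable, a monitoring script can consume it directly. A minimal sketch that parses the example report above and computes how many copies are missing per ring:

```python
import json

# The example swift-dispersion-report -j output shown above.
report_json = """{"object": {"retries:": 0, "missing_two": 0,
 "copies_found": 7863, "missing_one": 0, "copies_expected": 7863,
 "pct_found": 100.0, "overlapping": 0, "missing_all": 0},
 "container": {"retries:": 0, "missing_two": 0, "copies_found": 12534,
 "missing_one": 0, "copies_expected": 12534, "pct_found": 100.0,
 "overlapping": 15, "missing_all": 0}}"""

report = json.loads(report_json)
for ring in ('container', 'object'):
    stats = report[ring]
    missing = stats['copies_expected'] - stats['copies_found']
    print('%s: %.2f%% found, %d copies missing'
          % (ring, stats['pct_found'], missing))
```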
Configuration option = Default value | Description |
---|---|
auth_key = testing |
No help text available for this option. |
auth_url = http://localhost:8080/auth/v1.0 |
Endpoint for auth server, such as keystone |
auth_user = test:tester |
Default user for dispersion in this context |
auth_version = 1.0 |
Indicates which version of auth |
concurrency = 25 |
Number of replication workers to spawn |
container_populate = yes |
No help text available for this option. |
container_report = yes |
No help text available for this option. |
dispersion_coverage = 1.0 |
No help text available for this option. |
dump_json = no |
No help text available for this option. |
endpoint_type = publicURL |
Indicates whether endpoint for auth is public or internal |
keystone_api_insecure = no |
Allow accessing insecure keystone server. The keystone’s certificate will not be verified. |
object_populate = yes |
No help text available for this option. |
object_report = yes |
No help text available for this option. |
project_domain_name = project_domain |
No help text available for this option. |
project_name = project |
No help text available for this option. |
retries = 5 |
No help text available for this option. |
swift_dir = /etc/swift |
Swift configuration directory |
user_domain_name = user_domain |
No help text available for this option. |
This feature is very similar to Dynamic Large Object (DLO) support in that it enables the user to upload many objects concurrently and afterwards download them as a single object. It is different in that it does not rely on eventually consistent container listings to do so. Instead, a user-defined manifest of the object segments is used.
For more information regarding SLO usage and support, please see: Static Large Objects.
Configuration option = Default value | Description |
---|---|
max_get_time = 86400 |
Time limit on GET requests (seconds) |
max_manifest_segments = 1000 |
Maximum number of segments. |
max_manifest_size = 2097152 |
Maximum size of segments. |
min_segment_size = 1048576 |
Minimum size of segments. |
rate_limit_after_segment = 10 |
Rate limit the download of large object segments after this segment is downloaded. |
rate_limit_segments_per_sec = 0 |
Rate limit large object downloads at this rate. |
use = egg:swift#slo |
Entry point of paste.deploy in the server. |
The container_quotas
middleware implements simple quotas that can be
imposed on Object Storage containers by a user with the ability to set
container metadata, most likely the account administrator. This can be useful
for limiting the scope of containers that are delegated to non-admin users,
exposed to form POST uploads, or just as a self-imposed sanity check.
Any object PUT operations that exceed these quotas return a Forbidden (403)
status code.
Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60-second TTL by default), and the inability to reject chunked transfer uploads that exceed the quota (although once the quota is exceeded, new chunked transfers are refused).
Set quotas by adding meta values to the container. These values are validated when you set them:
X-Container-Meta-Quota-Bytes
X-Container-Meta-Quota-Count
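Expressed as raw request metadata, setting a quota is just a container POST that carries these headers. A hypothetical helper (the function name is ours, not part of Swift) that builds them; passing an empty string for a value removes that quota:

```python
def container_quota_headers(max_bytes=None, max_count=None):
    """Build X-Container-Meta-Quota-* headers for a container POST.

    Pass None to leave a quota untouched; pass '' to remove it.
    """
    headers = {}
    if max_bytes is not None:
        headers['X-Container-Meta-Quota-Bytes'] = str(max_bytes)
    if max_count is not None:
        headers['X-Container-Meta-Quota-Count'] = str(max_count)
    return headers

print(container_quota_headers(max_bytes=1000000, max_count=100))
```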
Write requests (PUT, POST) are blocked if a given account quota (in bytes) is exceeded, while DELETE requests are still allowed.
The x-account-meta-quota-bytes
metadata entry must be set to store and
enable the quota. Write requests to this metadata entry are only permitted for
resellers. There is no account quota limitation on a reseller account even if
x-account-meta-quota-bytes
is set.
Any object PUT operations that exceed the quota return a 413 response (request entity too large) with a descriptive body.
The following command uses an admin account that owns the Reseller role to set a quota on the test account:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000
Here is the stat listing of an account where quota has been set:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
Account: AUTH_test
Containers: 0
Objects: 0
Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a
This command removes the account quota:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:
Use bulk-delete to delete multiple files from an account with a single request. The middleware responds to DELETE requests that carry the header ‘X-Bulk-Delete: true_value’. The body of the DELETE request is a newline-separated list of files to delete. The files listed must be URL encoded and in the form:
/container_name/obj_name
If all files are deleted successfully (or did not exist), the operation returns HTTPOk. If any file fails to delete, the operation returns HTTPBadGateway. In both cases, the response body is a JSON dictionary that shows the number of files that were successfully deleted or not found. The files that failed are listed.
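The request body is easy to build programmatically. A sketch (the helper name and example paths are illustrative) that URL encodes each path and joins them with newlines, as the middleware expects:

```python
from urllib.parse import quote

def bulk_delete_body(paths):
    # Each entry must be URL encoded and in /container_name/obj_name form.
    return '\n'.join(quote(path) for path in paths)

body = bulk_delete_body(['/photos/summer 2016/beach.jpg',
                         '/photos/notes.txt'])
print(body)
```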
Configuration option = Default value | Description |
---|---|
delete_container_retry_count = 0 |
This parameter is used during a bulk delete of objects and their container. Such a delete frequently fails because it is likely that not all replicated objects have been deleted by the time the middleware receives a successful response. The number of retries can be configured; the wait between retries is 1.5**retry seconds. |
max_containers_per_extraction = 10000 |
The maximum numbers of containers per extraction. |
max_deletes_per_request = 10000 |
The maximum numbers of deletion per request. |
max_failed_deletes = 1000 |
The maximum number of tries to delete before failure. |
max_failed_extractions = 1000 |
The maximum number of tries to extract before failure. |
use = egg:swift#bulk |
Entry point of paste.deploy in the server. |
yield_frequency = 10 |
In order to keep a connection active during a potentially long bulk request, Swift may return whitespace prepended to the actual response body. This whitespace will be yielded no more than every yield_frequency seconds. |
The swift-drive-audit configuration items reference a script that can be run by using cron to watch for bad drives. If errors are detected, it unmounts the bad drive so that OpenStack Object Storage can work around it. It takes the following options:
Configuration option = Default value | Description |
---|---|
device_dir = /srv/node |
Directory devices are mounted under |
error_limit = 1 |
Number of errors to find before a device is unmounted |
log_address = /dev/log |
Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 |
Syslog log facility |
log_file_pattern = /var/log/kern.*[!.][!g][!z] |
Location of the log file, with a globbing pattern, used to locate device blocks with errors in the log file |
log_level = INFO |
Logging level |
log_max_line_length = 0 |
Caps the length of log lines to the value given; no limit if set to 0, the default. |
log_name = drive-audit |
Label used when logging |
log_to_console = False |
No help text available for this option. |
minutes = 60 |
Number of minutes to look back in |
recon_cache_path = /var/cache/swift |
Directory where stats for a few items will be stored |
regex_pattern_1 = \berror\b.*\b(dm-[0-9]{1,2}\d?)\b |
No help text available for this option. |
unmount_failed_device = True |
No help text available for this option. |
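Since the script is designed to be run from cron, a typical deployment adds an entry like the following (the path and hourly interval are assumptions; adjust them for your packaging and environment):

```
# /etc/cron.d/swift-drive-audit: check kernel logs for drive errors hourly
0 * * * * root /usr/bin/swift-drive-audit /etc/swift/drive-audit.conf
```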
Middleware that enables you to upload objects to a cluster by using an HTML form POST.
The format of the form is:
<form action="<swift-url>" method="POST"
enctype="multipart/form-data">
<input type="hidden" name="redirect" value="<redirect-url>" />
<input type="hidden" name="max_file_size" value="<bytes>" />
<input type="hidden" name="max_file_count" value="<count>" />
<input type="hidden" name="expires" value="<unix-timestamp>" />
<input type="hidden" name="signature" value="<hmac>" />
<input type="hidden" name="x_delete_at" value="<unix-timestamp>"/>
<input type="hidden" name="x_delete_after" value="<seconds>"/>
<input type="file" name="file1" /><br />
<input type="submit" />
</form>
In the form:
action="<swift-url>"
The URL to the Object Storage destination, such as https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.
The name of each uploaded file is appended to the specified swift-url. So, you can upload directly to the root of a container with a URL like https://swift-cluster.example.com/v1/AUTH_account/container/.
Optionally, you can include an object prefix to separate different users’ uploads, such as https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.
method="POST"
The method must be POST.
enctype="multipart/form-data"
The enctype must be set to multipart/form-data.
name="redirect"
The URL to which the browser is redirected after the upload completes. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload and a possible message for further information if there was an error (such as "max_file_size exceeded").
name="max_file_size"
The maximum number of bytes that can be uploaded in a single file upload.
name="max_file_count"
The maximum number of files that can be uploaded with the form.
name="expires"
The expiration date and time for the form in UNIX Epoch time stamp format. After this date and time, the form is no longer valid.
For example, 1440619048
is equivalent to Mon, Wed, 26 Aug 2015
19:57:28 GMT
.
name="signature"
The HMAC-SHA1 signature of the form. This sample Python code shows how to compute the signature:
import hmac
from hashlib import sha1
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://myserver.com/some-page'
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
    max_file_size, max_file_count, expires)
# Encode to bytes so the example also runs under Python 3
signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
The key is the value of the X-Account-Meta-Temp-URL-Key
header on the
account.
Use the full path from the /v1/
value and onward.
During testing, you can use the swift-form-signature
command-line tool
to compute the expires
and signature
values.
name="x_delete_at"
The date and time in UNIX Epoch time stamp format when the object will be removed.
For example, 1440619048
is equivalent to Mon, Wed, 26 Aug 2015
19:57:28 GMT
.
This attribute enables you to specify the X-Delete-At header value in the form POST.
name="x_delete_after"
The number of seconds after which the object will be removed, applied through the X-Delete-At metadata item. This attribute enables you to specify the X-Delete-After header value in the form POST.
type="file" name="filexx"
One or more files to upload. If any attributes follow the file attribute, they are not sent with the sub-request, because on the server side all attributes in the file cannot be parsed unless the whole file is read into memory, and the server does not have enough memory to service such requests. So, attributes that follow the file attribute are ignored.
Configuration option = Default value | Description |
---|---|
use = egg:swift#formpost |
Entry point of paste.deploy in the server |
When configured, this middleware serves container data as a static web site with index file and error file resolution and optional file listings. This mode is normally only active for anonymous requests.
Configuration option = Default value | Description |
---|---|
use = egg:swift#staticweb |
Entry point of paste.deploy in the server |
The Swift3 middleware emulates the S3 REST API on top of Object Storage.
The following operations are currently supported:
To use this middleware, first download the latest version from its repository to your proxy servers.
$ git clone https://git.openstack.org/openstack/swift3
Then, install it using standard Python mechanisms, such as:
# python setup.py install
Alternatively, if you have configured the Ubuntu Cloud Archive, you may use:
# apt-get install swift-plugin-s3
To add this middleware to your configuration, add the swift3 middleware in front of the swauth middleware, and before any other middleware that looks at Object Storage requests (like rate limiting).
Ensure that your proxy-server.conf
file contains swift3 in the pipeline and
the [filter:swift3]
section, as shown below:
[pipeline:main]
pipeline = catch_errors healthcheck cache swift3 swauth proxy-server
[filter:swift3]
use = egg:swift3#swift3
Next, configure the tool that you use to connect to the S3 API. For S3curl, for example, you must add your host IP to the @endpoints array (line 33 in s3curl.pl):
my @endpoints = ( '1.2.3.4');
Now you can send commands to the endpoint, such as:
$ ./s3curl.pl - 'a7811544507ebaf6c9a7a8804f47ea1c' \
-key 'a7d8e981-e296-d2ba-cb3b-db7dd23159bd' \
-get - -s -v http://1.2.3.4:8080
To set up your client, ensure that you are using the EC2 credentials, which can be downloaded from the API Endpoints tab of the dashboard. The host should point to the Object Storage node’s hostname. The client also has to use the old-style calling format, not the hostname-based container format. Here is an example client setup using the Python boto library on a locally installed all-in-one Object Storage installation.
import boto
import boto.s3.connection

connection = boto.s3.connection.S3Connection(
aws_access_key_id='a7811544507ebaf6c9a7a8804f47ea1c',
aws_secret_access_key='a7d8e981-e296-d2ba-cb3b-db7dd23159bd',
port=8080,
host='127.0.0.1',
is_secure=False,
calling_format=boto.s3.connection.OrdinaryCallingFormat())
The endpoint listing middleware enables third-party services that use data locality information to integrate with OpenStack Object Storage. This middleware reduces network overhead and is designed for third-party services that run inside the firewall. Deploy this middleware on a proxy server because usage of this middleware is not authenticated.
Format requests for endpoints, as follows:
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
Use the list_endpoints_path
configuration option in the
proxy_server.conf
file to customize the /endpoints/
path.
Responses are JSON-encoded lists of endpoints, as follows:
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
An example response is:
http://10.1.1.1:6000/sda1/2/a/c2/o1
http://10.1.1.1:6000/sda1/2/a/c2
http://10.1.1.1:6000/sda1/2/a
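A consumer can split each returned endpoint into its components with standard URL parsing. A minimal sketch using the example response above:

```python
from urllib.parse import urlparse

# Example endpoint listing response, as shown above.
endpoints = [
    'http://10.1.1.1:6000/sda1/2/a/c2/o1',
    'http://10.1.1.1:6000/sda1/2/a/c2',
    'http://10.1.1.1:6000/sda1/2/a',
]
for url in endpoints:
    parsed = urlparse(url)
    # Path layout is /{dev}/{part}/{acc}[/{cont}[/{obj}]]
    dev, part = parsed.path.split('/')[1:3]
    print(parsed.hostname, parsed.port, dev, part)
```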
Object Storage sends logs to the system logging facility only. By default, all Object Storage services log to /var/log/swift/swift.log, using the local0, local1, and local2 syslog facilities.
There are no new, updated, or deprecated options in Mitaka for OpenStack Object Storage.
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
The following options allow configuration of the APIs that Orchestration supports. Currently this includes compatibility APIs for CloudFormation and CloudWatch and a native API.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
action_retry_limit = 5 |
(Integer) Number of times to retry to bring a resource to a non-error state. Set to 0 to disable retries. |
enable_stack_abandon = False |
(Boolean) Enable the preview Stack Abandon feature. |
enable_stack_adopt = False |
(Boolean) Enable the preview Stack Adopt feature. |
encrypt_parameters_and_properties = False |
(Boolean) Encrypt template parameters that were marked as hidden and also all the resource properties before storing them in database. |
heat_metadata_server_url = None |
(String) URL of the Heat metadata server. NOTE: Setting this is only needed if you require instances to use a different endpoint than in the keystone catalog |
heat_stack_user_role = heat_stack_user |
(String) Keystone role for heat template-defined users. |
heat_waitcondition_server_url = None |
(String) URL of the Heat waitcondition server. |
heat_watch_server_url = |
(String) URL of the Heat CloudWatch server. |
hidden_stack_tags = data-processing-cluster |
(List) Stacks containing these tag names will be hidden. Multiple tags should be given in a comma-delimited list (eg. hidden_stack_tags=hide_me,me_too). |
max_json_body_size = 1048576 |
(Integer) Maximum raw byte size of JSON request body. Should be larger than max_template_size. |
num_engine_workers = None |
(Integer) Number of heat-engine processes to fork and run. Will default to either 4 or the number of CPUs on the host, whichever is greater. |
observe_on_update = False |
(Boolean) On update, enables heat to collect existing resource properties from reality and converge to updated template. |
stack_action_timeout = 3600 |
(Integer) Timeout in seconds for stack action (ie. create or update). |
stack_domain_admin = None |
(String) Keystone username, a user with roles sufficient to manage users and projects in the stack_user_domain. |
stack_domain_admin_password = None |
(String) Keystone password for stack_domain_admin user. |
stack_scheduler_hints = False |
(Boolean) When this feature is enabled, scheduler hints identifying the heat stack context of a server or volume resource are passed to the configured schedulers in nova and cinder, for creates done using heat resource types OS::Cinder::Volume, OS::Nova::Server, and AWS::EC2::Instance. heat_root_stack_id will be set to the id of the root stack of the resource, heat_stack_id will be set to the id of the resource’s parent stack, heat_stack_name will be set to the name of the resource’s parent stack, heat_path_in_stack will be set to a list of comma delimited strings of stackresourcename and stackname with list[0] being ‘rootstackname’, heat_resource_name will be set to the resource’s name, and heat_resource_uuid will be set to the resource’s orchestration id. |
stack_user_domain_id = None |
(String) Keystone domain ID which contains heat template-defined users. If this option is set, stack_user_domain_name option will be ignored. |
stack_user_domain_name = None |
(String) Keystone domain name which contains heat template-defined users. If stack_user_domain_id option is set, this option is ignored. |
stale_token_duration = 30 |
(Integer) Gap, in seconds, to determine whether the given token is about to expire. |
trusts_delegated_roles = |
(List) Subset of trustor roles to be delegated to heat. If left unset, all roles of a user will be delegated to heat when creating a stack. |
[auth_password] | |
allowed_auth_uris = |
(List) Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified. |
multi_cloud = False |
(Boolean) Allow orchestration of multiple clouds. |
[ec2authtoken] | |
allowed_auth_uris = |
(List) Allowed keystone endpoints for auth_uri when multi_cloud is enabled. At least one endpoint needs to be specified. |
auth_uri = None |
(String) Authentication Endpoint URI. |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
insecure = False |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
multi_cloud = False |
(Boolean) Allow orchestration of multiple clouds. |
[eventlet_opts] | |
client_socket_timeout = 900 |
(Integer) Timeout for client connections’ socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of ‘0’ means wait forever. |
wsgi_keep_alive = True |
(Boolean) If False, closes the client socket connection explicitly. |
[heat_api] | |
backlog = 4096 |
(Integer) Number of backlog requests to configure the socket with. |
bind_host = 0.0.0.0 |
(IP) Address to bind the server. Useful when selecting a particular network interface. |
bind_port = 8004 |
(Port number) The port on which the server will listen. |
cert_file = None |
(String) Location of the SSL certificate file to use for SSL mode. |
key_file = None |
(String) Location of the SSL key file to use for enabling SSL mode. |
max_header_line = 16384 |
(Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). |
tcp_keepidle = 600 |
(Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. |
workers = 0 |
(Integer) Number of workers for Heat service. The default value 0 means that the service will start a number of workers equal to the number of cores on the server. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
[oslo_versionedobjects] | |
fatal_exception_format_errors = False |
(Boolean) Make exception message format errors fatal |
[paste_deploy] | |
api_paste_config = api-paste.ini |
(String) The API paste config file to use. |
flavor = None |
(String) The flavor to use. |
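Taken together, the options above map onto a heat.conf laid out like this (a minimal illustrative fragment, not a recommended configuration; the values shown are the defaults or arbitrary examples):

```
[DEFAULT]
# Hide data-processing stacks from listings (the default)
hidden_stack_tags = data-processing-cluster
stack_action_timeout = 3600
num_engine_workers = 4

[heat_api]
bind_host = 0.0.0.0
bind_port = 8004
workers = 0

[paste_deploy]
api_paste_config = api-paste.ini
```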
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
instance_connection_https_validate_certificates = 1 |
(String) Instance connection to CFN/CW API validate certs if SSL is used. |
instance_connection_is_secure = 0 |
(String) Instance connection to CFN/CW API via https. |
[heat_api_cfn] | |
backlog = 4096 |
(Integer) Number of backlog requests to configure the socket with. |
bind_host = 0.0.0.0 |
(IP) Address to bind the server. Useful when selecting a particular network interface. |
bind_port = 8000 |
(Port number) The port on which the server will listen. |
cert_file = None |
(String) Location of the SSL certificate file to use for SSL mode. |
key_file = None |
(String) Location of the SSL key file to use for enabling SSL mode. |
max_header_line = 16384 |
(Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). |
tcp_keepidle = 600 |
(Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. |
workers = 1 |
(Integer) Number of workers for Heat service. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
enable_cloud_watch_lite = False |
(Boolean) Enable the legacy OS::Heat::CWLiteAlarm resource. |
heat_watch_server_url = |
(String) URL of the Heat CloudWatch server. |
[heat_api_cloudwatch] | |
backlog = 4096 |
(Integer) Number of backlog requests to configure the socket with. |
bind_host = 0.0.0.0 |
(IP) Address to bind the server. Useful when selecting a particular network interface. |
bind_port = 8003 |
(Port number) The port on which the server will listen. |
cert_file = None |
(String) Location of the SSL certificate file to use for SSL mode. |
key_file = None |
(String) Location of the SSL key file to use for enabling SSL mode. |
max_header_line = 16384 |
(Integer) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs.) |
tcp_keepidle = 600 |
(Integer) The value for the socket option TCP_KEEPIDLE. This is the time in seconds that the connection must be idle before TCP starts sending keepalive probes. |
workers = 1 |
(Integer) Number of workers for Heat service. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
heat_metadata_server_url = None |
(String) URL of the Heat metadata server. NOTE: Setting this is only needed if you require instances to use a different endpoint than in the keystone catalog |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
heat_waitcondition_server_url = None |
(String) URL of the Heat waitcondition server. |
The following options allow configuration of the clients that Orchestration uses to talk to other services.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
region_name_for_services = None |
(String) Default region name used to get services endpoints. |
[clients] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = publicURL |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = False |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
cloud_backend = heat.engine.clients.OpenStackClients |
(String) Fully qualified class name to use as a client backend. |
Configuration option = Default value | Description |
---|---|
[clients_aodh] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_barbican] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_ceilometer] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_cinder] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
http_log_debug = False |
(Boolean) Allow client’s debug log output. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_designate] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_glance] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_heat] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
url = |
(String) Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s. |
Configuration option = Default value | Description |
---|---|
[clients_keystone] | |
auth_uri = |
(String) Unversioned keystone url in format like http://0.0.0.0:5000. |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_magnum] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_manila] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_mistral] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_monasca] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_neutron] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_nova] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
http_log_debug = False |
(Boolean) Allow client’s debug log output. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_sahara] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_senlin] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_swift] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_trove] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
Configuration option = Default value | Description |
---|---|
[clients_zaqar] | |
ca_file = None |
(String) Optional CA cert file to use in SSL connections. |
cert_file = None |
(String) Optional PEM-formatted certificate chain file. |
endpoint_type = None |
(String) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
insecure = None |
(Boolean) If set, then the server’s certificate will not be verified. |
key_file = None |
(String) Optional PEM-formatted file that contains the private key. |
These options can also be set in the heat.conf
file.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
client_retry_limit = 2 |
(Integer) Number of times to retry when a client encounters an expected intermittent error. Set to 0 to disable retries. |
convergence_engine = True |
(Boolean) Enables the engine with the convergence architecture. All stacks with this option enabled will be created using the convergence engine. |
default_deployment_signal_transport = CFN_SIGNAL |
(String) Template default for how the server should signal to heat with the deployment output values. CFN_SIGNAL will allow an HTTP POST to a CFN keypair signed URL (requires enabled heat-api-cfn). TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT (requires object-store endpoint which supports TempURL). HEAT_SIGNAL will allow calls to the Heat API resource-signal using the provided keystone credentials. ZAQAR_SIGNAL will create a dedicated zaqar queue to be signaled using the provided keystone credentials. |
default_software_config_transport = POLL_SERVER_CFN |
(String) Template default for how the server should receive the metadata required for software configuration. POLL_SERVER_CFN will allow calls to the cfn API action DescribeStackResource authenticated with the provided keypair (requires enabled heat-api-cfn). POLL_SERVER_HEAT will allow calls to the Heat API resource-show using the provided keystone credentials (requires keystone v3 API, and configured stack_user_* config options). POLL_TEMP_URL will create and populate a Swift TempURL with metadata for polling (requires object-store endpoint which supports TempURL). ZAQAR_MESSAGE will create a dedicated zaqar queue and post the metadata for polling. |
deferred_auth_method = trusts |
(String) Select deferred auth method, stored password or trusts. |
environment_dir = /etc/heat/environment.d |
(String) The directory to search for environment files. |
error_wait_time = 240 |
(Integer) The amount of time in seconds after an error has occurred that tasks may continue to run before being cancelled. |
event_purge_batch_size = 10 |
(Integer) Controls how many events will be pruned whenever a stack’s events exceed max_events_per_stack. Set this lower to keep more events at the expense of more frequent purges. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
host = localhost |
(String) Name of the engine node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. |
keystone_backend = heat.engine.clients.os.keystone.heat_keystoneclient.KsClientWrapper |
(String) Fully qualified class name to use as a keystone backend. |
max_interface_check_attempts = 10 |
(Integer) Number of times to check whether an interface has been attached or detached. |
periodic_interval = 60 |
(Integer) Seconds between running periodic tasks. |
plugin_dirs = /usr/lib64/heat, /usr/lib/heat, /usr/local/lib/heat, /usr/local/lib64/heat |
(List) List of directories to search for plug-ins. |
reauthentication_auth_method = |
(String) Allow reauthentication on token expiry, such that long-running tasks may complete. Note this defeats the expiry of any provided user tokens. |
template_dir = /etc/heat/templates |
(String) The directory to search for template files. |
[constraint_validation_cache] | |
caching = True |
(Boolean) Toggle to enable/disable caching when the Orchestration Engine validates property constraints of a stack. During property validation with constraints, the Orchestration Engine caches requests to other OpenStack services. Please note that the global toggle for oslo.cache (enabled = True in the [cache] group) must be enabled to use this feature. |
expiration_time = 60 |
(Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of validation constraints. |
[resource_finder_cache] | |
caching = True |
(Boolean) Toggle to enable/disable caching when the Orchestration Engine looks for other OpenStack service resources using name or id. Please note that the global toggle for oslo.cache (enabled = True in the [cache] group) must be enabled to use this feature. |
expiration_time = 3600 |
(Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of OpenStack service finder functions. |
[revision] | |
heat_revision = unknown |
(String) Heat build revision. If you would prefer to manage your build revision separately, you can move this section to a different file and add it as another config option. |
[service_extension_cache] | |
caching = True |
(Boolean) Toggle to enable/disable caching when the Orchestration Engine retrieves extensions from other OpenStack services. Please note that the global toggle for oslo.cache (enabled = True in the [cache] group) must be enabled to use this feature. |
expiration_time = 3600 |
(Integer) TTL, in seconds, for any cached item in the dogpile.cache region used for caching of service extensions. |
[volumes] | |
backups_enabled = True |
(Boolean) Indicate if cinder-backup service is enabled. This is a temporary workaround until cinder-backup service becomes discoverable, see LP#1334856. |
[yaql] | |
limit_iterators = 200 |
(Integer) The maximum number of elements a collection expression can take for its evaluation. |
memory_quota = 10000 |
(Integer) The maximum size of memory in bytes that an expression can take for its evaluation. |
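A minimal heat.conf fragment exercising a few of the engine options above might look like this; the values are illustrative, not recommendations:

```ini
[DEFAULT]
# Retry intermittent client errors up to five times.
client_retry_limit = 5
# Additional directories to search for out-of-tree plug-ins.
plugin_dirs = /usr/lib/heat, /usr/local/lib/heat

[resource_finder_cache]
# Cache resource lookups for 30 minutes.
# Requires enabled = True in the [cache] group.
caching = True
expiration_time = 1800
```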
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
auth_encryption_key = notgood but just long enough i t |
(String) Key used to encrypt authentication info in the database. Length of this key must be 32 characters. |
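Because the key must be exactly 32 characters long, a common approach is to generate 16 random bytes as hex, for example with `openssl rand -hex 16`, and paste the result. The value below is a placeholder, not a key to reuse:

```ini
[DEFAULT]
# Exactly 32 characters; replace with your own randomly generated value.
auth_encryption_key = 0123456789abcdef0123456789abcdef
```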
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
loadbalancer_template = None |
(String) Custom template for the built-in loadbalancer nested stack. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
max_events_per_stack = 1000 |
(Integer) Maximum events that will be available per stack. Older events will be deleted when this is reached. Set to 0 for unlimited events per stack. |
max_nested_stack_depth = 5 |
(Integer) Maximum depth allowed when using nested stacks. |
max_resources_per_stack = 1000 |
(Integer) Maximum resources allowed per top-level stack. -1 stands for unlimited. |
max_server_name_length = 53 |
(Integer) Maximum length of a server name to be used in nova. |
max_stacks_per_tenant = 100 |
(Integer) Maximum number of stacks any one tenant may have active at one time. |
max_template_size = 524288 |
(Integer) Maximum raw byte size of any template. |
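To tighten per-tenant limits, these quota options can be overridden in heat.conf; the values below are illustrative:

```ini
[DEFAULT]
# Allow at most 50 active stacks per tenant.
max_stacks_per_tenant = 50
# Cap the resources a single top-level stack may contain.
max_resources_per_stack = 500
# Keep only the 500 most recent events per stack.
max_events_per_stack = 500
```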
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
Configuration option = Default value | Description |
---|---|
[profiler] | |
connection_string = messaging:// |
(String) Connection string for a notifier backend. The default value is messaging://, which sets the notifier to oslo_messaging. |
enabled = False |
(Boolean) Enables profiling for all services on this node. The default value is False (the profiling feature is fully disabled). |
hmac_keys = SECRET_KEY |
(String) Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,...<keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both “enabled” flag and “hmac_keys” config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. |
trace_sqlalchemy = False |
(Boolean) Enables SQL request profiling in services. The default value is False (SQL requests are not traced). |
Configuration option = Default value | Description |
---|---|
[trustee] | |
auth_section = None |
(Unknown) Config section from which to load plugin-specific options. |
auth_type = None |
(Unknown) Authentication type to load |
The corresponding log file of each Orchestration service is stored in the
/var/log/heat/
directory of the host on which each service runs.
Log filename | Service that logs to the file |
---|---|
heat-api.log |
Orchestration service API Service |
heat-engine.log |
Orchestration service Engine Service |
heat-manage.log |
Orchestration service events |
Option = default value | (Type) Help string |
---|---|
[DEFAULT] max_server_name_length = 53 |
(IntOpt) Maximum length of a server name to be used in nova. |
[DEFAULT] template_dir = /etc/heat/templates |
(StrOpt) The directory to search for template files. |
[clients_aodh] ca_file = None |
(StrOpt) Optional CA cert file to use in SSL connections. |
[clients_aodh] cert_file = None |
(StrOpt) Optional PEM-formatted certificate chain file. |
[clients_aodh] endpoint_type = None |
(StrOpt) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
[clients_aodh] insecure = None |
(BoolOpt) If set, then the server’s certificate will not be verified. |
[clients_aodh] key_file = None |
(StrOpt) Optional PEM-formatted file that contains the private key. |
[clients_monasca] ca_file = None |
(StrOpt) Optional CA cert file to use in SSL connections. |
[clients_monasca] cert_file = None |
(StrOpt) Optional PEM-formatted certificate chain file. |
[clients_monasca] endpoint_type = None |
(StrOpt) Type of endpoint in Identity service catalog to use for communication with the OpenStack service. |
[clients_monasca] insecure = None |
(BoolOpt) If set, then the server’s certificate will not be verified. |
[clients_monasca] key_file = None |
(StrOpt) Optional PEM-formatted file that contains the private key. |
[trustee] auth_type = None |
(Opt) Authentication type to load |
[volumes] backups_enabled = True |
(BoolOpt) Indicate if cinder-backup service is enabled. This is a temporary workaround until cinder-backup service becomes discoverable, see LP#1334856. |
[yaql] limit_iterators = 200 |
(IntOpt) The maximum number of elements a collection expression can take for its evaluation. |
[yaql] memory_quota = 10000 |
(IntOpt) The maximum size of memory in bytes that an expression can take for its evaluation. |
Option | Previous default value | New default value |
---|---|---|
[DEFAULT] convergence_engine |
False |
True |
[DEFAULT] keystone_backend |
heat.common.heat_keystoneclient.KeystoneClientV3 |
heat.engine.clients.os.keystone.heat_keystoneclient.KsClientWrapper |
Deprecated option | New Option |
---|---|
[DEFAULT] use_syslog |
None |
The Orchestration service is designed to manage the lifecycle of infrastructure
and applications within OpenStack clouds. Its various agents and services are
configured in the /etc/heat/heat.conf
file.
To install Orchestration, see the Newton Installation Tutorials and Guides for your distribution.
Note
The common configurations for shared service and libraries, such as database connections and RPC messaging, are described at Common configurations.
The following tables provide a comprehensive list of the Telemetry configuration options.
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
api_paste_config = api_paste.ini |
(String) Configuration file for WSGI definition of API. |
event_pipeline_cfg_file = event_pipeline.yaml |
(String) Configuration file for event pipeline definition. |
pipeline_cfg_file = pipeline.yaml |
(String) Configuration file for pipeline definition. |
pipeline_polling_interval = 20 |
(Integer) Polling interval for pipeline file configuration in seconds. |
refresh_event_pipeline_cfg = False |
(Boolean) Refresh Event Pipeline configuration on-the-fly. |
refresh_pipeline_cfg = False |
(Boolean) Refresh Pipeline configuration on-the-fly. |
reserved_metadata_keys = |
(List) List of metadata keys reserved for metering use. These keys are additional to the ones included in the namespace. |
reserved_metadata_length = 256 |
(Integer) Limit on length of reserved metadata values. |
reserved_metadata_namespace = metering. |
(List) List of metadata prefixes reserved for metering use. |
[api] | |
aodh_is_enabled = None |
(Boolean) Set True to redirect alarms URLs to aodh. Default autodetection by querying keystone. |
aodh_url = None |
(String) The endpoint of Aodh to redirect alarms URLs to Aodh API. Default autodetection by querying keystone. |
default_api_return_limit = 100 |
(Integer) Default maximum number of items returned by API request. |
gnocchi_is_enabled = None |
(Boolean) Set True to disable resource/meter/sample URLs. Default autodetection by querying keystone. |
panko_is_enabled = None |
(Boolean) Set True to redirect events URLs to Panko. Default autodetection by querying keystone. |
panko_url = None |
(String) The endpoint of Panko to redirect events URLs to Panko API. Default autodetection by querying keystone. |
pecan_debug = False |
(Boolean) Toggle Pecan Debug Middleware. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
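For instance, a ceilometer.conf fragment that reloads the pipeline on the fly and explicitly redirects alarm URLs to Aodh could look like the following; the Aodh URL is illustrative:

```ini
[DEFAULT]
# Pick up pipeline.yaml changes without restarting the agent.
refresh_pipeline_cfg = True
pipeline_polling_interval = 20

[api]
# Skip Keystone autodetection and redirect alarm URLs directly.
aodh_is_enabled = True
aodh_url = http://controller:8042
```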
Configuration option = Default value | Description |
---|---|
[service_credentials] | |
auth_section = None |
(Unknown) Config section from which to load plugin-specific options. |
auth_type = None |
(Unknown) Authentication type to load |
cafile = None |
(String) PEM-encoded Certificate Authority to use when verifying HTTPS connections. |
certfile = None |
(String) PEM-encoded client certificate file. |
insecure = False |
(Boolean) Verify HTTPS connections. |
interface = public |
(String) Type of endpoint in Identity service catalog to use for communication with OpenStack services. |
keyfile = None |
(String) PEM-encoded client certificate key file. |
region_name = None |
(String) Region name to use for OpenStack service endpoints. |
timeout = None |
(Integer) Timeout value for HTTP requests. |
Configuration option = Default value | Description |
---|---|
[collector] | |
batch_size = 1 |
(Integer) Number of notification messages to wait before dispatching them |
batch_timeout = None |
(Integer) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely) |
udp_address = 0.0.0.0 |
(String) Address to which the UDP socket is bound. Set to an empty string to disable. |
udp_port = 4952 |
(Port number) Port to which the UDP socket is bound. |
workers = 1 |
(Integer) Number of workers for the collector service. The default value is 1. |
[dispatcher_file] | |
backup_count = 0 |
(Integer) The max number of the files to keep. |
file_path = None |
(String) Name and the location of the file to record meters. |
max_bytes = 0 |
(Integer) The max size of the file. |
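As an example, the file dispatcher can be pointed at a rotating log of meters; the path and sizes below are illustrative:

```ini
[dispatcher_file]
# Record meters to this file (illustrative path).
file_path = /var/log/ceilometer/meters.log
# Rotate at 10 MB and keep five old files.
max_bytes = 10485760
backup_count = 5

[collector]
# Dispatch in batches of 50, or after 10 seconds, whichever comes first.
batch_size = 50
batch_timeout = 10
```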
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
batch_polled_samples = True |
(Boolean) To reduce polling agent load, samples are sent to the notification agent in a batch. To gain higher throughput at the cost of load set this to False. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
host = <your_hostname> |
(String) Name of this node, which must be valid in an AMQP key. Can be an opaque identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address. |
http_timeout = 600 |
(Integer) Timeout seconds for HTTP requests. Set it to None to disable timeout. |
polling_namespaces = ['compute', 'central'] |
(Unknown) Polling namespace(s) to be used while polling resources. |
pollster_list = [] |
(Unknown) List of pollsters (or wildcard templates) to be used while polling |
rootwrap_config = /etc/ceilometer/rootwrap.conf |
(String) Path to the rootwrap configuration file to use for running commands as root. |
shuffle_time_before_polling_task = 0 |
(Integer) To reduce large requests at same time to Nova or other components from different compute agents, shuffle start time of polling task. |
[compute] | |
resource_update_interval = 0 |
(Integer) New instances will be discovered periodically based on this option (in seconds). By default, the agent discovers instances according to pipeline polling interval. If option is greater than 0, the instance list to poll will be updated based on this option’s interval. Measurements relating to the instances will match intervals defined in pipeline. |
workload_partitioning = False |
(Boolean) Enable work-load partitioning, allowing multiple compute agents to be run simultaneously. |
[coordination] | |
backend_url = None |
(String) The backend URL to use for distributed coordination. If left empty, per-deployment central agent and per-host compute agent won’t do workload partitioning and will only function correctly if a single instance of that service is running. |
check_watchers = 10.0 |
(Floating point) Number of seconds between checks to see if group membership has changed. |
heartbeat = 1.0 |
(Floating point) Number of seconds between heartbeats for distributed coordination. |
max_retry_interval = 30 |
(Integer) Maximum number of seconds between retry to join partitioning group |
retry_backoff = 1 |
(Integer) Retry backoff factor when retrying to connect with the coordination backend. |
[database] | |
event_connection = None |
(String) The connection string used to connect to the event database. (if unset, connection is used) |
event_time_to_live = -1 |
(Integer) Number of seconds that events are kept in the database for (<= 0 means forever). |
metering_connection = None |
(String) The connection string used to connect to the metering database. (if unset, connection is used) |
metering_time_to_live = -1 |
(Integer) Number of seconds that samples are kept in the database for (<= 0 means forever). |
sql_expire_samples_only = False |
(Boolean) Indicates if expirer expires only samples. If set true, expired samples will be deleted, but residual resource and meter definition data will remain. |
[meter] | |
meter_definitions_cfg_file = meters.yaml |
(String) Configuration file for defining meter notifications. |
[polling] | |
partitioning_group_prefix = None |
(String) Work-load partitioning group prefix. Use only if you want to run multiple polling agents with different config files. For each sub-group of the agent pool with the same partitioning_group_prefix a disjoint subset of pollsters should be loaded. |
[publisher] | |
telemetry_secret = change this for valid signing |
(String) Secret value for signing messages. Set value empty if signing is not required to avoid computational overhead. |
[publisher_notifier] | |
event_topic = event |
(String) The topic that ceilometer uses for event notifications. |
metering_topic = metering |
(String) The topic that ceilometer uses for metering notifications. |
telemetry_driver = messagingv2 |
(String) The driver that ceilometer uses for metering notifications. |
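For example, to disable sample signing (avoiding its computational overhead) and publish metering data with the messagingv2 driver:

```ini
[publisher]
# An empty secret disables message signing.
telemetry_secret =

[publisher_notifier]
telemetry_driver = messagingv2
metering_topic = metering
```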
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
nova_http_log_debug = False |
(Boolean) DEPRECATED: Allow novaclient’s debug log output. (Use default_log_levels instead) |
Configuration option = Default value | Description |
---|---|
[dispatcher_http] | |
event_target = None |
(String) The target for event data to which the HTTP request will be sent. If this is not set, it defaults to the same value as the sample target. |
target = |
(String) The target where the http request will be sent. If this is not set, no data will be posted. For example: target = http://hostname:1234/path |
timeout = 5 |
(Integer) The max time in seconds to wait for a request to timeout. |
verify_ssl = None |
(String) The path to a server certificate or directory if the system CAs are not used or if a self-signed certificate is used. Set to False to ignore SSL cert verification. |
Configuration option = Default value | Description |
---|---|
[event] | |
definitions_cfg_file = event_definitions.yaml |
(String) Configuration file for event definitions. |
drop_unmatched_notifications = False |
(Boolean) Drop notifications if no event definition matches. (Otherwise, we convert them with just the default traits) |
store_raw = [] |
(Multi-valued) Store the raw notification for select priority levels (info and/or error). By default, raw details are not captured. |
[notification] | |
ack_on_event_error = True |
(Boolean) Acknowledge message when event persistence fails. |
workers = 1 |
(Integer) Number of workers for the notification service. The default value is 1. |
workload_partitioning = False |
(Boolean) Enable workload partitioning, allowing multiple notification agents to be run simultaneously. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
ceilometer_control_exchange = ceilometer |
(String) Exchange name for ceilometer notifications. |
cinder_control_exchange = cinder |
(String) Exchange name for Cinder notifications. |
dns_control_exchange = central |
(String) Exchange name for DNS service notifications. |
glance_control_exchange = glance |
(String) Exchange name for Glance notifications. |
heat_control_exchange = heat |
(String) Exchange name for Heat notifications |
http_control_exchanges = ['nova', 'glance', 'neutron', 'cinder'] |
(Multi-valued) Exchange names to listen to for notifications. |
ironic_exchange = ironic |
(String) Exchange name for Ironic notifications. |
keystone_control_exchange = keystone |
(String) Exchange name for Keystone notifications. |
magnum_control_exchange = magnum |
(String) Exchange name for Magnum notifications. |
neutron_control_exchange = neutron |
(String) Exchange name for Neutron notifications. |
nova_control_exchange = nova |
(String) Exchange name for Nova notifications. |
sahara_control_exchange = sahara |
(String) Exchange name for Data Processing notifications. |
sample_source = openstack |
(String) Source for samples emitted on this instance. |
swift_control_exchange = swift |
(String) Exchange name for Swift notifications. |
trove_control_exchange = trove |
(String) Exchange name for DBaaS notifications. |
Configuration option = Default value | Description |
---|---|
[hyperv] | |
force_volumeutils_v1 = False |
(Boolean) DEPRECATED: Force V1 volume utility class |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
hypervisor_inspector = libvirt |
(String) Inspector to use for inspecting the hypervisor layer. Known inspectors are libvirt, hyperv, vmware, xenapi and powervm. |
libvirt_type = kvm |
(String) Libvirt domain type. |
libvirt_uri = |
(String) Override the default libvirt URI (which is dependent on libvirt_type). |
Configuration option = Default value | Description |
---|---|
[ipmi] | |
node_manager_init_retry = 3 |
(Integer) Number of retries upon Intel Node Manager initialization failure |
polling_retry = 3 |
(Integer) Tolerance of IPMI/NM polling failures before disabling this pollster. A negative value indicates retrying forever. |
Configuration option = Default value | Description |
---|---|
[notification] | |
batch_size = 100 |
(Integer) Number of notification messages to wait before publishing them. Batching is advised when transformations are applied in the pipeline. |
batch_timeout = 5 |
(Integer) Number of seconds to wait before publishing samples when batch_size is not reached (None means indefinitely) |
disable_non_metric_meters = True |
(Boolean) WARNING: Ceilometer historically offered the ability to store events as meters. This usage is NOT advised as it can flood the metering database and cause performance degradation. |
messaging_urls = [] |
(Multi-valued) Messaging URLs to listen to for notifications. Example: rabbit://user:pass@host1:port1[,user:pass@hostN:portN]/virtual_host (DEFAULT/transport_url is used if empty). This is useful when you have dedicated messaging nodes for each service, for example, all nova notifications go to rabbit-nova:5672, while all cinder notifications go to rabbit-cinder:5672. |
pipeline_processing_queues = 10 |
(Integer) Number of queues to parallelize workload across. This value should be larger than the number of active notification agents for optimal results. WARNING: Once set, lowering this value may result in lost data. |
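The messaging_urls option is multi-valued, so it is repeated once per broker. A fragment with dedicated per-service brokers might look like this; the host names and credentials are placeholders:

```ini
[notification]
# Listen on separate brokers for nova and cinder notifications.
messaging_urls = rabbit://user:pass@rabbit-nova:5672/
messaging_urls = rabbit://user:pass@rabbit-cinder:5672/
# Batch up to 100 messages or wait at most 5 seconds.
batch_size = 100
batch_timeout = 5
```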
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
Configuration option = Default value | Description |
---|---|
[rgw_admin_credentials] | |
access_key = None |
(String) Access key for Radosgw Admin. |
secret_key = None |
(String) Secret key for Radosgw Admin. |
Configuration option = Default value | Description |
---|---|
[service_types] | |
glance = image |
(String) Glance service type. |
kwapi = energy |
(String) Kwapi service type. |
neutron = network |
(String) Neutron service type. |
neutron_lbaas_version = v2 |
(String) Neutron load balancer version. |
nova = compute |
(String) Nova service type. |
radosgw = object-store |
(String) Radosgw service type. |
swift = object-store |
(String) Swift service type. |
Configuration option = Default value | Description |
---|---|
[storage] | |
max_retries = 10 |
(Integer) Maximum number of connection retries during startup. Set to -1 to specify an infinite retry count. |
retry_interval = 10 |
(Integer) Interval (in seconds) between retries of connection. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
reseller_prefix = AUTH_ |
(String) Swift reseller prefix. Must match the reseller_prefix setting in proxy-server.conf. |
Configuration option = Default value | Description |
---|---|
[hardware] | |
meter_definitions_file = snmp.yaml |
(String) Configuration file for defining hardware snmp meters. |
readonly_user_auth_proto = None |
(String) SNMPd v3 authentication algorithm of all the nodes running in the cloud. |
readonly_user_name = ro_snmp_user |
(String) SNMPd user name of all nodes running in the cloud. |
readonly_user_password = password |
(String) SNMPd v3 authentication password of all the nodes running in the cloud. |
readonly_user_priv_password = None |
(String) SNMPd v3 encryption password of all the nodes running in the cloud. |
readonly_user_priv_proto = None |
(String) SNMPd v3 encryption algorithm of all the nodes running in the cloud. |
url_scheme = snmp:// |
(String) URL scheme to use for hardware nodes. |
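As an illustration, a deployment polling hardware meters over SNMP v3 might set the read-only credentials as follows. This is only a sketch: the user name, passwords, and algorithm names are placeholders and must match the SNMP daemon configuration on every polled node.

```ini
[hardware]
# Hypothetical SNMP v3 read-only credentials; these must agree with
# the snmpd configuration on each node in the cloud.
readonly_user_name = ro_snmp_user
readonly_user_password = SNMP_AUTH_PASS
readonly_user_auth_proto = SNMP_AUTH_ALGORITHM
readonly_user_priv_password = SNMP_PRIV_PASS
readonly_user_priv_proto = SNMP_PRIV_ALGORITHM
```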
Configuration option = Default value | Description |
---|---|
[vmware] | |
api_retry_count = 10 |
(Integer) Number of times a VMware vSphere API may be retried. |
ca_file = None |
(String) CA bundle file to use in verifying the vCenter server certificate. |
host_ip = |
(String) IP address of the VMware vSphere host. |
host_password = |
(String) Password of VMware vSphere. |
host_port = 443 |
(Port number) Port of the VMware vSphere host. |
host_username = |
(String) Username of VMware vSphere. |
insecure = False |
(Boolean) If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. This option is ignored if "ca_file" is set. |
task_poll_interval = 0.5 |
(Floating point) Sleep time in seconds for polling an ongoing async task. |
wsdl_location = None |
(String) Optional VIM service WSDL location, for example http://<server>/vimService.wsdl. Optional override of the default location for bug workarounds. |
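Tying the [vmware] options together, a minimal sketch of a vCenter connection might look like the following. The address, credentials, and CA file path are hypothetical examples, not defaults:

```ini
[vmware]
# Hypothetical vCenter endpoint and credentials
host_ip = 203.0.113.50
host_port = 443
host_username = ceilometer
host_password = VMWARE_PASS
# Verify the vCenter certificate against a local CA bundle
ca_file = /etc/ceilometer/vcenter-ca.pem
insecure = False
```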
Configuration option = Default value | Description |
---|---|
[xenapi] | |
connection_password = None |
(String) Password for connection to XenServer/Xen Cloud Platform. |
connection_url = None |
(String) URL for connection to XenServer/Xen Cloud Platform. |
connection_username = root |
(String) Username for connection to XenServer/Xen Cloud Platform. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
zaqar_control_exchange = zaqar |
(String) Exchange name for Messaging service notifications. |
The following tables provide a comprehensive list of the Telemetry Alarming service configuration options.
Configuration option = Default value | Description |
---|---|
[api] | |
alarm_max_actions = -1 |
(Integer) Maximum count of actions for each state of an alarm, non-positive number means no limit. |
enable_combination_alarms = False |
(Boolean) DEPRECATED: Enable deprecated combination alarms. Combination alarms are deprecated. This option and combination alarms will be removed in Aodh 5.0. |
paste_config = api_paste.ini |
(String) Configuration file for WSGI definition of API. |
pecan_debug = False |
(Boolean) Toggle Pecan Debug Middleware. |
project_alarm_quota = None |
(Integer) Maximum number of alarms defined for a project. |
user_alarm_quota = None |
(Integer) Maximum number of alarms defined for a user. |
workers = 1 |
(Integer) Number of workers for aodh API server. |
[oslo_middleware] | |
enable_proxy_headers_parsing = False |
(Boolean) Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. |
max_request_body_size = 114688 |
(Integer) The maximum body size for each request, in bytes. |
secure_proxy_ssl_header = X-Forwarded-Proto |
(String) DEPRECATED: The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. |
Configuration option = Default value | Description |
---|---|
[DEFAULT] | |
additional_ingestion_lag = 0 |
(Integer) The number of seconds to extend the evaluation windows to compensate the reporting/ingestion lag. |
evaluation_interval = 60 |
(Integer) Period of the evaluation cycle, in seconds; should be >= the configured pipeline interval for collection of the underlying meters. |
event_alarm_cache_ttl = 60 |
(Integer) TTL of event alarm caches, in seconds. Set to 0 to disable caching. |
executor_thread_pool_size = 64 |
(Integer) Size of executor thread pool. |
http_timeout = 600 |
(Integer) Timeout seconds for HTTP requests. Set it to None to disable timeout. |
notifier_topic = alarming |
(String) The topic that aodh uses for alarm notifier messages. |
record_history = True |
(Boolean) Record alarm change events. |
rest_notifier_ca_bundle_certificate_path = None |
(String) SSL CA_BUNDLE certificate for REST notifier |
rest_notifier_certificate_file = |
(String) SSL Client certificate file for REST notifier. |
rest_notifier_certificate_key = |
(String) SSL Client private key file for REST notifier. |
rest_notifier_max_retries = 0 |
(Integer) Number of retries for REST notifier |
rest_notifier_ssl_verify = True |
(Boolean) Whether to verify the SSL Server certificate when calling alarm action. |
[database] | |
alarm_history_time_to_live = -1 |
(Integer) Number of seconds that alarm histories are kept in the database for (<= 0 means forever). |
[evaluator] | |
workers = 1 |
(Integer) Number of workers for the evaluator service. Default value is 1. |
[listener] | |
batch_size = 1 |
(Integer) Number of notification messages to wait before dispatching them. |
batch_timeout = None |
(Integer) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely). |
event_alarm_topic = alarm.all |
(String) The topic that aodh uses for event alarm evaluation. |
workers = 1 |
(Integer) Number of workers for the listener service. Default value is 1. |
[notifier] | |
batch_size = 1 |
(Integer) Number of notification messages to wait before dispatching them. |
batch_timeout = None |
(Integer) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely). |
workers = 1 |
(Integer) Number of workers for the notifier service. Default value is 1. |
[service_credentials] | |
interface = public |
(String) Type of endpoint in Identity service catalog to use for communication with OpenStack services. |
region_name = None |
(String) Region name to use for OpenStack service endpoints. |
[service_types] | |
zaqar = messaging |
(String) Message queue service type. |
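Putting several of the alarming options above together, an aodh.conf that scales out the evaluator and notifier and batches listener notifications might look like this sketch. The worker counts and batch values are illustrative, not recommendations; tune them to your deployment:

```ini
[evaluator]
# Run several evaluation workers in parallel (example value)
workers = 4

[listener]
workers = 2
# Dispatch after 10 messages, or after 5 seconds if fewer arrive
batch_size = 10
batch_timeout = 5

[notifier]
workers = 2
```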
Configuration option = Default value | Description |
---|---|
[coordination] | |
backend_url = None |
(String) The backend URL to use for distributed coordination. If left empty, per-deployment central agent and per-host compute agent won’t do workload partitioning and will only function correctly if a single instance of that service is running. |
check_watchers = 10.0 |
(Floating point) Number of seconds between checks to see if group membership has changed |
heartbeat = 1.0 |
(Floating point) Number of seconds between heartbeats for distributed coordination. |
max_retry_interval = 30 |
(Integer) Maximum number of seconds between retry to join partitioning group |
retry_backoff = 1 |
(Integer) Retry backoff factor when retrying to connect with coordination backend |
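The backend_url option takes a coordination backend URL. For example, to let multiple agents partition their workload through a Redis server, a sketch might look like the following (the controller host name is a placeholder):

```ini
[coordination]
# Hypothetical Redis-backed coordination URL; leaving this empty
# disables workload partitioning entirely.
backend_url = redis://controller:6379
heartbeat = 1.0
check_watchers = 10.0
```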
Configuration option = Default value | Description |
---|---|
[matchmaker_redis] | |
check_timeout = 20000 |
(Integer) Time in ms to wait before the transaction is killed. |
host = 127.0.0.1 |
(String) DEPRECATED: Host to locate redis. Replaced by [DEFAULT]/transport_url |
password = |
(String) DEPRECATED: Password for Redis server (optional). Replaced by [DEFAULT]/transport_url |
port = 6379 |
(Port number) DEPRECATED: Use this port to connect to redis host. Replaced by [DEFAULT]/transport_url |
sentinel_group_name = oslo-messaging-zeromq |
(String) Redis replica set name. |
sentinel_hosts = |
(List) DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g. [host:port, host1:port ... ] Replaced by [DEFAULT]/transport_url |
socket_timeout = 10000 |
(Integer) Timeout in ms on blocking socket operations |
wait_timeout = 2000 |
(Integer) Time in ms to wait between connection attempts. |
The corresponding log file of each Telemetry service is stored in the
/var/log/ceilometer/
directory of the host on which each service runs.
Log filename | Service that logs to the file |
---|---|
agent-notification.log |
Telemetry service notification agent |
alarm-evaluator.log |
Telemetry service alarm evaluation |
alarm-notifier.log |
Telemetry service alarm notification |
api.log |
Telemetry service API |
ceilometer-dbsync.log |
Informational messages |
central.log |
Telemetry service central agent |
collector.log |
Telemetry service collection |
compute.log |
Telemetry service compute agent |
All the files in this section can be found in the /etc/ceilometer/
directory.
The configuration for the Telemetry services and agents is found in the
ceilometer.conf
file.
This file must be modified after installation.
[DEFAULT]
#
# From ceilometer
#
# To reduce polling agent load, samples are sent to the notification agent in a
# batch. To gain higher throughput at the cost of load set this to False.
# (boolean value)
#batch_polled_samples = true
# To reduce large requests at same time to Nova or other components from
# different compute agents, shuffle start time of polling task. (integer value)
#shuffle_time_before_polling_task = 0
# Configuration file for WSGI definition of API. (string value)
#api_paste_config = api_paste.ini
# Polling namespace(s) to be used while resource polling (list value)
# Allowed values: compute, central, ipmi
#polling_namespaces = compute,central
# List of pollsters (or wildcard templates) to be used while polling (list
# value)
#pollster_list =
# Exchange name for Nova notifications. (string value)
#nova_control_exchange = nova
# List of metadata prefixes reserved for metering use. (list value)
#reserved_metadata_namespace = metering.
# Limit on length of reserved metadata values. (integer value)
#reserved_metadata_length = 256
# List of metadata keys reserved for metering use. And these keys are
# additional to the ones included in the namespace. (list value)
#reserved_metadata_keys =
# Inspector to use for inspecting the hypervisor layer. Known inspectors are
# libvirt, hyperv, vmware, xenapi and powervm. (string value)
#hypervisor_inspector = libvirt
# Libvirt domain type. (string value)
# Allowed values: kvm, lxc, qemu, uml, xen
#libvirt_type = kvm
# Override the default libvirt URI (which is dependent on libvirt_type).
# (string value)
#libvirt_uri =
# Dispatchers to process metering data. (multi valued)
# Deprecated group/name - [DEFAULT]/dispatcher
#meter_dispatchers = database
# Dispatchers to process event data. (multi valued)
# Deprecated group/name - [DEFAULT]/dispatcher
#event_dispatchers =
# Exchange name for Ironic notifications. (string value)
#ironic_exchange = ironic
# Exchanges name to listen for notifications. (multi valued)
#http_control_exchanges = nova
#http_control_exchanges = glance
#http_control_exchanges = neutron
#http_control_exchanges = cinder
# Exchange name for Neutron notifications. (string value)
#neutron_control_exchange = neutron
# DEPRECATED: Allow novaclient's debug log output. (Use default_log_levels
# instead) (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#nova_http_log_debug = false
# Swift reseller prefix. Must be on par with reseller_prefix in proxy-
# server.conf. (string value)
#reseller_prefix = AUTH_
# Configuration file for pipeline definition. (string value)
#pipeline_cfg_file = pipeline.yaml
# Configuration file for event pipeline definition. (string value)
#event_pipeline_cfg_file = event_pipeline.yaml
# Refresh Pipeline configuration on-the-fly. (boolean value)
#refresh_pipeline_cfg = false
# Refresh Event Pipeline configuration on-the-fly. (boolean value)
#refresh_event_pipeline_cfg = false
# Polling interval for pipeline file configuration in seconds. (integer value)
#pipeline_polling_interval = 20
# Source for samples emitted on this instance. (string value)
#sample_source = openstack
# Name of this node, which must be valid in an AMQP key. Can be an opaque
# identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address.
# (string value)
#host = <your_hostname>
# Timeout seconds for HTTP requests. Set it to None to disable timeout.
# (integer value)
#http_timeout = 600
# Path to the rootwrap configuration file to use for running commands as root
# (string value)
#rootwrap_config = /etc/ceilometer/rootwrap.conf
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
# DEPRECATED: If set to false, the logging level will be set to WARNING instead
# of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any
# existing logging configuration files. For details about logging configuration
# files, see the Python logging module documentation. Note that when logging
# configuration files are used then all logging configuration is set in the
# configuration file and other logging configuration options are ignored (for
# example, logging_context_format_string). (string value)
# Note: This option can be changed without restarting.
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default:
# %(default)s . This option is ignored if log_config_append is set. (string
# value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set,
# logging will go to stderr as defined by use_stderr. This option is ignored if
# log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option
# is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or
# removed this handler will open a new log file with specified path
# instantaneously. It makes sense only if log_file option is specified and
# Linux platform is used. This option is ignored if log_config_append is set.
# (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be
# changed later to honor RFC5424. This option is ignored if log_config_append
# is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if
# log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is
# set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string
# value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message
# is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in
# logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored
# if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string
# value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string
# value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
#
# From oslo.messaging
#
# Size of RPC connection pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_conn_pool_size
#rpc_conn_pool_size = 30
# The pool size limit for connections expiration policy (integer value)
#conn_pool_min_size = 2
# The time-to-live in sec of idle connections in the pool (integer value)
#conn_pool_ttl = 1200
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about existing target
# ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimal port number for random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximal port number for random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in zmq socket. True means not keeping
# a queue when server side disconnects. False means to keep queue and messages
# even if server is disconnected, when the server appears we send all
# accumulated messages to it. (boolean value)
#zmq_immediate = false
# Size of executor thread pool. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
#executor_thread_pool_size = 64
# Seconds to wait for a response from a call. (integer value)
#rpc_response_timeout = 60
# A URL representing the messaging driver to use and its full configuration.
# (string value)
#transport_url = <None>
# DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers
# include amqp and zmq. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rpc_backend = rabbit
# The default exchange under which topics are scoped. May be overridden by an
# exchange name specified in the transport_url option. (string value)
#control_exchange = openstack
#
# From oslo.service.service
#
# Enable eventlet backdoor. Acceptable values are 0, <port>, and
# <start>:<end>, where 0 results in listening on a random tcp port number;
# <port> results in listening on the specified port number (and not enabling
# backdoor if that port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range of port numbers.
# The chosen port is displayed in the service's log file. (string value)
#backdoor_port = <None>
# Enable eventlet backdoor, using the provided path as a unix socket that can
# receive connections. This option is mutually exclusive with 'backdoor_port'
# in that only one should be provided. If both are provided then the existence
# of this option overrides the usage of that option. (string value)
#backdoor_socket = <None>
# Enables or disables logging values of all registered options when starting a
# service (at DEBUG level). (boolean value)
#log_options = true
# Specify a timeout after which a gracefully shutdown server will exit. Zero
# value means endless wait. (integer value)
#graceful_shutdown_timeout = 60
[api]
#
# From ceilometer
#
# Toggle Pecan Debug Middleware. (boolean value)
#pecan_debug = false
# Default maximum number of items returned by API request. (integer value)
# Minimum value: 1
#default_api_return_limit = 100
[collector]
#
# From ceilometer
#
# Address to which the UDP socket is bound. Set to an empty string to disable.
# (string value)
#udp_address = 0.0.0.0
# Port to which the UDP socket is bound. (port value)
# Minimum value: 0
# Maximum value: 65535
#udp_port = 4952
# Number of notification messages to wait before dispatching them (integer
# value)
#batch_size = 1
# Number of seconds to wait before dispatching samples when batch_size is not
# reached (None means indefinitely) (integer value)
#batch_timeout = <None>
# Number of workers for collector service. default value is 1. (integer value)
# Minimum value: 1
# Deprecated group/name - [DEFAULT]/collector_workers
#workers = 1
[compute]
#
# From ceilometer
#
# Enable workload partitioning, allowing multiple compute agents to be run
# simultaneously. (boolean value)
#workload_partitioning = false
# New instances will be discovered periodically based on this option (in
# seconds). By default, the agent discovers instances according to pipeline
# polling interval. If option is greater than 0, the instance list to poll will
# be updated based on this option's interval. Measurements relating to the
# instances will match intervals defined in pipeline. (integer value)
# Minimum value: 0
#resource_update_interval = 0
[coordination]
#
# From ceilometer
#
# The backend URL to use for distributed coordination. If left empty, per-
# deployment central agent and per-host compute agent won't do workload
# partitioning and will only function correctly if a single instance of that
# service is running. (string value)
#backend_url = <None>
# Number of seconds between heartbeats for distributed coordination. (floating
# point value)
#heartbeat = 1.0
# Number of seconds between checks to see if group membership has changed
# (floating point value)
#check_watchers = 10.0
# Retry backoff factor when retrying to connect with coordination backend
# (integer value)
#retry_backoff = 1
# Maximum number of seconds between retry to join partitioning group (integer
# value)
#max_retry_interval = 30
[cors]
#
# From oslo.middleware.cors
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-Openstack-Request-Id
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-Openstack-Request-Id
[cors.subdomain]
#
# From oslo.middleware.cors
#
# Indicate whether this resource may be shared with the domain received in the
# request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing
# slash. Example: https://horizon.example.com (list value)
#allowed_origin = <None>
# Indicate that the actual request can include user credentials (boolean value)
#allow_credentials = true
# Indicate which headers are safe to expose to the API. Defaults to HTTP Simple
# Headers. (list value)
#expose_headers = X-Auth-Token,X-Subject-Token,X-Service-Token,X-Openstack-Request-Id
# Maximum cache age of CORS preflight requests. (integer value)
#max_age = 3600
# Indicate which methods can be used during the actual request. (list value)
#allow_methods = GET,PUT,POST,DELETE,PATCH
# Indicate which header field names may be used during the actual request.
# (list value)
#allow_headers = X-Auth-Token,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id,X-Openstack-Request-Id
[database]
#
# From ceilometer
#
# Number of seconds that samples are kept in the database for (<= 0 means
# forever). (integer value)
# Deprecated group/name - [database]/time_to_live
#metering_time_to_live = -1
# Number of seconds that events are kept in the database for (<= 0 means
# forever). (integer value)
#event_time_to_live = -1
# The connection string used to connect to the metering database. (if unset,
# connection is used) (string value)
#metering_connection = <None>
# The connection string used to connect to the event database. (if unset,
# connection is used) (string value)
#event_connection = <None>
# Indicates whether the expirer expires only samples. If set to true, expired samples will
# be deleted, but residual resource and meter definition data will remain.
# (boolean value)
#sql_expire_samples_only = false
#
# From oslo.db
#
# DEPRECATED: The file name to use with SQLite. (string value)
# Deprecated group/name - [DEFAULT]/sqlite_db
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Should use config option connection or slave_connection to connect
# the database.
#sqlite_db = oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
# Deprecated group/name - [DEFAULT]/sqlite_synchronous
#sqlite_synchronous = true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend = sqlalchemy
# The SQLAlchemy connection string to use to connect to the database. (string
# value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection = <None>
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
# The SQL mode to be used for MySQL sessions. This option, including the
# default, overrides any server-set SQL mode. To use whatever SQL mode is set
# by the server configuration, set this to no value. Example: mysql_sql_mode=
# (string value)
#mysql_sql_mode = TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool. Setting a value of
# 0 indicates no limit. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = 5
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
# Minimum value: 0
# Maximum value: 100
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug = 0
# Add Python stack traces to SQL as comment strings. (boolean value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace = false
# If set, use this value for pool_timeout with SQLAlchemy. (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout = <None>
# Enable the experimental use of database reconnect on connection lost.
# (boolean value)
#use_db_reconnect = false
# Seconds between retries of a database transaction. (integer value)
#db_retry_interval = 1
# If True, increases the interval between retries of a database operation up to
# db_max_retry_interval. (boolean value)
#db_inc_retry_interval = true
# If db_inc_retry_interval is set, the maximum seconds between retries of a
# database operation. (integer value)
#db_max_retry_interval = 10
# Maximum retries in case of connection error or deadlock error before error is
# raised. Set to -1 to specify an infinite retry count. (integer value)
#db_max_retries = 20
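The `[database]` options above use the standard INI `key=value` form described earlier. As a minimal sketch of how such options parse (using Python's standard `configparser` on a hypothetical inline snippet, not a real `ceilometer.conf`), note that integer options come back as strings and must be cast explicitly:

```python
import configparser

# Hypothetical snippet mirroring a few of the [database] options above.
sample = """
[database]
max_pool_size = 5
max_retries = 10
retry_interval = 10
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# configparser returns strings; integer options must be cast explicitly.
max_pool_size = cfg.getint("database", "max_pool_size")
retry_interval = cfg.getint("database", "retry_interval")
```

In a real deployment these options are consumed through oslo.config, which performs the type conversion declared by each `IntOpt`/`BoolOpt` automatically; the sketch only illustrates the underlying file format.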
[dispatcher_file]
#
# From ceilometer
#
# Name and location of the file in which to record meters. (string value)
#file_path = <None>
# The maximum size of the file. (integer value)
#max_bytes = 0
# The maximum number of files to keep. (integer value)
#backup_count = 0
[dispatcher_gnocchi]
#
# From ceilometer
#
# Filter out samples generated by Gnocchi service activity (boolean value)
#filter_service_activity = true
# Gnocchi project used to filter out samples generated by Gnocchi service
# activity (string value)
#filter_project = gnocchi
# The archive policy to use when the dispatcher creates a new metric. (string
# value)
#archive_policy = <None>
# The YAML file that defines the mapping between samples and Gnocchi
# resources/metrics. (string value)
#resources_definition_file = gnocchi_resources.yaml
[event]
#
# From ceilometer
#
# Configuration file for event definitions. (string value)
#definitions_cfg_file = event_definitions.yaml
# Drop notifications if no event definition matches. (Otherwise, we convert
# them with just the default traits) (boolean value)
#drop_unmatched_notifications = false
# Store the raw notification for select priority levels (info and/or error). By
# default, raw details are not captured. (multi valued)
#store_raw =
[exchange_control]
#
# From ceilometer
#
# Exchange name for Heat notifications (string value)
#heat_control_exchange = heat
# Exchange name for Glance notifications. (string value)
#glance_control_exchange = glance
# Exchange name for Keystone notifications. (string value)
#keystone_control_exchange = keystone
# Exchange name for Cinder notifications. (string value)
#cinder_control_exchange = cinder
# Exchange name for Data Processing notifications. (string value)
#sahara_control_exchange = sahara
# Exchange name for Swift notifications. (string value)
#swift_control_exchange = swift
# Exchange name for Magnum notifications. (string value)
#magnum_control_exchange = magnum
# Exchange name for DBaaS notifications. (string value)
#trove_control_exchange = trove
# Exchange name for Messaging service notifications. (string value)
#zaqar_control_exchange = zaqar
# Exchange name for DNS service notifications. (string value)
#dns_control_exchange = central
[hardware]
#
# From ceilometer
#
# URL scheme to use for hardware nodes. (string value)
#url_scheme = snmp://
# SNMPd user name of all nodes running in the cloud. (string value)
#readonly_user_name = ro_snmp_user
# SNMPd v3 authentication password of all the nodes running in the cloud.
# (string value)
#readonly_user_password = password
# SNMPd v3 authentication algorithm of all the nodes running in the cloud
# (string value)
# Allowed values: md5, sha
#readonly_user_auth_proto = <None>
# SNMPd v3 encryption algorithm of all the nodes running in the cloud (string
# value)
# Allowed values: des, aes128, 3des, aes192, aes256
#readonly_user_priv_proto = <None>
# SNMPd v3 encryption password of all the nodes running in the cloud. (string
# value)
#readonly_user_priv_password = <None>
[ipmi]
#
# From ceilometer
#
# Number of retries upon Intel Node Manager initialization failure (integer
# value)
#node_manager_init_retry = 3
# Tolerance of IPMI/NM polling failures before disabling this pollster. A
# negative value indicates retrying forever. (integer value)
#polling_retry = 3
[keystone_authtoken]
#
# From keystonemiddleware.auth_token
#
# Complete "public" Identity API endpoint. This endpoint should not be an
# "admin" endpoint, as it should be accessible by all end users.
# Unauthenticated clients are redirected to this endpoint to authenticate.
# Although this endpoint should ideally be unversioned, client support in the
# wild varies. If you're using a versioned v2 endpoint here, then this should
# *not* be the same endpoint the service user utilizes for validating tokens,
# because normal end users may not be able to reach that endpoint. (string
# value)
#auth_uri = <None>
# API version of the admin Identity API endpoint. (string value)
#auth_version = <None>
# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false
# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = <None>
# How many times to retry reconnecting when communicating with the Identity
# API server. (integer value)
#http_request_max_retries = 3
# Request environment key where the Swift cache object is stored. When
# auth_token middleware is deployed with a Swift cache, use this option to have
# the middleware share a caching backend with swift. Otherwise, use the
# ``memcached_servers`` option instead. (string value)
#cache = <None>
# Required if the identity server requires a client certificate. (string
# value)
#certfile = <None>
# Required if the identity server requires a client certificate. (string
# value)
#keyfile = <None>
# A PEM encoded Certificate Authority to use when verifying HTTPS connections.
# Defaults to system CAs. (string value)
#cafile = <None>
# Verify HTTPS connections. (boolean value)
#insecure = false
# The region in which the identity server can be found. (string value)
#region_name = <None>
# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = <None>
# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [keystone_authtoken]/memcache_servers
#memcached_servers = <None>
# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300
# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance. Only
# valid for PKI tokens. (integer value)
#revocation_cache_time = 10
# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
# in the cache. If ENCRYPT, token data is encrypted and authenticated in the
# cache. If the value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
# Allowed values: None, MAC, ENCRYPT
#memcache_security_strategy = None
# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = <None>
# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300
# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10
# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3
# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60
# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10
# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false
# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true
# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding; "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it
# if not; "strict", like "permissive" but rejecting the token if the bind type
# is unknown; "required" to require some form of token binding; or the name of
# a binding method that must be present in tokens. (string value)
#enforce_token_bind = permissive
# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false
# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will be
# stored in the cache. This will typically be set to multiple values only while
# migrating from a less secure algorithm to a more secure one. Once all the old
# tokens are expired this option should be set to a single value for better
# performance. (list value)
#hash_algorithms = md5
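The `hash_algorithms` behavior described above (try the configured algorithms in order, using names supported by `hashlib.new()`) can be sketched as follows. The `hash_token` helper and its `algorithms` parameter are illustrative only, not the actual keystonemiddleware API:

```python
import hashlib

# Sketch of the documented behavior: try the configured algorithms in
# order and use the first one this Python build supports.
def hash_token(token, algorithms=("md5", "sha256")):
    for name in algorithms:
        try:
            # hashlib.new accepts any algorithm name supported by the
            # local OpenSSL build.
            return hashlib.new(name, token.encode()).hexdigest()
        except ValueError:
            continue  # algorithm unavailable; fall through to the next
    raise ValueError("no usable hash algorithm configured")
```

This mirrors why the option docs recommend putting the preferred algorithm first: the first usable entry determines the cache key.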
# Authentication type to load (string value)
# Deprecated group/name - [keystone_authtoken]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
[matchmaker_redis]
#
# From oslo.messaging
#
# DEPRECATED: Host to locate redis. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#host = 127.0.0.1
# DEPRECATED: Use this port to connect to redis host. (port value)
# Minimum value: 0
# Maximum value: 65535
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#port = 6379
# DEPRECATED: Password for Redis server (optional). (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#password =
# DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode) e.g.
# [host:port, host1:port ... ] (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#sentinel_hosts =
# Redis replica set name. (string value)
#sentinel_group_name = oslo-messaging-zeromq
# Time in ms to wait between connection attempts. (integer value)
#wait_timeout = 2000
# Time in ms to wait before the transaction is killed. (integer value)
#check_timeout = 20000
# Timeout in ms on blocking socket operations (integer value)
#socket_timeout = 10000
[meter]
#
# From ceilometer
#
# Configuration file for defining meter notifications. (string value)
#meter_definitions_cfg_file = meters.yaml
[notification]
#
# From ceilometer
#
# Number of queues to parallelize workload across. This value should be larger
# than the number of active notification agents for optimal results. WARNING:
# Once set, lowering this value may result in lost data. (integer value)
# Minimum value: 1
#pipeline_processing_queues = 10
# Acknowledge message when event persistence fails. (boolean value)
# Deprecated group/name - [collector]/ack_on_event_error
#ack_on_event_error = true
# WARNING: Ceilometer historically offered the ability to store events as
# meters. This usage is NOT advised as it can flood the metering database and
# cause performance degradation. (boolean value)
#disable_non_metric_meters = true
# Enable workload partitioning, allowing multiple notification agents to be run
# simultaneously. (boolean value)
#workload_partitioning = false
# Messaging URLs to listen for notifications. Example:
# rabbit://user:pass@host1:port1[,user:pass@hostN:portN]/virtual_host
# (DEFAULT/transport_url is used if empty). This is useful when you have
# dedicated messaging nodes for each service, for example, all nova
# notifications go to rabbit-nova:5672, while all cinder notifications go to
# rabbit-cinder:5672. (multi valued)
#messaging_urls =
# Number of notification messages to wait for before publishing them. Batching
# is advised when transformations are applied in the pipeline. (integer value)
# Minimum value: 1
#batch_size = 100
# Number of seconds to wait before publishing samples when batch_size is not
# reached (None means wait indefinitely). (integer value)
#batch_timeout = 5
# Number of workers for notification service, default value is 1. (integer
# value)
# Minimum value: 1
# Deprecated group/name - [DEFAULT]/notification_workers
#workers = 1
[oslo_concurrency]
#
# From oslo.concurrency
#
# Enables or disables inter-process locks. (boolean value)
# Deprecated group/name - [DEFAULT]/disable_process_locking
#disable_process_locking = false
# Directory to use for lock files. For security, the specified directory
# should only be writable by the user running the processes that need locking.
# Defaults to environment variable OSLO_LOCK_PATH. If external locks are used,
# a lock path must be set. (string value)
# Deprecated group/name - [DEFAULT]/lock_path
#lock_path = <None>
[oslo_messaging_amqp]
#
# From oslo.messaging
#
# Name for the AMQP container. Must be globally unique. Defaults to a
# generated UUID. (string value)
# Deprecated group/name - [amqp1]/container_name
#container_name = <None>
# Timeout for inactive connections (in seconds) (integer value)
# Deprecated group/name - [amqp1]/idle_timeout
#idle_timeout = 0
# Debug: dump AMQP frames to stdout (boolean value)
# Deprecated group/name - [amqp1]/trace
#trace = false
# CA certificate PEM file to verify server certificate (string value)
# Deprecated group/name - [amqp1]/ssl_ca_file
#ssl_ca_file =
# Identifying certificate PEM file to present to clients (string value)
# Deprecated group/name - [amqp1]/ssl_cert_file
#ssl_cert_file =
# Private key PEM file used to sign cert_file certificate (string value)
# Deprecated group/name - [amqp1]/ssl_key_file
#ssl_key_file =
# Password for decrypting ssl_key_file (if encrypted) (string value)
# Deprecated group/name - [amqp1]/ssl_key_password
#ssl_key_password = <None>
# Accept clients using either SSL or plain TCP (boolean value)
# Deprecated group/name - [amqp1]/allow_insecure_clients
#allow_insecure_clients = false
# Space separated list of acceptable SASL mechanisms (string value)
# Deprecated group/name - [amqp1]/sasl_mechanisms
#sasl_mechanisms =
# Path to directory that contains the SASL configuration (string value)
# Deprecated group/name - [amqp1]/sasl_config_dir
#sasl_config_dir =
# Name of configuration file (without .conf suffix) (string value)
# Deprecated group/name - [amqp1]/sasl_config_name
#sasl_config_name =
# User name for message broker authentication (string value)
# Deprecated group/name - [amqp1]/username
#username =
# Password for message broker authentication (string value)
# Deprecated group/name - [amqp1]/password
#password =
# Seconds to pause before attempting to re-connect. (integer value)
# Minimum value: 1
#connection_retry_interval = 1
# Increase the connection_retry_interval by this many seconds after each
# unsuccessful failover attempt. (integer value)
# Minimum value: 0
#connection_retry_backoff = 2
# Maximum limit for connection_retry_interval + connection_retry_backoff
# (integer value)
# Minimum value: 1
#connection_retry_interval_max = 30
# Time to pause between re-connecting an AMQP 1.0 link that failed due to a
# recoverable error. (integer value)
# Minimum value: 1
#link_retry_delay = 10
# The deadline for an rpc reply message delivery. Only used when caller does
# not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_reply_timeout = 30
# The deadline for an rpc cast or call message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_send_timeout = 30
# The deadline for a sent notification message delivery. Only used when caller
# does not provide a timeout expiry. (integer value)
# Minimum value: 5
#default_notify_timeout = 30
# Indicates the addressing mode used by the driver.
# Permitted values:
# 'legacy' - use legacy non-routable addressing
# 'routable' - use routable addresses
# 'dynamic' - use legacy addresses if the message bus does not support
# routing, otherwise use routable addressing (string value)
#addressing_mode = dynamic
# Address prefix used when sending to a specific server (string value)
# Deprecated group/name - [amqp1]/server_request_prefix
#server_request_prefix = exclusive
# Address prefix used when broadcasting to all servers (string value)
# Deprecated group/name - [amqp1]/broadcast_prefix
#broadcast_prefix = broadcast
# Address prefix used when sending to any server in the group (string value)
# Deprecated group/name - [amqp1]/group_request_prefix
#group_request_prefix = unicast
# Address prefix for all generated RPC addresses (string value)
#rpc_address_prefix = openstack.org/om/rpc
# Address prefix for all generated Notification addresses (string value)
#notify_address_prefix = openstack.org/om/notify
# Appended to the address prefix when sending a fanout message. Used by the
# message bus to identify fanout messages. (string value)
#multicast_address = multicast
# Appended to the address prefix when sending to a particular RPC/Notification
# server. Used by the message bus to identify messages sent to a single
# destination. (string value)
#unicast_address = unicast
# Appended to the address prefix when sending to a group of consumers. Used by
# the message bus to identify messages that should be delivered in a round-
# robin fashion across consumers. (string value)
#anycast_address = anycast
# Exchange name used in notification addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_notification_exchange if set
# else control_exchange if set
# else 'notify' (string value)
#default_notification_exchange = <None>
# Exchange name used in RPC addresses.
# Exchange name resolution precedence:
# Target.exchange if set
# else default_rpc_exchange if set
# else control_exchange if set
# else 'rpc' (string value)
#default_rpc_exchange = <None>
# Window size for incoming RPC Reply messages. (integer value)
# Minimum value: 1
#reply_link_credit = 200
# Window size for incoming RPC Request messages (integer value)
# Minimum value: 1
#rpc_server_credit = 100
# Window size for incoming Notification messages (integer value)
# Minimum value: 1
#notify_server_credit = 100
[oslo_messaging_notifications]
#
# From oslo.messaging
#
# The driver(s) to handle sending notifications. Possible values are:
# messaging, messagingv2, routing, log, test, noop (multi valued)
# Deprecated group/name - [DEFAULT]/notification_driver
#driver =
# A URL representing the messaging driver to use for notifications. If not set,
# we fall back to the same configuration used for RPC. (string value)
# Deprecated group/name - [DEFAULT]/notification_transport_url
#transport_url = <None>
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
# Deprecated group/name - [DEFAULT]/notification_topics
#topics = notifications
[oslo_messaging_rabbit]
#
# From oslo.messaging
#
# Use durable queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_durable_queues
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues = false
# Auto-delete queues in AMQP. (boolean value)
# Deprecated group/name - [DEFAULT]/amqp_auto_delete
#amqp_auto_delete = false
# SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
# SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
# distributions. (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_version
#kombu_ssl_version =
# SSL key file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_keyfile
#kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_certfile
#kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled). (string value)
# Deprecated group/name - [DEFAULT]/kombu_ssl_ca_certs
#kombu_ssl_ca_certs =
# How long to wait before reconnecting in response to an AMQP consumer cancel
# notification. (floating point value)
# Deprecated group/name - [DEFAULT]/kombu_reconnect_delay
#kombu_reconnect_delay = 1.0
# EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will
# not be used. This option may not be available in future versions. (string
# value)
#kombu_compression = <None>
# How long to wait for a missing client before abandoning the attempt to send
# it its replies. This value should not be longer than rpc_response_timeout.
# (integer value)
# Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
#kombu_missing_consumer_retry_timeout = 60
# Determines how the next RabbitMQ node is chosen in case the one we are
# currently connected to becomes unavailable. Takes effect only if more than
# one RabbitMQ node is provided in config. (string value)
# Allowed values: round-robin, shuffle
#kombu_failover_strategy = round-robin
# DEPRECATED: The RabbitMQ broker address where a single node is used. (string
# value)
# Deprecated group/name - [DEFAULT]/rabbit_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_host = localhost
# DEPRECATED: The RabbitMQ broker port where a single node is used. (port
# value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rabbit_port
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_port = 5672
# DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value)
# Deprecated group/name - [DEFAULT]/rabbit_hosts
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_hosts = $rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_use_ssl
#rabbit_use_ssl = false
# DEPRECATED: The RabbitMQ userid. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_userid
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_userid = guest
# DEPRECATED: The RabbitMQ password. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_password
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_password = guest
# The RabbitMQ login method. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_login_method
#rabbit_login_method = AMQPLAIN
# DEPRECATED: The RabbitMQ virtual host. (string value)
# Deprecated group/name - [DEFAULT]/rabbit_virtual_host
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Replaced by [DEFAULT]/transport_url
#rabbit_virtual_host = /
# How frequently to retry connecting with RabbitMQ. (integer value)
#rabbit_retry_interval = 1
# How long to back off between retries when connecting to RabbitMQ. (integer
# value)
# Deprecated group/name - [DEFAULT]/rabbit_retry_backoff
#rabbit_retry_backoff = 2
# Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
# (integer value)
#rabbit_interval_max = 30
# DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
# Deprecated group/name - [DEFAULT]/rabbit_max_retries
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#rabbit_max_retries = 0
# Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
# option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
# is no longer controlled by the x-ha-policy argument when declaring a queue.
# If you just want to make sure that all queues (except those with auto-
# generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy
# HA '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_ha_queues
#rabbit_ha_queues = false
# Positive integer representing duration in seconds for queue TTL (x-expires).
# Queues which are unused for the duration of the TTL are automatically
# deleted. The parameter affects only reply and fanout queues. (integer value)
# Minimum value: 1
#rabbit_transient_queues_ttl = 1800
# Specifies the number of messages to prefetch. Setting to zero allows
# unlimited messages. (integer value)
#rabbit_qos_prefetch_count = 0
# Number of seconds after which the RabbitMQ broker is considered down if the
# heartbeat keep-alive fails (0 disables the heartbeat). EXPERIMENTAL (integer
# value)
#heartbeat_timeout_threshold = 60
# How many times during the heartbeat_timeout_threshold to check the
# heartbeat. (integer value)
#heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
#fake_rabbit = false
# Maximum number of channels to allow (integer value)
#channel_max = <None>
# The maximum byte size for an AMQP frame (integer value)
#frame_max = <None>
# How often to send heartbeats for consumers' connections (integer value)
#heartbeat_interval = 3
# Enable SSL (boolean value)
#ssl = <None>
# Arguments passed to ssl.wrap_socket (dict value)
#ssl_options = <None>
# Set the socket timeout in seconds for the connection's socket (floating
# point value)
#socket_timeout = 0.25
# Set TCP_USER_TIMEOUT in seconds for the connection's socket (floating point
# value)
#tcp_user_timeout = 0.25
# Set the delay for reconnecting to a host that has a connection error.
# (floating point value)
#host_connection_reconnect_delay = 0.25
# Connection factory implementation (string value)
# Allowed values: new, single, read_write
#connection_factory = single
# Maximum number of connections to keep queued. (integer value)
#pool_max_size = 30
# Maximum number of connections to create above `pool_max_size`. (integer
# value)
#pool_max_overflow = 0
# Default number of seconds to wait for a connection to become available.
# (integer value)
#pool_timeout = 30
# Lifetime of a connection (since creation) in seconds or None for no
# recycling. Expired connections are closed on acquire. (integer value)
#pool_recycle = 600
# Threshold at which inactive (since release) connections are considered stale
# in seconds or None for no staleness. Stale connections are closed on acquire.
# (integer value)
#pool_stale = 60
# Persist notification messages. (boolean value)
#notification_persistence = false
# Exchange name for sending notifications (string value)
#default_notification_exchange = ${control_exchange}_notification
# Maximum number of unacknowledged messages that RabbitMQ can send to the
# notification listener. (integer value)
#notification_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problems while sending a
# notification; -1 means infinite retry. (integer value)
#default_notification_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problems while sending a
# notification message. (floating point value)
#notification_retry_delay = 0.25
# Time to live for rpc queues without consumers in seconds. (integer value)
#rpc_queue_expiration = 60
# Exchange name for sending RPC messages (string value)
#default_rpc_exchange = ${control_exchange}_rpc
# Exchange name for receiving RPC replies (string value)
#rpc_reply_exchange = ${control_exchange}_rpc_reply
# Maximum number of unacknowledged messages that RabbitMQ can send to the rpc
# listener. (integer value)
#rpc_listener_prefetch_count = 100
# Maximum number of unacknowledged messages that RabbitMQ can send to the rpc
# reply listener. (integer value)
#rpc_reply_listener_prefetch_count = 100
# Reconnecting retry count in case of connectivity problems while sending a
# reply; -1 means infinite retry during rpc_timeout. (integer value)
#rpc_reply_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problems while sending a
# reply. (floating point value)
#rpc_reply_retry_delay = 0.25
# Reconnecting retry count in case of connectivity problems while sending an
# RPC message; -1 means infinite retry. If the actual number of retry attempts
# is not 0, the RPC request could be processed more than one time. (integer
# value)
#default_rpc_retry_attempts = -1
# Reconnecting retry delay in case of connectivity problems while sending an
# RPC message. (floating point value)
#rpc_retry_delay = 0.25
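The interplay of `rabbit_retry_interval`, `rabbit_retry_backoff`, and `rabbit_interval_max` above suggests a linearly growing, capped retry delay. The following is a sketch of that documented behavior under those assumptions, not the exact oslo.messaging implementation:

```python
# Illustrative capped backoff: start at rabbit_retry_interval, grow by
# rabbit_retry_backoff per failed attempt, never exceed rabbit_interval_max.
def retry_delays(interval=1, backoff=2, interval_max=30, attempts=5):
    delays = []
    for _ in range(attempts):
        delays.append(interval)
        interval = min(interval + backoff, interval_max)
    return delays
```

With the defaults shown in this section, the first few delays would be 1, 3, 5, 7, ... seconds, flattening out at 30 seconds.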
[oslo_messaging_zmq]
#
# From oslo.messaging
#
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_address
#rpc_zmq_bind_address = *
# MatchMaker driver. (string value)
# Allowed values: redis, dummy
# Deprecated group/name - [DEFAULT]/rpc_zmq_matchmaker
#rpc_zmq_matchmaker = redis
# Number of ZeroMQ contexts, defaults to 1. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_contexts
#rpc_zmq_contexts = 1
# Maximum number of ingress messages to locally buffer per topic. Default is
# unlimited. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_topic_backlog
#rpc_zmq_topic_backlog = <None>
# Directory for holding IPC sockets. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_ipc_dir
#rpc_zmq_ipc_dir = /var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP address. Must match
# "host" option, if running Nova. (string value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_host
#rpc_zmq_host = localhost
# Seconds to wait before a cast expires (TTL). The default value of -1
# specifies an infinite linger period. The value of 0 specifies no linger
# period. Pending messages shall be discarded immediately when the socket is
# closed. Only supported by impl_zmq. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_cast_timeout
#rpc_cast_timeout = -1
# The default number of seconds that poll should wait. Poll raises timeout
# exception when timeout expired. (integer value)
# Deprecated group/name - [DEFAULT]/rpc_poll_timeout
#rpc_poll_timeout = 1
# Expiration timeout in seconds of a name service record about existing target
# ( < 0 means no timeout). (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_expire
#zmq_target_expire = 300
# Update period in seconds of a name service record about existing target.
# (integer value)
# Deprecated group/name - [DEFAULT]/zmq_target_update
#zmq_target_update = 180
# Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean
# value)
# Deprecated group/name - [DEFAULT]/use_pub_sub
#use_pub_sub = true
# Use ROUTER remote proxy. (boolean value)
# Deprecated group/name - [DEFAULT]/use_router_proxy
#use_router_proxy = true
# Minimum port number for the random ports range. (port value)
# Minimum value: 0
# Maximum value: 65535
# Deprecated group/name - [DEFAULT]/rpc_zmq_min_port
#rpc_zmq_min_port = 49153
# Maximum port number for the random ports range. (integer value)
# Minimum value: 1
# Maximum value: 65536
# Deprecated group/name - [DEFAULT]/rpc_zmq_max_port
#rpc_zmq_max_port = 65536
# Number of retries to find free port number before fail with ZMQBindError.
# (integer value)
# Deprecated group/name - [DEFAULT]/rpc_zmq_bind_port_retries
#rpc_zmq_bind_port_retries = 100
# Default serialization mechanism for serializing/deserializing
# outgoing/incoming messages (string value)
# Allowed values: json, msgpack
# Deprecated group/name - [DEFAULT]/rpc_zmq_serialization
#rpc_zmq_serialization = json
# This option configures round-robin mode in the zmq socket. True means the
# queue is not kept when the server side disconnects. False means the queue
# and messages are kept even if the server is disconnected; when the server
# reappears, all accumulated messages are sent to it. (boolean value)
#zmq_immediate = false
[oslo_policy]
#
# From oslo.policy
#
# The JSON file that defines policies. (string value)
# Deprecated group/name - [DEFAULT]/policy_file
#policy_file = policy.json
# Default rule. Enforced when a requested rule is not found. (string value)
# Deprecated group/name - [DEFAULT]/policy_default_rule
#policy_default_rule = default
# Directories where policy configuration files are stored. They can be relative
# to any directory in the search path defined by the config_dir option, or
# absolute paths. The file defined by policy_file must exist for these
# directories to be searched. Missing or empty directories are ignored. (multi
# valued)
# Deprecated group/name - [DEFAULT]/policy_dirs
#policy_dirs = policy.d
[polling]
#
# From ceilometer
#
# Work-load partitioning group prefix. Use only if you want to run multiple
# polling agents with different config files. For each sub-group of the agent
# pool with the same partitioning_group_prefix a disjoint subset of pollsters
# should be loaded. (string value)
# Deprecated group/name - [central]/partitioning_group_prefix
#partitioning_group_prefix = <None>
[publisher]
#
# From ceilometer
#
# Secret value for signing messages. Set value empty if signing is not required
# to avoid computational overhead. (string value)
# Deprecated group/name - [DEFAULT]/metering_secret
# Deprecated group/name - [publisher_rpc]/metering_secret
# Deprecated group/name - [publisher]/metering_secret
#telemetry_secret = change this for valid signing
[publisher_notifier]
#
# From ceilometer
#
# The topic that ceilometer uses for metering notifications. (string value)
#metering_topic = metering
# The topic that ceilometer uses for event notifications. (string value)
#event_topic = event
# The driver that ceilometer uses for metering notifications. (string value)
# Deprecated group/name - [publisher_notifier]/metering_driver
#telemetry_driver = messagingv2
[rgw_admin_credentials]
#
# From ceilometer
#
# Access key for Radosgw Admin. (string value)
#access_key = <None>
# Secret key for Radosgw Admin. (string value)
#secret_key = <None>
[service_credentials]
#
# From ceilometer
#
# Region name to use for OpenStack service endpoints. (string value)
# Deprecated group/name - [DEFAULT]/os_region_name
#region_name = <None>
# Type of endpoint in Identity service catalog to use for communication with
# OpenStack services. (string value)
# Allowed values: public, internal, admin, auth, publicURL, internalURL, adminURL
# Deprecated group/name - [service_credentials]/os_endpoint_type
#interface = public
# Authentication type to load (string value)
# Deprecated group/name - [service_credentials]/auth_plugin
#auth_type = <None>
# Config Section from which to load plugin specific options (string value)
#auth_section = <None>
# Authentication URL (string value)
#auth_url = <None>
# Domain ID to scope to (string value)
#domain_id = <None>
# Domain name to scope to (string value)
#domain_name = <None>
# Project ID to scope to (string value)
# Deprecated group/name - [service_credentials]/tenant-id
#project_id = <None>
# Project name to scope to (string value)
# Deprecated group/name - [service_credentials]/tenant-name
#project_name = <None>
# Domain ID containing project (string value)
#project_domain_id = <None>
# Domain name containing project (string value)
#project_domain_name = <None>
# Trust ID (string value)
#trust_id = <None>
# Optional domain ID to use with v3 and v2 parameters. It will be used for both
# the user and project domain in v3 and ignored in v2 authentication. (string
# value)
#default_domain_id = <None>
# Optional domain name to use with v3 API and v2 parameters. It will be used
# for both the user and project domain in v3 and ignored in v2 authentication.
# (string value)
#default_domain_name = <None>
# User id (string value)
#user_id = <None>
# Username (string value)
# Deprecated group/name - [service_credentials]/user-name
#username = <None>
# User's domain id (string value)
#user_domain_id = <None>
# User's domain name (string value)
#user_domain_name = <None>
# User's password (string value)
#password = <None>
[service_types]
#
# From ceilometer
#
# Kwapi service type. (string value)
#kwapi = energy
# Glance service type. (string value)
#glance = image
# Neutron service type. (string value)
#neutron = network
# Neutron load balancer version. (string value)
# Allowed values: v1, v2
#neutron_lbaas_version = v2
# Nova service type. (string value)
#nova = compute
# Radosgw service type. (string value)
#radosgw = object-store
# Swift service type. (string value)
#swift = object-store
[storage]
#
# From ceilometer
#
# Maximum number of connection retries during startup. Set to -1 to specify an
# infinite retry count. (integer value)
# Deprecated group/name - [database]/max_retries
#max_retries = 10
# Interval (in seconds) between retries of connection. (integer value)
# Deprecated group/name - [database]/retry_interval
#retry_interval = 10
[vmware]
#
# From ceilometer
#
# IP address of the VMware vSphere host. (string value)
#host_ip =
# Port of the VMware vSphere host. (port value)
# Minimum value: 0
# Maximum value: 65535
#host_port = 443
# Username of VMware vSphere. (string value)
#host_username =
# Password of VMware vSphere. (string value)
#host_password =
# CA bundle file to use in verifying the vCenter server certificate. (string
# value)
#ca_file = <None>
# If true, the vCenter server certificate is not verified. If false, then the
# default CA truststore is used for verification. This option is ignored if
# "ca_file" is set. (boolean value)
#insecure = false
# Number of times a VMware vSphere API may be retried. (integer value)
#api_retry_count = 10
# Sleep time in seconds for polling an ongoing async task. (floating point
# value)
#task_poll_interval = 0.5
# Optional vim service WSDL location e.g http://<server>/vimService.wsdl.
# Optional over-ride to default location for bug work-arounds. (string value)
#wsdl_location = <None>
[xenapi]
#
# From ceilometer
#
# URL for connection to XenServer/Xen Cloud Platform. (string value)
#connection_url = <None>
# Username for connection to XenServer/Xen Cloud Platform. (string value)
#connection_username = root
# Password for connection to XenServer/Xen Cloud Platform. (string value)
#connection_password = <None>
The event_definitions.yaml
file defines how events received from
other OpenStack components should be translated to Telemetry events.
It provides a standard set of events and corresponding traits that may be of interest, and can be modified to add or drop traits that operators find useful.
---
- event_type: 'compute.instance.*'
traits: &instance_traits
tenant_id:
fields: payload.tenant_id
user_id:
fields: payload.user_id
instance_id:
fields: payload.instance_id
host:
fields: publisher_id.`split(., 1, 1)`
service:
fields: publisher_id.`split(., 0, -1)`
memory_mb:
type: int
fields: payload.memory_mb
disk_gb:
type: int
fields: payload.disk_gb
root_gb:
type: int
fields: payload.root_gb
ephemeral_gb:
type: int
fields: payload.ephemeral_gb
vcpus:
type: int
fields: payload.vcpus
instance_type_id:
type: int
fields: payload.instance_type_id
instance_type:
fields: payload.instance_type
state:
fields: payload.state
os_architecture:
fields: payload.image_meta.'org.openstack__1__architecture'
os_version:
fields: payload.image_meta.'org.openstack__1__os_version'
os_distro:
fields: payload.image_meta.'org.openstack__1__os_distro'
launched_at:
type: datetime
fields: payload.launched_at
deleted_at:
type: datetime
fields: payload.deleted_at
- event_type: compute.instance.exists
traits:
<<: *instance_traits
audit_period_beginning:
type: datetime
fields: payload.audit_period_beginning
audit_period_ending:
type: datetime
fields: payload.audit_period_ending
- event_type: ['volume.exists', 'volume.create.*', 'volume.delete.*', 'volume.resize.*', 'volume.attach.*', 'volume.detach.*', 'volume.update.*', 'snapshot.exists', 'snapshot.create.*', 'snapshot.delete.*', 'snapshot.update.*']
traits: &cinder_traits
user_id:
fields: payload.user_id
project_id:
fields: payload.tenant_id
availability_zone:
fields: payload.availability_zone
display_name:
fields: payload.display_name
replication_status:
fields: payload.replication_status
status:
fields: payload.status
created_at:
fields: payload.created_at
- event_type: ['volume.exists', 'volume.create.*', 'volume.delete.*', 'volume.resize.*', 'volume.attach.*', 'volume.detach.*', 'volume.update.*']
traits:
<<: *cinder_traits
resource_id:
fields: payload.volume_id
host:
fields: payload.host
size:
fields: payload.size
type:
fields: payload.volume_type
replication_status:
fields: payload.replication_status
- event_type: ['snapshot.exists', 'snapshot.create.*', 'snapshot.delete.*', 'snapshot.update.*']
traits:
<<: *cinder_traits
resource_id:
fields: payload.snapshot_id
volume_id:
fields: payload.volume_id
- event_type: ['image_volume_cache.*']
traits:
image_id:
fields: payload.image_id
host:
fields: payload.host
- event_type: ['image.create', 'image.update', 'image.upload', 'image.delete']
traits: &glance_crud
project_id:
fields: payload.owner
resource_id:
fields: payload.id
name:
fields: payload.name
status:
fields: payload.status
created_at:
fields: payload.created_at
user_id:
fields: payload.owner
deleted_at:
fields: payload.deleted_at
size:
fields: payload.size
- event_type: image.send
traits: &glance_send
receiver_project:
fields: payload.receiver_tenant_id
receiver_user:
fields: payload.receiver_user_id
user_id:
fields: payload.owner_id
image_id:
fields: payload.image_id
destination_ip:
fields: payload.destination_ip
bytes_sent:
type: int
fields: payload.bytes_sent
- event_type: orchestration.stack.*
traits: &orchestration_crud
project_id:
fields: payload.tenant_id
user_id:
fields: ['_context_trustor_user_id', '_context_user_id']
resource_id:
fields: payload.stack_identity
- event_type: sahara.cluster.*
traits: &sahara_crud
project_id:
fields: payload.project_id
user_id:
fields: _context_user_id
resource_id:
fields: payload.cluster_id
- event_type: sahara.cluster.health
traits: &sahara_health
<<: *sahara_crud
verification_id:
fields: payload.verification_id
health_check_status:
fields: payload.health_check_status
health_check_name:
fields: payload.health_check_name
health_check_description:
fields: payload.health_check_description
created_at:
type: datetime
fields: payload.created_at
updated_at:
type: datetime
fields: payload.updated_at
- event_type: ['identity.user.*', 'identity.project.*', 'identity.group.*', 'identity.role.*', 'identity.OS-TRUST:trust.*',
'identity.region.*', 'identity.service.*', 'identity.endpoint.*', 'identity.policy.*']
traits: &identity_crud
resource_id:
fields: payload.resource_info
initiator_id:
fields: payload.initiator.id
project_id:
fields: payload.initiator.project_id
domain_id:
fields: payload.initiator.domain_id
- event_type: identity.role_assignment.*
traits: &identity_role_assignment
role:
fields: payload.role
group:
fields: payload.group
domain:
fields: payload.domain
user:
fields: payload.user
project:
fields: payload.project
- event_type: identity.authenticate
traits: &identity_authenticate
typeURI:
fields: payload.typeURI
id:
fields: payload.id
action:
fields: payload.action
eventType:
fields: payload.eventType
eventTime:
fields: payload.eventTime
outcome:
fields: payload.outcome
initiator_typeURI:
fields: payload.initiator.typeURI
initiator_id:
fields: payload.initiator.id
initiator_name:
fields: payload.initiator.name
initiator_host_agent:
fields: payload.initiator.host.agent
initiator_host_addr:
fields: payload.initiator.host.address
target_typeURI:
fields: payload.target.typeURI
target_id:
fields: payload.target.id
observer_typeURI:
fields: payload.observer.typeURI
observer_id:
fields: payload.observer.id
- event_type: objectstore.http.request
traits: &objectstore_request
typeURI:
fields: payload.typeURI
id:
fields: payload.id
action:
fields: payload.action
eventType:
fields: payload.eventType
eventTime:
fields: payload.eventTime
outcome:
fields: payload.outcome
initiator_typeURI:
fields: payload.initiator.typeURI
initiator_id:
fields: payload.initiator.id
initiator_project_id:
fields: payload.initiator.project_id
target_typeURI:
fields: payload.target.typeURI
target_id:
fields: payload.target.id
target_action:
fields: payload.target.action
target_metadata_path:
fields: payload.target.metadata.path
target_metadata_version:
fields: payload.target.metadata.version
target_metadata_container:
fields: payload.target.metadata.container
target_metadata_object:
fields: payload.target.metadata.object
observer_id:
fields: payload.observer.id
- event_type: ['network.*', 'subnet.*', 'port.*', 'router.*', 'floatingip.*', 'pool.*', 'vip.*', 'member.*', 'health_monitor.*', 'healthmonitor.*', 'listener.*', 'loadbalancer.*', 'firewall.*', 'firewall_policy.*', 'firewall_rule.*', 'vpnservice.*', 'ipsecpolicy.*', 'ikepolicy.*', 'ipsec_site_connection.*']
traits: &network_traits
user_id:
fields: _context_user_id
project_id:
fields: _context_tenant_id
- event_type: network.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.network.id', 'payload.id']
- event_type: subnet.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.subnet.id', 'payload.id']
- event_type: port.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.port.id', 'payload.id']
- event_type: router.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.router.id', 'payload.id']
- event_type: floatingip.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.floatingip.id', 'payload.id']
- event_type: pool.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.pool.id', 'payload.id']
- event_type: vip.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.vip.id', 'payload.id']
- event_type: member.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.member.id', 'payload.id']
- event_type: health_monitor.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.health_monitor.id', 'payload.id']
- event_type: healthmonitor.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.healthmonitor.id', 'payload.id']
- event_type: listener.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.listener.id', 'payload.id']
- event_type: loadbalancer.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.loadbalancer.id', 'payload.id']
- event_type: firewall.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.firewall.id', 'payload.id']
- event_type: firewall_policy.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.firewall_policy.id', 'payload.id']
- event_type: firewall_rule.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.firewall_rule.id', 'payload.id']
- event_type: vpnservice.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.vpnservice.id', 'payload.id']
- event_type: ipsecpolicy.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.ipsecpolicy.id', 'payload.id']
- event_type: ikepolicy.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.ikepolicy.id', 'payload.id']
- event_type: ipsec_site_connection.*
traits:
<<: *network_traits
resource_id:
fields: ['payload.ipsec_site_connection.id', 'payload.id']
- event_type: '*http.*'
traits: &http_audit
project_id:
fields: payload.initiator.project_id
user_id:
fields: payload.initiator.id
typeURI:
fields: payload.typeURI
eventType:
fields: payload.eventType
action:
fields: payload.action
outcome:
fields: payload.outcome
id:
fields: payload.id
eventTime:
fields: payload.eventTime
requestPath:
fields: payload.requestPath
observer_id:
fields: payload.observer.id
target_id:
fields: payload.target.id
target_typeURI:
fields: payload.target.typeURI
target_name:
fields: payload.target.name
initiator_typeURI:
fields: payload.initiator.typeURI
initiator_id:
fields: payload.initiator.id
initiator_name:
fields: payload.initiator.name
initiator_host_address:
fields: payload.initiator.host.address
- event_type: '*http.response'
traits:
<<: *http_audit
reason_code:
fields: payload.reason.reasonCode
- event_type: ['dns.domain.create', 'dns.domain.update', 'dns.domain.delete']
traits: &dns_domain_traits
status:
fields: payload.status
retry:
fields: payload.retry
description:
fields: payload.description
expire:
fields: payload.expire
email:
fields: payload.email
ttl:
fields: payload.ttl
action:
fields: payload.action
name:
fields: payload.name
resource_id:
fields: payload.id
created_at:
fields: payload.created_at
updated_at:
fields: payload.updated_at
version:
fields: payload.version
parent_domain_id:
fields: parent_domain_id
serial:
fields: payload.serial
- event_type: dns.domain.exists
traits:
<<: *dns_domain_traits
audit_period_beginning:
type: datetime
fields: payload.audit_period_beginning
audit_period_ending:
type: datetime
fields: payload.audit_period_ending
- event_type: trove.*
traits: &trove_base_traits
state:
fields: payload.state_description
instance_type:
fields: payload.instance_type
user_id:
fields: payload.user_id
resource_id:
fields: payload.instance_id
instance_type_id:
fields: payload.instance_type_id
launched_at:
type: datetime
fields: payload.launched_at
instance_name:
fields: payload.instance_name
state:
fields: payload.state
nova_instance_id:
fields: payload.nova_instance_id
service_id:
fields: payload.service_id
created_at:
type: datetime
fields: payload.created_at
region:
fields: payload.region
- event_type: ['trove.instance.create', 'trove.instance.modify_volume', 'trove.instance.modify_flavor', 'trove.instance.delete']
traits: &trove_common_traits
name:
fields: payload.name
availability_zone:
fields: payload.availability_zone
instance_size:
type: int
fields: payload.instance_size
volume_size:
type: int
fields: payload.volume_size
nova_volume_id:
fields: payload.nova_volume_id
- event_type: trove.instance.create
traits:
<<: [*trove_base_traits, *trove_common_traits]
- event_type: trove.instance.modify_volume
traits:
<<: [*trove_base_traits, *trove_common_traits]
old_volume_size:
type: int
fields: payload.old_volume_size
modify_at:
type: datetime
fields: payload.modify_at
- event_type: trove.instance.modify_flavor
traits:
<<: [*trove_base_traits, *trove_common_traits]
old_instance_size:
type: int
fields: payload.old_instance_size
modify_at:
type: datetime
fields: payload.modify_at
- event_type: trove.instance.delete
traits:
<<: [*trove_base_traits, *trove_common_traits]
deleted_at:
type: datetime
fields: payload.deleted_at
- event_type: trove.instance.exists
traits:
<<: *trove_base_traits
display_name:
fields: payload.display_name
audit_period_beginning:
type: datetime
fields: payload.audit_period_beginning
audit_period_ending:
type: datetime
fields: payload.audit_period_ending
- event_type: profiler.*
traits:
project:
fields: payload.project
service:
fields: payload.service
name:
fields: payload.name
base_id:
fields: payload.base_id
trace_id:
fields: payload.trace_id
parent_id:
fields: payload.parent_id
timestamp:
fields: payload.timestamp
host:
fields: payload.info.host
path:
fields: payload.info.request.path
query:
fields: payload.info.request.query
method:
fields: payload.info.request.method
scheme:
fields: payload.info.request.scheme
db.statement:
fields: payload.info.db.statement
db.params:
fields: payload.info.db.params
- event_type: 'magnum.bay.*'
traits: &magnum_bay_crud
id:
fields: payload.id
typeURI:
fields: payload.typeURI
eventType:
fields: payload.eventType
eventTime:
fields: payload.eventTime
action:
fields: payload.action
outcome:
fields: payload.outcome
initiator_id:
fields: payload.initiator.id
initiator_typeURI:
fields: payload.initiator.typeURI
initiator_name:
fields: payload.initiator.name
initiator_host_agent:
fields: payload.initiator.host.agent
initiator_host_address:
fields: payload.initiator.host.address
target_id:
fields: payload.target.id
target_typeURI:
fields: payload.target.typeURI
observer_id:
fields: payload.observer.id
observer_typeURI:
fields: payload.observer.typeURI
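As a sketch of how an operator might extend this file, the following hypothetical entry reuses the shared instance traits and captures one extra trait from resize notifications. The event_type pattern and the new_instance_type payload field are illustrative assumptions, not part of the standard file:

```
# Hypothetical addition for illustration only: the payload field name
# below is assumed, not taken from the standard definitions.
- event_type: 'compute.instance.resize.*'
  traits:
    <<: *instance_traits
    new_instance_type:
      fields: payload.new_instance_type
```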
Pipelines describe a coupling between sources of samples and the
corresponding sinks for transformation and publication of the data. They
are defined in the pipeline.yaml
file.
This file can be modified to adjust polling intervals and the samples generated by the Telemetry module.
---
sources:
- name: meter_source
interval: 600
meters:
- "*"
sinks:
- meter_sink
- name: cpu_source
interval: 600
meters:
- "cpu"
sinks:
- cpu_sink
- cpu_delta_sink
- name: disk_source
interval: 600
meters:
- "disk.read.bytes"
- "disk.read.requests"
- "disk.write.bytes"
- "disk.write.requests"
- "disk.device.read.bytes"
- "disk.device.read.requests"
- "disk.device.write.bytes"
- "disk.device.write.requests"
sinks:
- disk_sink
- name: network_source
interval: 600
meters:
- "network.incoming.bytes"
- "network.incoming.packets"
- "network.outgoing.bytes"
- "network.outgoing.packets"
sinks:
- network_sink
sinks:
- name: meter_sink
transformers:
publishers:
- notifier://
- name: cpu_sink
transformers:
- name: "rate_of_change"
parameters:
target:
name: "cpu_util"
unit: "%"
type: "gauge"
scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
publishers:
- notifier://
- name: cpu_delta_sink
transformers:
- name: "delta"
parameters:
target:
name: "cpu.delta"
growth_only: True
publishers:
- notifier://
- name: disk_sink
transformers:
- name: "rate_of_change"
parameters:
source:
map_from:
name: "(disk\\.device|disk)\\.(read|write)\\.(bytes|requests)"
unit: "(B|request)"
target:
map_to:
name: "\\1.\\2.\\3.rate"
unit: "\\1/s"
type: "gauge"
publishers:
- notifier://
- name: network_sink
transformers:
- name: "rate_of_change"
parameters:
source:
map_from:
name: "network\\.(incoming|outgoing)\\.(bytes|packets)"
unit: "(B|packet)"
target:
map_to:
name: "network.\\1.\\2.rate"
unit: "\\1/s"
type: "gauge"
publishers:
- notifier://
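As a hedged example of adjusting a polling interval, an operator could change only the cpu_source entry so that CPU meters are polled every 60 seconds instead of every 600; everything except the interval value matches the sample above:

```
sources:
    - name: cpu_source
      interval: 60
      meters:
          - "cpu"
      sinks:
          - cpu_sink
          - cpu_delta_sink
```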
Event pipelines describe a coupling between notification event_types
and the corresponding sinks for publication of the event data. They are
defined in the event_pipeline.yaml
file.
This file can be modified to adjust which notifications to capture and where to publish the events.
---
sources:
- name: event_source
events:
- "*"
sinks:
- event_sink
sinks:
- name: event_sink
transformers:
publishers:
- notifier://
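For instance, to capture only Compute instance notifications instead of all events, the source's event list could be narrowed as in this illustrative sketch (the source name is kept from the sample above):

```
sources:
    - name: event_source
      events:
          - "compute.instance.*"
      sinks:
          - event_sink
sinks:
    - name: event_sink
      transformers:
      publishers:
          - notifier://
```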
The policy.json
file defines additional access controls that apply
to the Telemetry service.
{
"context_is_admin": "role:admin",
"segregation": "rule:context_is_admin",
"telemetry:get_samples": "",
"telemetry:get_sample": "",
"telemetry:query_sample": "",
"telemetry:create_samples": "",
"telemetry:compute_statistics": "",
"telemetry:get_meters": "",
"telemetry:get_resource": "",
"telemetry:get_resources": "",
"telemetry:events:index": "",
"telemetry:events:show": ""
}
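An empty rule ("") means the API is open to any authenticated user. As an illustrative tightening, shown here in isolation and not as a recommendation, sample listing could be restricted to administrators by reusing the context_is_admin rule:

```
{
    "context_is_admin": "role:admin",
    "telemetry:get_samples": "rule:context_is_admin"
}
```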
| Option = default value | (Type) Help string |
|---|---|
| [DEFAULT] additional_ingestion_lag = 0 | (IntOpt) The number of seconds to extend the evaluation windows to compensate for the reporting/ingestion lag. |
| [DEFAULT] rest_notifier_ca_bundle_certificate_path = None | (StrOpt) SSL CA_BUNDLE certificate for REST notifier. |
| [api] alarm_max_actions = -1 | (IntOpt) Maximum count of actions for each state of an alarm; a non-positive number means no limit. |
| [api] enable_combination_alarms = False | (BoolOpt) Enable deprecated combination alarms. |
| [api] project_alarm_quota = None | (IntOpt) Maximum number of alarms defined for a project. |
| [api] user_alarm_quota = None | (IntOpt) Maximum number of alarms defined for a user. |
| [evaluator] workers = 1 | (IntOpt) Number of workers for the evaluator service. Default value is 1. |
| [listener] batch_size = 1 | (IntOpt) Number of notification messages to wait for before dispatching them. |
| [listener] batch_timeout = None | (IntOpt) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely). |
| [listener] event_alarm_topic = alarm.all | (StrOpt) The topic that aodh uses for event alarm evaluation. |
| [listener] workers = 1 | (IntOpt) Number of workers for the listener service. Default value is 1. |
| [notifier] batch_size = 1 | (IntOpt) Number of notification messages to wait for before dispatching them. |
| [notifier] batch_timeout = None | (IntOpt) Number of seconds to wait before dispatching samples when batch_size is not reached (None means indefinitely). |
| [notifier] workers = 1 | (IntOpt) Number of workers for the notifier service. Default value is 1. |
| [service_types] zaqar = messaging | (StrOpt) Message queue service type. |
| Deprecated option | New Option |
|---|---|
| [DEFAULT] use_syslog | None |
| Option = default value | (Type) Help string |
|---|---|
| [api] panko_is_enabled = None | (BoolOpt) Set to True to redirect event URLs to Panko. The default is autodetection by querying keystone. |
| [api] panko_url = None | (StrOpt) The Panko endpoint to which event URLs are redirected. The default is autodetection by querying keystone. |
| [coordination] max_retry_interval = 30 | (IntOpt) Maximum number of seconds between retries to join the partitioning group. |
| [coordination] retry_backoff = 1 | (IntOpt) Retry backoff factor when retrying to connect with the coordination backend. |
| [database] sql_expire_samples_only = False | (BoolOpt) Indicates whether the expirer expires only samples. If set to True, expired samples are deleted, but residual resource and meter definition data remain. |
| [dispatcher_http] verify_ssl = None | (StrOpt) The path to a server certificate, or a directory, if the system CAs are not used or if a self-signed certificate is used. Set to False to skip SSL certificate verification. |
| [hardware] readonly_user_auth_proto = None | (StrOpt) SNMPd v3 authentication algorithm of all the nodes running in the cloud. |
| [hardware] readonly_user_priv_password = None | (StrOpt) SNMPd v3 encryption password of all the nodes running in the cloud. |
| [hardware] readonly_user_priv_proto = None | (StrOpt) SNMPd v3 encryption algorithm of all the nodes running in the cloud. |
| Option | Previous default value | New default value |
|---|---|---|
| [DEFAULT] event_dispatchers | ['database'] | [] |
| [DEFAULT] host | localhost | <your_hostname> |
| [notification] batch_size | 1 | 100 |
| [notification] batch_timeout | None | 5 |
| Deprecated option | New Option |
|---|---|
| [DEFAULT] use_syslog | None |
| [hyperv] force_volumeutils_v1 | None |
The Telemetry service collects measurements within OpenStack. Its
various agents and services are configured in the
/etc/ceilometer/ceilometer.conf
file.
To install Telemetry, see the Newton Installation Tutorials and Guides for your distribution.
Note
The common configurations for shared services and libraries, such as database connections and RPC messaging, are described at Common configurations.
Each OpenStack service (Identity, Compute, Networking, and so on) has its
own role-based access policies. They determine which user can access
which objects in which way, and are defined in the service’s
policy.json
file.
Whenever an API call to an OpenStack service is made, the service’s
policy engine uses the appropriate policy definitions to determine if
the call can be accepted. Any changes to policy.json
are effective
immediately, which allows new policies to be implemented while the
service is running.
A policy.json
file is a text file in JSON (JavaScript Object
Notation) format. Each policy is defined by a one-line statement in the
form "<target>" : "<rule>"
.
The policy target, also named “action”, represents an API call like “start an instance” or “attach a volume”.
Action names are usually qualified. Example: OpenStack Compute features
API calls to list instances, volumes and networks. In
/etc/nova/policy.json
, these APIs are represented by
compute:get_all
, volume:get_all
and network:get_all
,
respectively.
The mapping between API calls and actions is not generally documented.
The policy rule determines under which circumstances the API call is permitted. Usually this involves the user who makes the call (hereafter named the “API user”) and often the object on which the API call operates. A typical rule checks if the API user is the object’s owner.
Warning
Modifying the policy
While recipes for editing policy.json
files are found on blogs,
modifying the policy can have unexpected side effects and is not
encouraged.
A simple rule might look like this:
"compute:get_all" : ""
The target is "compute:get_all"
, the “list all instances” API of the
Compute service. The rule is an empty string meaning “always”. This
policy allows anybody to list instances.
You can also decline permission to use an API:
"compute:shelve": "!"
The exclamation mark stands for “never” or “nobody”, which effectively disables the Compute API “shelve an instance”.
Many APIs can only be called by admin users. This can be expressed by
the rule "role:admin"
. The following policy ensures that only
administrators can create new users in the Identity database:
"identity:create_user" : "role:admin"
You can limit APIs to any role. For example, the Orchestration service
defines a role named heat_stack_user
. Whoever has this role isn’t
allowed to create stacks:
"stacks:create": "not role:heat_stack_user"
This rule makes use of the boolean operator not
. More complex rules
can be built using operators and
, or
and parentheses.
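For example, a hypothetical rule combining these operators (the API name and the role names here are illustrative only, not taken from an actual policy file):

```
"compute:resize": "role:admin or (project_id:%(project_id)s and not role:heat_stack_user)"
```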
You can define aliases for rules:
"deny_stack_user": "not role:heat_stack_user"
The policy engine understands that "deny_stack_user"
is not an API
and consequently interprets it as an alias. The stack creation policy
above can then be written as:
"stacks:create": "rule:deny_stack_user"
This is taken verbatim from /etc/heat/policy.json
.
Rules can compare API attributes to object attributes. For example:
"os_compute_api:servers:start" : "project_id:%(project_id)s"
states that only the owner of an instance can start it up. The
project_id
string before the colon is an API attribute, namely the project
ID of the API user. It is compared with the project ID of the object (in
this case, an instance); more precisely, it is compared with the
project_id
field of that object in the database. If the two values are
equal, permission is granted.
An admin user always has permission to call APIs. This is how
/etc/keystone/policy.json
makes this policy explicit:
"admin_required": "role:admin or is_admin:1",
"owner" : "user_id:%(user_id)s",
"admin_or_owner": "rule:admin_required or rule:owner",
"identity:change_password": "rule:admin_or_owner"
The first line defines an alias for “user is an admin user”. The
is_admin
flag is only used when setting up the Identity service for
the first time. It indicates that the user has admin privileges granted
by the service token (--os-token
parameter of the keystone
command line client).
The second line creates an alias for “user owns the object” by comparing the API’s user ID with the object’s user ID.
Line 3 defines a third alias admin_or_owner
, combining the first two
aliases with the Boolean operator or
.
Line 4 sets up the policy that a password can only be modified by its owner or an admin user.
As a final example, let’s examine a more complex rule:
"identity:ec2_delete_credential": "rule:admin_required or
(rule:owner and user_id:%(target.credential.user_id)s)"
This rule determines who can use the Identity API “delete EC2
credential”. Here, boolean operators and parentheses combine three
simpler rules. admin_required
and owner
are the same aliases as
in the previous example. user_id:%(target.credential.user_id)s
compares the API user with the user ID of the credential object
associated with the target.
A policy.json
file consists of policies and aliases of the form
target:rule
or alias:definition
, separated by commas and
enclosed in curly braces:
{
"alias 1" : "definition 1",
"alias 2" : "definition 2",
...
"target 1" : "rule 1",
"target 2" : "rule 2",
....
}
Targets are APIs and are written "service:API"
or simply "API"
.
For example, "compute:create"
or "add_image"
.
Rules determine whether the API call is allowed.
Rules can be:
- always true. The action is always permitted. This can be written as "" (empty string), [], or "@".
- always false. The action is never permitted. Written as "!".
- a special check.
- a comparison of two values.
- Boolean expressions based on simpler rules.
Special checks are:
- <role>:<role name>, a test whether the API credentials contain this role.
- <rule>:<rule name>, the definition of an alias.
- http:<target URL>, which delegates the check to a remote server. The API is authorized when the server returns True.
Developers can define additional special checks.
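For illustration, here is a hypothetical policy.json fragment using each of these rule forms; the target names and the URL below are invented examples, not real OpenStack policies:

```json
{
    "always_allowed": "",
    "never_allowed": "!",
    "admin_required": "role:admin",
    "uses_an_alias": "rule:admin_required",
    "checked_remotely": "http://localhost:8080/policy_check"
}
```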
Two values are compared in the following way:
"value1 : value2"
Possible values are:
- constants: strings, numbers, true, false.
- API attributes.
- target object attributes.
- the flag is_admin.
API attributes can be project_id, user_id or domain_id.
Target object attributes are fields from the object description in the database. For example, in the case of the "compute:start" API, the object is the instance to be started. The policy for starting instances could use the %(project_id)s attribute, that is, the project that owns the instance. The trailing s indicates this is a string.
is_admin indicates that administrative privileges are granted via the admin token mechanism (the --os-token option of the keystone command). The admin token allows initialization of the Identity database before the admin role exists.
The alias construct exists for convenience. An alias is a short name for a complex or hard-to-understand rule. It is defined in the same way as a policy:
alias name : alias definition
Once an alias is defined, use the rule keyword to refer to it in a policy rule.
You may encounter older policy.json files that feature a different syntax, where JavaScript arrays are used instead of Boolean operators. For example, the EC2 credentials rule above would have been written as follows:
"identity:ec2_delete_credential": [ [ "rule:admin_required" ],
[ "rule:owner", "user_id:%(target.credential.user_id)s" ] ]
The rule is an array of arrays. The innermost arrays are or’ed together, whereas elements inside the innermost arrays are and’ed.
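That evaluation order can be modeled in a few lines of Python. This is a simplified model of the old-style semantics, not the actual policy engine; the outcome table for the atomic checks is invented:

```python
# Old-style rule: a list of lists. The outer level is OR'ed together,
# each inner list is AND'ed. A simplified model of the evaluation.

def evaluate_old_style(rule, check):
    """check(atom) -> bool decides a single atomic check string."""
    return any(all(check(atom) for atom in conjunction)
               for conjunction in rule)

rule = [["rule:admin_required"],
        ["rule:owner", "user_id:%(target.credential.user_id)s"]]

# Invented outcomes for the atomic checks, for demonstration: a
# non-admin user deleting their own credential.
outcomes = {"rule:admin_required": False,
            "rule:owner": True,
            "user_id:%(target.credential.user_id)s": True}
allowed = evaluate_old_style(rule, outcomes.get)
```

Here the first conjunction fails (not an admin), but the second succeeds, so the rule as a whole permits the action.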
While the old syntax is still supported, we recommend using the newer, more intuitive syntax.
On some deployments, such as ones where restrictive firewalls are in place, you might need to manually configure a firewall to permit OpenStack service traffic.
To manually configure a firewall, you must permit traffic through the ports that each OpenStack service uses. This table lists the default ports that each OpenStack service uses:
OpenStack service | Default ports | Port type
---|---|---
Application Catalog (murano) | 8082 |
Block Storage (cinder) | 8776 | publicurl and adminurl
Compute (nova) endpoints | 8774 | publicurl and adminurl
Compute API (nova-api) | 8773, 8775 |
Compute ports for access to virtual machine consoles | 5900-5999 |
Compute VNC proxy for browsers (openstack-nova-novncproxy) | 6080 |
Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy) | 6081 |
Proxy port for HTML5 console used by Compute service | 6082 |
Data processing service (sahara) endpoint | 8386 | publicurl and adminurl
Identity service (keystone) administrative endpoint | 35357 | adminurl
Identity service public endpoint | 5000 | publicurl
Image service (glance) API | 9292 | publicurl and adminurl
Image service registry | 9191 |
Networking (neutron) | 9696 | publicurl and adminurl
Object Storage (swift) | 6000, 6001, 6002 |
Orchestration (heat) endpoint | 8004 | publicurl and adminurl
Orchestration AWS CloudFormation-compatible API (openstack-heat-api-cfn) | 8000 |
Orchestration AWS CloudWatch-compatible API (openstack-heat-api-cloudwatch) | 8003 |
Telemetry (ceilometer) | 8777 | publicurl and adminurl
To function properly, some OpenStack components depend on other, non-OpenStack services. For example, the OpenStack dashboard uses HTTP for non-secure communication. In this case, you must configure the firewall to allow traffic to and from HTTP.
This table lists the ports that other OpenStack components use:
Service | Default port | Used by
---|---|---
HTTP | 80 | OpenStack dashboard (Horizon) when it is not configured to use secure access.
HTTP alternate | 8080 | OpenStack Object Storage (swift) service.
HTTPS | 443 | Any OpenStack service that is enabled for SSL, especially secure-access dashboard.
rsync | 873 | OpenStack Object Storage. Required.
iSCSI target | 3260 | OpenStack Block Storage. Required.
MySQL database service | 3306 | Most OpenStack components.
Message Broker (AMQP traffic) | 5672 | OpenStack Block Storage, Networking, Orchestration, and Compute.
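As an illustration, on a host protected by iptables, rules along these lines would permit Compute API (8774) and dashboard HTTP (80) traffic. The exact chain, interface restrictions, and rule-persistence mechanism depend on your distribution and deployment:

```
# iptables -A INPUT -p tcp --dport 8774 -j ACCEPT
# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```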
On some deployments, the default port used by a service may fall within the defined local port range of a host. To check a host’s local port range:
$ sysctl net.ipv4.ip_local_port_range
If a service’s default port falls within this range, run the following command to check whether the port is already assigned to another application:
$ lsof -i :PORT
Configure the service to use a different port if the default port is already being used by another application.
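This check can also be scripted; a minimal sketch follows. The range string below is typical sysctl output on many Linux systems, used here only as an example:

```python
# Decide whether a service's default port could collide with the
# kernel's ephemeral port range reported by
# "sysctl net.ipv4.ip_local_port_range".

def in_local_port_range(port, sysctl_value):
    """sysctl_value is the raw range, e.g. "32768\t60999"."""
    low, high = map(int, sysctl_value.split())
    return low <= port <= high

# Example range; the Identity admin port 35357 falls inside it,
# while the Compute endpoint port 8774 does not.
local_range = "32768\t60999"
```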
The following resources are available to help you run and use OpenStack. The OpenStack community constantly improves and adds to the main features of OpenStack, but if you have any questions, do not hesitate to ask. Use the following resources to get OpenStack support and troubleshoot your installations.
For the available OpenStack documentation, see docs.openstack.org.
To provide feedback on documentation, join and use the openstack-docs@lists.openstack.org mailing list at OpenStack Documentation Mailing List, or report a bug.
The following books explain how to install an OpenStack cloud and its associated components:
The following books explain how to configure and run an OpenStack cloud:
The following books explain how to use the OpenStack dashboard and command-line clients:
The following documentation provides reference and guidance information for the OpenStack APIs:
The following guide provides how to contribute to OpenStack documentation:
During setup or testing of OpenStack, you might have questions about how a specific task is completed, or be in a situation where a feature does not work correctly. Use the ask.openstack.org site to ask questions and get answers. When you visit the https://ask.openstack.org site, scan the recently asked questions to see whether your question has already been answered. If not, ask a new question. Be sure to give a clear, concise summary in the title and provide as much detail as possible in the description. Paste in your command output or stack traces, links to screen shots, and any other information that might be useful.
A great way to get answers and insights is to post your question or problematic scenario to the OpenStack mailing list. You can learn from and help others who might have similar issues. To subscribe or view the archives, go to http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack. If you are interested in the other mailing lists for specific projects or development, refer to Mailing Lists.
The OpenStack wiki contains a broad range of topics but some of the information can be difficult to find or is a few pages deep. Fortunately, the wiki search feature enables you to search by title or content. If you search for specific information, such as about networking or OpenStack Compute, you can find a large amount of relevant material. More is being added all the time, so be sure to check back often. You can find the search box in the upper-right corner of any OpenStack wiki page.
The OpenStack community values your set up and testing efforts and wants your feedback. To log a bug, you must sign up for a Launchpad account at https://launchpad.net/+login. You can view existing bugs and report bugs in the Launchpad Bugs area. Use the search feature to determine whether the bug has already been reported or already been fixed. If it still seems like your bug is unreported, fill out a bug report.
Some tips:
- Give a clear, concise summary.
- Provide as much detail as possible in the description. Paste in your command output or stack traces, links to screen shots, and any other information that might be useful.
- Be sure to include the software and package versions that you are using, especially if you are using a development branch, such as "Kilo release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208.
The following Launchpad Bugs areas are available:
The OpenStack community lives in the #openstack IRC channel on the Freenode network. You can hang out, ask questions, or get immediate feedback for urgent and pressing issues. To install an IRC client or use a browser-based client, go to https://webchat.freenode.net/. You can also use Colloquy (Mac OS X, http://colloquy.info/), mIRC (Windows, http://www.mirc.com/), or XChat (Linux). When you are in the IRC channel and want to share code or command output, the generally accepted method is to use a Paste Bin. The OpenStack project has one at http://paste.openstack.org. Just paste your longer amounts of text or logs in the web form and you get a URL that you can paste into the channel. The OpenStack IRC channel is #openstack on irc.freenode.net. You can find a list of all OpenStack IRC channels at https://wiki.openstack.org/wiki/IRC.
The following Linux distributions provide community-supported packages for OpenStack:
This glossary offers a list of terms and definitions to define a vocabulary for OpenStack-related concepts.
To add to the OpenStack glossary, clone the openstack/openstack-manuals repository and update the source file doc/common/glossary.rst through the OpenStack contribution process.
option.(RADOS)
A collection of components that provides object storage within Ceph. Similar to OpenStack Object Storage.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.