
Red Hat Enterprise Linux OpenStack Platform 6 Administration Guide

Managing a Red Hat Enterprise Linux OpenStack Platform environment

OpenStack Documentation Team
Red Hat Customer Content Services
[email protected]

Legal Notice

Copyright © 2015 Red Hat Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

This Administration Guide provides procedures for the management of a Red Hat Enterprise Linux OpenStack Platform environment. Procedures to manage both user projects and the cloud configuration are provided.

Table of Contents

Chapter 1. Introduction .......... 3
  1.1. OpenStack Dashboard .......... 3
  1.2. Command-line Clients .......... 4
Chapter 2. Projects and Users .......... 6
  2.1. Manage Projects .......... 6
  2.2. Manage Users .......... 10
Chapter 3. Virtual Machine Instances .......... 17
  3.1. Manage Instances .......... 17
  3.2. Manage Instance Security .......... 28
  3.3. Manage Flavors .......... 30
  3.4. Manage Host Aggregates .......... 37
  3.5. Schedule Hosts and Cells .......... 41
  3.6. Evacuate Instances .......... 48
Chapter 4. Images and Storage .......... 52
  4.1. Manage Images .......... 52
  4.2. Manage Volumes .......... 67
  4.3. Manage Containers .......... 84
Chapter 5. Networking .......... 87
  5.1. Manage Network Resources .......... 87
  5.2. Configure IP Addressing .......... 96
  5.3. Bridge the Physical Network .......... 98
Chapter 6. Cloud Resources .......... 100
  6.1. Manage Stacks .......... 100
  6.2. Using the Telemetry Service .......... 103
Chapter 7. Troubleshooting .......... 109
  7.1. Logging .......... 109
  7.2. Support .......... 113
Appendix A. Image Configuration Parameters .......... 114
Appendix B. Revision History .......... 122

CHAPTER 1. INTRODUCTION

Red Hat Enterprise Linux OpenStack Platform (RHEL OpenStack Platform) provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads.

This guide provides cloud-management procedures for the following OpenStack services: Block Storage, Compute, Dashboard, Identity, Image, Object Storage, OpenStack Networking, Orchestration, and Telemetry. Procedures for both administrators and project users (end users) are provided; administrator-only procedures are marked as such.

You can manage the cloud using either the OpenStack dashboard or the command-line clients. Most procedures can be carried out using either method; some of the more advanced procedures can only be executed on the command line. This guide provides procedures for the dashboard where possible.

Note: For the complete suite of documentation for RHEL OpenStack Platform, see http://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/

1.1. OPENSTACK DASHBOARD

The OpenStack dashboard is a web-based graphical user interface for managing OpenStack services. To access the dashboard in a browser, the dashboard service must be installed, and you must know the dashboard host name (or IP address) and login password. The dashboard URL is:

http://HOSTNAME/dashboard/

Figure 1.1. Log In Screen
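As a quick check before opening a browser, you can build the URL from your deployment's host name and probe it; a minimal sketch (the host value is a documentation example, replace it with your dashboard host):

```shell
# Build the dashboard URL from the deployment's host name. The address
# below is a documentation example (RFC 5737 range) -- substitute your own.
DASHBOARD_HOST=192.0.2.10
DASHBOARD_URL="http://${DASHBOARD_HOST}/dashboard/"
echo "${DASHBOARD_URL}"
# With network access to the host, an HTTP probe should return 200:
# curl -s -o /dev/null -w '%{http_code}\n' "${DASHBOARD_URL}"
```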


1.2. COMMAND-LINE CLIENTS

Each RHEL OpenStack Platform component typically has its own management client. For example, the Compute service has the nova client. For a complete listing of client commands and parameters, see the "Command-Line Interface Reference" in http://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/

To use a command-line client, the client must be installed and you must first load the environment variables used for authenticating with the Identity service. You can do this by creating an RC (run control) environment file, and placing it in a secure location to run as needed. Run the file using:

$ source RC_FileName

Example 1.1.

$ source ~/keystonerc_admin


Note: By default, the Packstack utility creates the admin and demo users, and their keystonerc_admin and keystonerc_demo RC files.

1.2.1. Automatically Create an RC File

Using the dashboard, you can automatically generate and download an RC file for the current project user, which enables the use of the OpenStack command-line clients (see Section 1.2, "Command-line Clients"). The file's environment variables map to the project and the current project's user.

1. In the dashboard, select the Project tab, and click Compute > Access & Security.
2. Select the API Access tab, which lists all services that are visible to the project's logged-in user.
3. Click Download OpenStack RC file to generate the file. The file name maps to the current user. For example, if you are an 'admin' user, an admin-openrc.sh file is generated and downloaded through the browser.

1.2.2. Manually Create an RC File

If you create an RC file manually, you must set the following environment variables:

OS_USERNAME=userName
OS_TENANT_NAME=tenantName
OS_PASSWORD=userPassword
OS_AUTH_URL=http://IP:35357/v2.0/
PS1='[\u@\h \W(keystone_userName)]\$ '

Example 1.2. The following example file sets the necessary variables for the admin user:

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=secretPass
export OS_AUTH_URL=http://192.0.2.24:35357/v2.0/
export PS1='[\u@\h \W(keystone_admin)]\$ '
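The file can be written and loaded in one step; a minimal sketch, where the file name, password, and endpoint are placeholders for your own values:

```shell
# Write a minimal RC file and load it into the current shell.
# All values here are placeholders -- substitute your own credentials.
cat > ~/keystonerc_test <<'EOF'
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=secretPass
export OS_AUTH_URL=http://192.0.2.24:35357/v2.0/
EOF
source ~/keystonerc_test
echo "${OS_USERNAME}"   # the variables are now set for the CLI clients
```

Because the file contains a password, keep its permissions restrictive (for example, chmod 600).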


CHAPTER 2. PROJECTS AND USERS

As a cloud administrator, you can manage both projects and users. Projects are organizational units in the cloud to which you can assign users. Projects are also known as tenants or accounts. You can manage projects and users independently from each other. Users can be members of one or more projects.

During cloud setup, the operator defines at least one project, user, and role. The operator links the role to the user and the user to the project. Roles define the actions that users can perform. As a cloud administrator, you can create additional projects and users as needed. Additionally, you can add, update, and delete projects and users, assign users to one or more projects, and change or remove these assignments.

To enable or temporarily disable a project or user, update that project or user. After you create a user account, you must assign the account to a primary project. Optionally, you can assign the account to additional projects. Before you can delete a user account, you must remove the user account from its primary project.

2.1. MANAGE PROJECTS

2.1.1. Create a Project

1. As an admin user in the dashboard, select Identity > Projects.
2. Click Create Project.
3. On the Project Information tab, enter a name and description for the project (the Enabled check box is selected by default).
4. On the Project Members tab, add members to the project from the All Users list.
5. On the Quotas tab, specify resource limits for the project.
6. Click Create Project.
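The dashboard steps above can also be performed with the keystone client; a sketch, assuming an admin RC file has been sourced, and using a hypothetical project name:

```shell
$ keystone tenant-create --name new-project --description "A test project" --enabled true
$ keystone user-role-add --user demo --role _member_ --tenant new-project
```

The second command adds an existing user to the new project by assigning a role (see Section 2.2.4, "Manage Roles" for role management in general).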

2.1.2. Update a Project

You can update a project to change its name or description, enable or temporarily disable it, or update its members.

1. As an admin user in the dashboard, select Identity > Projects.
2. In the project's Actions column, click the arrow, and click Edit Project.
3. In the Edit Project window, update the project's name and description, and enable or temporarily disable the project.


4. On the Project Members tab, add members to the project, or remove them as needed.
5. Click Save.

Note: The Enabled check box is selected by default. To temporarily disable the project, clear the Enabled check box. To enable a disabled project, select the Enabled check box.

2.1.3. Delete a Project

1. As an admin user in the dashboard, select Identity > Projects.
2. Select the project to delete.
3. Click Delete Projects.
4. Click Delete Projects again to confirm.

Note: You cannot undo the delete action.

2.1.4. Update Project Quotas

Quotas are maximum limits that can be set per project, so that the project's resources are not exhausted.

1. As an admin user in the dashboard, select Identity > Projects.
2. In the project's Actions column, click the arrow, and click Modify Quotas.
3. On the Quota tab, modify project quotas as needed.
4. Click Save.

2.1.5. Manage Project Security

Security groups are sets of IP filter rules that can be assigned to project instances, and which define networking access to the instance. Security groups are project specific; project members can edit the default rules for their security group and add new rule sets.


All projects have a default security group that is applied to any instance that has no other defined security group. Unless you change the default values, this security group denies all incoming traffic and allows only outgoing traffic to your instance.

2.1.5.1. Create a Security Group

1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, click Create Security Group.
3. Provide a name and description for the group, and click Create Security Group.

2.1.5.2. Add a Security Group Rule

By default, rules for a new group only provide outgoing access. You must add new rules to provide additional access.

1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, click Manage Rules for the security group.
3. Click Add Rule to add a new rule.
4. Specify the rule values, and click Add.

Table 2.1. Required Rule Fields

Rule
    Rule type. If you specify a rule template (for example, 'SSH'), its fields are automatically filled in:
    TCP: Typically used to exchange data between systems, and for end-user communication.
    UDP: Typically used to exchange data between systems, particularly at the application level.
    ICMP: Typically used by network devices, such as routers, to send error or monitoring messages.

Direction
    Ingress (inbound), or Egress (outbound).

Open Port
    For TCP or UDP rules, the Port or Port Range to open for the rule (single port or range of ports):
    For a range of ports, enter port values in the From Port and To Port fields.
    For a single port, enter the port value in the Port field.

Type
    The type for ICMP rules; must be in the range '-1:255'.

Code
    The code for ICMP rules; must be in the range '-1:255'.

Remote
    The traffic source for this rule:
    CIDR (Classless Inter-Domain Routing): IP address block, which limits access to IPs within the block. Enter the CIDR in the Source field.
    Security Group: Source group that enables any instance in the group to access any other group instance.
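A security group and its rules can also be created from the command line with the nova client; a sketch in which the group name and CIDR are examples:

```shell
$ nova secgroup-create mygroup "SSH access group"
$ nova secgroup-add-rule mygroup tcp 22 22 192.0.2.0/24
$ nova secgroup-list-rules mygroup
```

The secgroup-add-rule arguments are, in order: group, IP protocol, from-port, to-port, and source CIDR.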

2.1.5.3. Delete a Security Group Rule

1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, click Manage Rules for the security group.
3. Select the security group rule, and click Delete Rule.
4. Click Delete Rule again to confirm.

Note: You cannot undo the delete action.

2.1.5.4. Delete a Security Group

1. In the dashboard, select Project > Compute > Access & Security.
2. On the Security Groups tab, select the group, and click Delete Security Groups.
3. Click Delete Security Groups again to confirm.


Note You cannot undo the delete action.

2.2. MANAGE USERS

2.2.1. Create a User

1. As an admin user in the dashboard, select Identity > Users.
2. Click Create User.
3. Enter a user name, email, and preliminary password for the user.
4. Select a project from the Primary Project list.
5. Select a role for the user from the Role list (the default role is _member_).
6. Click Create User.

2.2.2. Enable or Disable a User

You can disable or enable only one user at a time.

1. As an admin user in the dashboard, select Identity > Users.
2. In the Actions column, click the arrow, and select Enable User or Disable User. In the Enabled column, the value then updates to either True or False.
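On the command line, the same toggle is the --enabled flag of keystone user-update; a sketch with a hypothetical user name:

```shell
$ keystone user-update --enabled false demo
$ keystone user-update --enabled true demo
```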

2.2.3. Delete a User

1. As an admin user in the dashboard, select Identity > Users.
2. Select the users to delete.
3. Click Delete Users.
4. Click Delete Users again to confirm.

Note: You cannot undo the delete action.


2.2.4. Manage Roles

2.2.4.1. View Roles

To list the available roles:

$ keystone role-list
+----------------------------------+---------------+
| id                               | name          |
+----------------------------------+---------------+
| 71ccc37d41c8491c975ae72676db687f | Member        |
| 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_      |
| 6ecf391421604da985db2f141e46a7c8 | admin         |
+----------------------------------+---------------+

To get details for a specified role: $ keystone role-get ROLE

Example 2.1.

$ keystone role-get admin +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | id | 6ecf391421604da985db2f141e46a7c8 | | name | admin | +----------+----------------------------------+

2.2.4.2. Create and Assign a Role

Users can be members of multiple projects. To assign users to multiple projects, create a role and assign that role to a user-project pair.

Note: Either the name or the ID can be used to specify users, roles, or projects.

1. Create the new-role role:

$ keystone role-create --name ROLE_NAME

Example 2.2.


$ keystone role-create --name new-role +----------+----------------------------------+ | Property | Value | +----------+----------------------------------+ | id | 61013e7aa4ba4e00a0a1ab4b14bc6b2a | | name | new-role | +----------+----------------------------------+

2. To assign a user to a project, you must assign the role to a user-project pair. To do this, you need the user, role, and project names or IDs.

a. List users:

$ keystone user-list

b. List roles: $ keystone role-list

c. List projects: $ keystone tenant-list

3. Assign a role to a user-project pair:

$ keystone user-role-add --user USER_NAME --role ROLE_NAME --tenant TENANT_NAME

Example 2.3. In this example, you assign the new-role role to the demo-demo user-project pair:

$ keystone user-role-add --user demo --role new-role --tenant demo

4. Verify the role assignment for the user demo:

$ keystone user-role-list --user USER_NAME --tenant TENANT_NAME

Example 2.4.

$ keystone user-role-list --user demo --tenant demo


2.2.4.3. Delete a Role

1. Remove a role from a user-project pair:

$ keystone user-role-remove --user USER_NAME --role ROLE_NAME --tenant TENANT_NAME

2. Verify the role removal: $ keystone user-role-list --user USER_NAME --tenant TENANT_NAME

If the role was removed, the command output omits the removed role.

2.2.5. View Compute Quotas for a Project User

To list the currently set quota values for a project user (tenant user), run:

$ nova quota-show --user USER --tenant TENANT

Example 2.5.

$ nova quota-show --user demoUser --tenant demo +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 20 | | ram | 51200 | | floating_ips | 5 | | fixed_ips | -1 | | metadata_items | 128 | | injected_files | 5 | | injected_file_content_bytes | 10240 | | injected_file_path_bytes | 255 | | key_pairs | 100 | | security_groups | 10 | | security_group_rules | 20 | | server_groups | 10 | | server_group_members | 10 | +-----------------------------+-------+

2.2.6. Update Compute Quotas for a Project User

Procedure 2.1. Update Compute Quotas for a User


To update a particular quota value, run: $ nova quota-update --user USER --QUOTA_NAME QUOTA_VALUE TENANT

Example 2.6.

$ nova quota-update --user demoUser --floating-ips 10 demo $ nova quota-show --user demoUser --tenant demo +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 20 | | ram | 51200 | | floating_ips | 10 | | ... | | +-----------------------------+-------+

Note To view a list of options for the quota-update command, run: $ nova help quota-update

2.2.7. Configure Role Access Control

A user can have different roles in different tenants. A user can also have multiple roles in the same tenant.

The /etc/[SERVICE_CODENAME]/policy.json file controls the tasks that users can perform for a given service. For example:

/etc/nova/policy.json specifies the access policy for the Compute service.
/etc/glance/policy.json specifies the access policy for the Image service.
/etc/keystone/policy.json specifies the access policy for the Identity service.

The default policy.json files for the Compute, Identity, and Image services recognize only the admin role; all operations that do not require the admin role are accessible by any user that has any role in a tenant.

For example, if you wish to restrict users from performing operations in the Compute service, you must create a role in the Identity service, give users that role, and then modify /etc/nova/policy.json so that the role is required for Compute operations.


Example 2.7. The following line in /etc/nova/policy.json specifies that there are no restrictions on which users can create volumes; if the user has any role in a tenant, they can create volumes in that tenant.

"volume:create": [],

Example 2.8. To restrict creation of volumes to users who have the compute-user role in a particular tenant, you would add "role:compute-user" to the Compute policy:

"volume:create": ["role:compute-user"],

Example 2.9. To restrict all Compute service requests to require this role, values in the file might look like the following (not a complete example):

{"admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
 "default": [["rule:admin_or_owner"]],
 "compute:create": ["role:compute-user"],
 "compute:create:attach_network": ["role:compute-user"],
 "compute:create:attach_volume": ["role:compute-user"],
 "compute:get_all": ["role:compute-user"],
 "compute:unlock_override": ["rule:admin_api"],
 "admin_api": [["role:admin"]],
 "compute_extension:accounts": [["rule:admin_api"]],
 "compute_extension:admin_actions": [["rule:admin_api"]],
 "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
 "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
 "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
 "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
 "compute_extension:admin_actions:lock": [["rule:admin_or_owner"]],
 "compute_extension:admin_actions:unlock": [["rule:admin_or_owner"]],
 "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
 "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
 "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
 "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
 "compute_extension:admin_actions:migrate": [["rule:admin_api"]],
 "compute_extension:aggregates": [["rule:admin_api"]],
 "compute_extension:certificates": ["role:compute-user"],
 "compute_extension:cloudpipe": [["rule:admin_api"]],
 "compute_extension:console_output": ["role:compute-user"],
 "compute_extension:consoles": ["role:compute-user"],
 "compute_extension:createserverext": ["role:compute-user"],
 "compute_extension:deferred_delete": ["role:compute-user"],
 "compute_extension:disk_config": ["role:compute-user"],
 "compute_extension:evacuate": [["rule:admin_api"]],
 "compute_extension:extended_server_attributes": [["rule:admin_api"]],
 ...
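After editing a policy file, it is worth confirming that it is still well-formed JSON before restarting the affected service. A minimal sketch (the fragment and path are illustrative; python3 is used here, while RHEL-era systems may invoke python instead):

```shell
# Write an illustrative policy fragment and confirm it parses as JSON.
# The path is an example; real policy files live under /etc/<service>/policy.json.
cat > /tmp/policy-test.json <<'EOF'
{
    "volume:create": ["role:compute-user"]
}
EOF
python3 -c 'import json, sys; json.load(open(sys.argv[1])); print("valid JSON")' /tmp/policy-test.json
```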


CHAPTER 3. VIRTUAL MACHINE INSTANCES

The RHEL OpenStack Platform allows you to easily manage virtual machine instances in the cloud. OpenStack Compute is the central component that creates, schedules, and manages instances, and exposes this functionality to other OpenStack components.

Note: The term 'instance' is used by OpenStack to mean a virtual machine instance.

3.1. MANAGE INSTANCES

3.1.1. Create an Instance

Prerequisites: Ensure that a network, key pair, and a boot source are available:

1. In the dashboard, select Project.
2. Select Network > Networks, and ensure there is a private network to which you can attach the new instance (to create a network, see Section 5.1.1, "Add a Network").
3. Select Compute > Access & Security > Key Pairs, and ensure there is a key pair (to create a key pair, see Section 3.2.1, "Manage Key Pairs").
4. Ensure that you have either an image or a volume that can be used as a boot source:
   To view boot-source images, select the Images tab (to create an image, see Section 4.1.1, "Create an Image").
   To view boot-source volumes, select the Volumes tab (to create a volume, see Section 4.2.1.1, "Create a Volume").

Procedure 3.1. Create an Instance

1. In the dashboard, select Project > Compute > Instances.
2. Click Launch Instance.
3. Fill out instance fields (those marked with '*' are required), and click Launch when finished.

The Launch Instance fields, by tab:

Details tab:
  Availability Zone: Zones are logical groupings of cloud resources in which your instance can be placed. If you are unsure, use the default zone (for more information, see Section 3.4, "Manage Host Aggregates").
  Instance Name: The name must be unique within the project.
  Flavor: The flavor determines what resources the instance is given (for example, memory). For default flavor allocations and information on creating new flavors, see Section 3.3, "Manage Flavors".
  Instance Boot Source: Depending on the item selected, new fields are displayed allowing you to select the source. Image sources must be compatible with OpenStack (see Section 4.1, "Manage Images"). If a volume or volume source is selected, the source must be formatted using an image (see Section 4.2, "Manage Volumes").

Access and Security tab:
  Key Pair: The specified key pair is injected into the instance and is used to remotely access the instance using SSH (if neither direct login information nor a static key pair is provided). Usually one key pair per project is created.
  Security Groups: Security groups contain firewall rules which filter the type and direction of the instance's network traffic (for more information on configuring groups, see Section 2.1.5, "Manage Project Security").

Networking tab:
  Selected Networks: You must select at least one network. Instances are typically assigned to a private network, and then later given a floating IP address to enable external access.

Post-Creation tab:
  Customization Script Source: You can provide either a set of commands or a script file, which will run after the instance is booted (for example, to set the instance host name or a user password). If 'Direct Input' is selected, write your commands in the Script Data field; otherwise, specify your script file. Note: Any script that starts with '#cloud-config' is interpreted as using the cloud-config syntax (for information on the syntax, see http://cloudinit.readthedocs.org/en/latest/topics/examples.html).

Advanced Options tab:
  Disk Partition: By default, the instance is built as a single partition and dynamically resized as needed. However, you can choose to manually configure the partitions yourself.
  Configuration Drive: If selected, OpenStack writes metadata to a read-only configuration drive that is attached to the instance when it boots (instead of to Compute's metadata service). After the instance has booted, you can mount this drive to view its contents (enables you to provide files to the instance).
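The launch fields above map to options of the nova boot command on the command line; a sketch in which every name and ID is a placeholder for your own resources:

```shell
$ nova boot \
    --flavor m1.small \
    --image rhel-guest-image \
    --key-name my-keypair \
    --security-groups default \
    --nic net-id=NETWORK_UUID \
    my-instance
```

List your own resources with nova flavor-list, nova image-list, and neutron net-list before substituting real values.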

3.1.2. Update an Instance (Actions menu)

You can update an instance by selecting Project > Compute > Instances, and selecting an action for that instance in the Actions column. Actions allow you to manipulate the instance in a number of ways:

Create Snapshot: Snapshots preserve the disk state of a running instance. You can create a snapshot to migrate the instance, as well as to preserve backup copies.

Associate/Disassociate Floating IP: You must associate an instance with a floating IP (external) address before it can communicate with external networks, or be reached by external users. Because there are a limited number of external addresses in your external subnets, it is recommended that you disassociate any unused addresses.

Edit Instance: Update the instance's name and associated security groups.

Edit Security Groups: Add and remove security groups to or from this instance using the list of available security groups (for more information on configuring groups, see Section 2.1.5, "Manage Project Security").

Console: View the instance's console in the browser (allows easy access to the instance).

View Log: View the most recent section of the instance's console log. Once opened, you can view the full log by clicking View Full Log.

Pause/Resume Instance: Immediately pause the instance (you are not asked for confirmation); the state of the instance is stored in memory (RAM).

Suspend/Resume Instance: Immediately suspend the instance (you are not asked for confirmation); like hibernation, the state of the instance is kept on disk.

Resize Instance: Bring up the Resize Instance window (see Section 3.1.3, "Resize an Instance").

Soft Reboot: Gracefully stop and restart the instance. A soft reboot attempts to gracefully shut down all processes before restarting the instance.

Hard Reboot: Stop and restart the instance. A hard reboot effectively just shuts down the instance's 'power' and then turns it back on.

Shut Off Instance: Gracefully stop the instance.

Rebuild Instance: Use new image and disk-partition options to rebuild the image (shut down, re-image, and reboot the instance). If encountering operating system issues, this option is easier to try than terminating the instance and starting over.

Terminate Instance: Permanently destroy the instance (you are asked for confirmation).

For example, you can create and allocate an external address by using the 'Associate Floating IP' action.

Procedure 3.2. Update Example - Assign a Floating IP

1. In the dashboard, select Project > Compute > Instances.
2. Select the Associate Floating IP action for the instance.

Note: A floating IP address can only be selected from an already created floating IP pool (see Section 5.2.1, "Create Floating IP Pools").

3. Click '+' and select Allocate IP > Associate.
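The same assignment can be scripted with the nova client; a sketch in which the pool name, instance name, and address are examples:

```shell
$ nova floating-ip-create public
$ nova add-floating-ip my-instance 203.0.113.10
```

Running nova list afterwards shows the floating address next to the instance.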


CHAPTER 3. VIRTUAL MACHINE INSTANCES

Note: If you do not know the name of the instance, just its IP address (and you do not want to flip through the details of all your instances), you can run the following on the command line:

$ nova list --ip IPAddress

Where IPAddress is the IP address you are looking up:

$ nova list --ip 192.0.2.0

3.1.3. Resize an Instance

To resize an instance (memory or CPU count), you must select a new flavor for the instance that has the right capacity. If you are increasing the size, remember to first ensure that the host has enough space.

1. If you are resizing an instance in a distributed deployment, you must ensure communication between hosts. Set up each host with SSH key authentication so that Compute can use SSH to move disks to other hosts (for example, compute nodes can share the same SSH key). For more information about setting up SSH key authentication, see Section 3.1.4, "Configure SSH Tunneling between Nodes".

2. Enable resizing on the original host by setting the following parameter in the /etc/nova/nova.conf file:

[DEFAULT]
allow_resize_to_same_host = True
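As a quick check that the option is set as intended, you can parse the file with Python's configparser; this is a minimal sketch against an inline sample that stands in for /etc/nova/nova.conf (the sample content is a placeholder, not your live configuration):

```python
import configparser

# Inline sample standing in for /etc/nova/nova.conf (placeholder content).
SAMPLE = """
[DEFAULT]
allow_resize_to_same_host = True
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)

# getboolean() accepts True/true/yes/1, so minor formatting differences still pass.
assert conf.getboolean("DEFAULT", "allow_resize_to_same_host") is True
print("resize to same host: enabled")
```

The same check can be pointed at the real file with conf.read("/etc/nova/nova.conf") on a host where you have read access.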

3. In the dashboard, select Project > Compute > Instances.

4. Click the instance's Actions arrow, and select Resize Instance.

5. Select a new flavor in the New Flavor field.

6. If you want to manually partition the instance when it launches (results in a faster build time):

a. Select Advanced Options.

b. In the Disk Partition field, select Manual.

7. Click Resize.

3.1.4. Configure SSH Tunneling between Nodes


Red Hat Enterprise Linux OpenStack Platform 6 Administration Guide

Warning: Red Hat does not recommend any particular libvirt security strategy; SSH-tunneling steps are provided for user reference only. Only users with root access can set up SSH tunneling.

To migrate instances between nodes using SSH tunneling, or to resize an instance in a distributed environment, each node must be set up with SSH key authentication so that the Compute service can use SSH to move disks to other nodes. For example, compute nodes could use the same SSH key to ensure communication.

Note: If the Compute service cannot migrate the instance to a different node, it will attempt to migrate the instance back to its original host. To avoid migration failure in this case, ensure that allow_migrate_to_same_host=True is set in the /etc/nova/nova.conf file.

To share a key pair between compute nodes:

1. As root on both nodes, make nova a login user:

# usermod -s /bin/bash nova

2. On the first compute node, generate a key pair for the nova user:

# su nova
# ssh-keygen
# echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config
# cat /var/lib/nova/.ssh/id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys

The key pair, id_rsa and id_rsa.pub, is generated in /var/lib/nova/.ssh.

3. As root, copy the created key pair to the second compute node:

# scp /var/lib/nova/.ssh/id_rsa root@computeNodeAddress:~/
# scp /var/lib/nova/.ssh/id_rsa.pub root@computeNodeAddress:~/

4. As root on the second compute node, change the copied key pair's ownership back to nova, and then add the key pair into SSH:

# chown nova:nova id_rsa
# chown nova:nova id_rsa.pub
# su nova
# mkdir -p /var/lib/nova/.ssh
# cp id_rsa /var/lib/nova/.ssh/
# cat id_rsa.pub >> /var/lib/nova/.ssh/authorized_keys
# echo 'StrictHostKeyChecking no' >> /var/lib/nova/.ssh/config

5. Ensure that the nova user can now log into each node without using a password:

# su nova
# ssh nova@computeNodeAddress

6. As root on both nodes, restart both libvirt and the Compute services:

# systemctl restart libvirtd.service
# systemctl restart openstack-nova-compute.service

3.1.5. Connect to an Instance

3.1.5.1. Access using the Dashboard Console

The console provides a way to directly access your instance within the dashboard.

1. In the dashboard, select Compute > Instances.

2. Click the instance's More button and select Console.

Figure 3.1. Console Access


3. Log in using the image's user name and password (for example, a CirrOS image uses 'cirros'/'cubswin:)').

Note: Red Hat Enterprise Linux guest images typically do not allow direct console access; you must SSH into the instance (see Section 3.1.5.4, "SSH into an Instance").

3.1.5.2. Directly Connect to a VNC Console

You can directly access an instance's VNC console using a URL returned by the nova get-vnc-console command.

Browser

To obtain a browser URL, use:

$ nova get-vnc-console INSTANCE_ID novnc

Java Client

To obtain a Java-client URL, use:

$ nova get-vnc-console INSTANCE_ID xvpvnc


Note: nova-xvpvncviewer provides a simple example of a Java client. To download the client, use:

# git clone http://github.com/cloudbuilders/nova-xvpvncviewer
# cd nova-xvpvncviewer/viewer
# make

Run the viewer with the instance's Java-client URL:

# java -jar VncViewer.jar URL

This tool is provided only for customer convenience, and is not officially supported by Red Hat.

3.1.5.3. Directly Connect to a Serial Console

You can directly access an instance's serial port using a websocket client. Serial connections are typically used as a debugging tool (for example, instances can be accessed even if the network configuration fails). To obtain a serial URL for a running instance, use:

$ nova get-serial-console INSTANCE_ID

Note: novaconsole provides a simple example of a websocket client. To download the client, use:

# git clone http://github.com/larsks/novaconsole/
# cd novaconsole

Run the client with the instance's serial URL:

# python console-client-poll.py URL

This tool is provided only for customer convenience, and is not officially supported by Red Hat.

However, depending on your installation, the administrator may need to first set up the nova-serialproxy service. The proxy service is a websocket proxy that allows connections to OpenStack Compute serial ports.


Procedure 3.3. Install and Configure nova-serialproxy

1. Install the nova-serialproxy service:

# yum install openstack-nova-serialproxy

2. Update the serial_console section in /etc/nova/nova.conf:

a. Enable the nova-serialproxy service:

$ openstack-config --set /etc/nova/nova.conf serial_console enabled true

b. Specify the string used to generate URLs provided by the nova get-serial-console command:

$ openstack-config --set /etc/nova/nova.conf serial_console base_url ws://PUBLIC_IP:6083/

Where PUBLIC_IP is the public IP address of the host running the nova-serialproxy service.

c. Specify the IP address on which the instance serial console should listen (string):

$ openstack-config --set /etc/nova/nova.conf serial_console listen 0.0.0.0

d. Specify the address to which proxy clients should connect (string):

$ openstack-config --set /etc/nova/nova.conf serial_console proxyclient_address HOST_IP

Where HOST_IP is the IP address of your Compute host. Note that proxyclient_address takes a plain IP address (as in Example 3.1), not a ws:// URL.

Example 3.1. Enabled nova-serialproxy

[serial_console]
enabled=true
base_url=ws://192.0.2.0:6083/
listen=0.0.0.0
proxyclient_address=192.0.2.3

3. Restart Compute services:


# openstack-service restart nova

4. Start the nova-serialproxy service:

# systemctl enable openstack-nova-serialproxy
# systemctl start openstack-nova-serialproxy

5. Restart any running instances, to ensure that they are now listening on the right sockets.

6. Open the firewall for serial-console port connections. Serial ports are set using [serial_console] port_range in /etc/nova/nova.conf; by default, the range is 10000:20000. Update iptables with:

# iptables -I INPUT 1 -p tcp --dport 10000:20000 -j ACCEPT

3.1.5.4. SSH into an Instance

1. Ensure that the instance's security group has an SSH rule (see Section 2.1.5, "Manage Project Security").

2. Ensure the instance has a floating IP address (external address) assigned to it (see Section 3.2.2, "Create, Assign, and Release Floating IP Addresses").

3. Obtain the instance's key-pair certificate. The certificate is downloaded when the key pair is created; if you did not create the key pair yourself, ask your administrator (see Section 3.2.1, "Manage Key Pairs").

4. On your local machine, load the key-pair certificate into SSH. For example:

$ ssh-add ~/.ssh/os-key.pem

5. You can now SSH into the instance with the user supplied by the image. The following example command shows how to SSH into the Red Hat Enterprise Linux guest image with the user 'cloud-user':

$ ssh cloud-user@192.0.2.24

Note: You can also use the certificate directly. For example:

$ ssh -i /myDir/os-key.pem cloud-user@192.0.2.24


3.1.6. View Instance Usage

The following usage statistics are available:

Per Project

To view instance usage per project, select Project > Compute > Overview. A usage summary is immediately displayed for all project instances. You can also view statistics for a specific period of time by specifying the date range and clicking Submit.

Per Hypervisor

If logged in as an administrator, you can also view information for all projects. Click Admin > System and select one of the tabs. For example, the Resource Usage tab offers a way to view reports for a distinct time period. You might also click Hypervisors to view your current vCPU, memory, or disk statistics.

Note: The 'vCPU Usage' value ('x of y') reflects the number of total vCPUs of all virtual machines (x) and the total number of hypervisor cores (y).

3.1.7. Delete an Instance

1. In the dashboard, select Project > Compute > Instances, and select your instance.

2. Click Terminate Instance.

Note: Deleting an instance does not delete its attached volumes; you must do this separately (see Section 4.2.1.4, "Delete a Volume").

3.2. MANAGE INSTANCE SECURITY

You can manage access to an instance by assigning it the correct security group (set of firewall rules) and key pair (enables SSH user access). Further, you can assign a floating IP address to an instance to enable external network access. The sections below outline how to create and manage key pairs and floating IP addresses. For information on managing security groups, see Section 2.1.5, "Manage Project Security".

3.2.1. Manage Key Pairs


Key pairs provide SSH access to the instances. Each time a key pair is generated, its certificate is downloaded to the local machine and can be distributed to users. Typically, one key pair is created for each project (and used for multiple instances). You can also import an existing key pair into OpenStack.

3.2.1.1. Create a Key Pair

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Key Pairs tab, click Create Key Pair.

3. Specify a name in the Key Pair Name field, and click Create Key Pair.

When the key pair is created, a key pair file is automatically downloaded through the browser. Save this file for later connections from external machines. For command-line SSH connections, you can load this file into SSH by executing:

# ssh-add ~/.ssh/OS-Key.pem

3.2.1.2. Import a Key Pair

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Key Pairs tab, click Import Key Pair.

3. Specify a name in the Key Pair Name field, and copy and paste the contents of your public key into the Public Key field.

4. Click Import Key Pair.

3.2.1.3. Delete a Key Pair

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Key Pairs tab, click the key's Delete Key Pair button.

3.2.2. Create, Assign, and Release Floating IP Addresses

By default, an instance is given an internal IP address when it is first created. However, you can enable access through the public network by creating and assigning a floating IP address (external address). You can change an instance's associated IP address regardless of the instance's state.


Projects have a limited range of floating IP addresses that can be used (by default, the limit is 50), so you should release these addresses for reuse when they are no longer needed. Floating IP addresses can only be allocated from an existing floating IP pool (see Section 5.2.1, "Create Floating IP Pools").

Procedure 3.4. Allocate a Floating IP to the Project

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Floating IPs tab, click Allocate IP to Project.

3. Select a network from which to allocate the IP address in the Pool field.

4. Click Allocate IP.

Procedure 3.5. Assign a Floating IP

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Floating IPs tab, click the address's Associate button.

3. Select the address to be assigned in the IP address field.

Note: If no addresses are available, you can click the + button to create a new address.

4. Select the instance to be associated in the Port to be Associated field. An instance can only be associated with one floating IP address.

5. Click Associate.

Procedure 3.6. Release a Floating IP

1. In the dashboard, select Project > Compute > Access & Security.

2. On the Floating IPs tab, click the address's menu arrow (next to the Associate/Disassociate button).

3. Select Release Floating IP.

3.3. MANAGE FLAVORS


Each created instance is given a flavor (resource template), which determines the instance's size and capacity. Flavors can also specify secondary ephemeral storage, swap disk, metadata to restrict usage, or special project access (none of the default flavors have these additional attributes defined).

Table 3.1. Default Flavors

Name        vCPUs   RAM        Root Disk Size
m1.tiny     1       512 MB     1 GB
m1.small    1       2048 MB    20 GB
m1.medium   2       4096 MB    40 GB
m1.large    4       8192 MB    80 GB
m1.xlarge   8       16384 MB   160 GB

The majority of end users will be able to use the default flavors. However, you might need to create and manage specialized flavors. For example, you might:

Change default memory and capacity to suit the underlying hardware needs.

Add metadata to force a specific I/O rate for the instance or to match a host aggregate.

Note: Behavior set using image properties overrides behavior set using flavors (for more information, see Section 4.1, "Manage Images").
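As a quick illustration of how the default flavors in Table 3.1 scale, the following sketch picks the smallest default flavor that satisfies a requested vCPU count and RAM size. The helper function and the selection rule are illustrative only, not part of OpenStack:

```python
# Default flavors from Table 3.1: (name, vCPUs, RAM in MB, root disk in GB).
DEFAULT_FLAVORS = [
    ("m1.tiny",   1,   512,   1),
    ("m1.small",  1,  2048,  20),
    ("m1.medium", 2,  4096,  40),
    ("m1.large",  4,  8192,  80),
    ("m1.xlarge", 8, 16384, 160),
]

def smallest_fitting_flavor(vcpus, ram_mb):
    """Return the first (smallest) default flavor meeting both requirements."""
    for name, f_vcpus, f_ram, _disk in DEFAULT_FLAVORS:
        if f_vcpus >= vcpus and f_ram >= ram_mb:
            return name
    return None  # no default flavor fits; a specialized flavor is needed

print(smallest_fitting_flavor(2, 3000))   # m1.medium
print(smallest_fitting_flavor(16, 1024))  # None: more vCPUs than any default
```

When no default flavor fits, that is the cue to create a specialized flavor as described in the following sections.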

3.3.1. Update Configuration Permissions

By default, only administrators can create flavors or view the complete flavor list (select Admin > System > Flavors). To allow all users to configure flavors, specify the following in the /etc/nova/policy.json file (nova-api server):

"compute_extension:flavormanage": "",
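Because policy.json must remain valid JSON, it is worth checking the file after editing it; a minimal sketch against an inline sample (the sample stands in for the real /etc/nova/policy.json, which contains many more rules):

```python
import json

# Sample fragment containing the rule above; in nova policy files an empty
# string means the action is allowed for any user.
sample = '{"compute_extension:flavormanage": ""}'

policy = json.loads(sample)  # raises ValueError if an edit broke the JSON
assert policy["compute_extension:flavormanage"] == ""
print("policy fragment is valid JSON")
```

On a live system, the same json.load() check against the full file catches a stray trailing comma before nova-api does.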

3.3.2. Create a Flavor

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click Create Flavor, and specify the following fields:


Tab: Flavor Information

Name - Unique name.

ID - Unique ID. The default value, 'auto', generates a UUID4 value, but you can also manually specify an integer or UUID4 value.

VCPUs - Number of virtual CPUs.

RAM (MB) - Memory (in megabytes).

Root Disk (GB) - Ephemeral disk size (in gigabytes); to use the native image size, specify '0'. This disk is not used if 'Instance Boot Source=Boot from Volume'.

Ephemeral Disk (GB) - Secondary ephemeral disk size (in gigabytes).

Swap Disk (MB) - Swap disk size (in megabytes).

Tab: Flavor Access

Selected Projects - Projects which can use the flavor. If no projects are selected, all projects have access ('Public=Yes').

3. Click Create Flavor.

3.3.3. Update General Attributes

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click the flavor's Edit Flavor button.

3. Update the values, and click Save.

3.3.4. Update Flavor Metadata

In addition to editing general attributes, you can add metadata to a flavor ('extra_specs'), which can help fine-tune instance usage. For example, you might want to set the maximum-allowed bandwidth or disk writes.

Pre-defined keys determine hardware support or quotas. Pre-defined keys are limited by the hypervisor you are using (for libvirt, see Table 3.2, "Libvirt Metadata").


Both pre-defined and user-defined keys can determine instance scheduling. For example, you might specify 'SpecialComp=True'; any instance with this flavor can then only run in a host aggregate with the same key-value combination in its metadata (see Section 3.4, “ Manage Host Aggregates” ).
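The scheduling effect described above can be sketched in a few lines; this mimics, in deliberately simplified form (it is not the actual AggregateInstanceExtraSpecsFilter code), how a flavor's extra specs are matched against host-aggregate metadata:

```python
def host_passes(aggregate_metadata, flavor_extra_specs):
    """A host passes only if every flavor extra spec is matched
    by the metadata of the host aggregate it belongs to."""
    return all(
        aggregate_metadata.get(key) == value
        for key, value in flavor_extra_specs.items()
    )

# Aggregate tagged for special workloads, per the example above.
aggregate = {"SpecialComp": "True"}

print(host_passes(aggregate, {"SpecialComp": "True"}))   # True
print(host_passes(aggregate, {"SpecialComp": "False"}))  # False
```

An instance whose flavor carries 'SpecialComp=True' therefore lands only on hosts in an aggregate carrying the same key-value pair.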

3.3.4.1. View Metadata

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click the flavor's Metadata link ('Yes' or 'No'). All current values are listed on the right-hand side under Existing Metadata.

3.3.4.2. Add Metadata

You specify a flavor's metadata using a key/value pair.

1. As an admin user in the dashboard, select Admin > System > Flavors.

2. Click the flavor's Metadata link ('Yes' or 'No'). All current values are listed on the right-hand side under Existing Metadata.

3. Under Available Metadata, click on the Other field, and specify the key you want to add (see Table 3.2, "Libvirt Metadata").

4. Click the + button; you can now view the new key under Existing Metadata.

5. Fill in the key's value in its right-hand field.

Figure 3.2. Flavor Metadata

6. When finished with adding key-value pairs, click Save.

Table 3.2. Libvirt Metadata


hw:action

Action that configures support limits per instance. Valid actions are:

cpu_max_sockets - Maximum supported CPU sockets.
cpu_max_cores - Maximum supported CPU cores.
cpu_max_threads - Maximum supported CPU threads.
cpu_sockets - Preferred number of CPU sockets.
cpu_cores - Preferred number of CPU cores.
cpu_threads - Preferred number of CPU threads.
serial_port_count - Maximum serial ports per instance.

Example: 'hw:cpu_max_sockets=2'

hw:NUMA_def

Definition of NUMA topology for the instance. For flavors whose RAM and vCPU allocations are larger than the size of NUMA nodes in the compute hosts, defining NUMA topology enables hosts to better utilize NUMA and improve performance of the guest OS. NUMA definitions defined through the flavor override image definitions. Valid definitions are:

numa_nodes - Number of NUMA nodes to expose to the instance. Specify '1' to ensure image NUMA settings are overridden.
numa_mempolicy - Memory allocation policy. Valid policies are:
  strict - Mandatory for the instance's RAM allocations to come from the NUMA nodes to which it is bound (default if numa_nodes is specified).
  preferred - The kernel can fall back to using an alternative node. Useful when numa_nodes is set to '1'.
numa_cpus.0 - Mapping of vCPUs N-M to NUMA node 0 (comma-separated list).
numa_cpus.1 - Mapping of vCPUs N-M to NUMA node 1 (comma-separated list).
numa_mem.0 - Mapping N GB of RAM to NUMA node 0.
numa_mem.1 - Mapping N GB of RAM to NUMA node 1.

numa_cpus.N and numa_mem.N are only valid if numa_nodes is set. Additionally, they are only required if the instance's NUMA nodes have an asymmetrical allocation of CPUs and RAM (important for some NFV workloads). Note: If the values of numa_cpus.N or numa_mem.N specify more than is available, an exception is raised.

Example when the instance has 8 vCPUs and 4 GB RAM:

hw:numa_nodes=2
hw:numa_cpus.0=0,1,2,3,4,5
hw:numa_cpus.1=6,7
hw:numa_mem.0=3
hw:numa_mem.1=1

The scheduler looks for a host with 2 NUMA nodes with the ability to run 6 CPUs + 3 GB of RAM on one node, and 2 CPUs + 1 GB of RAM on another node. If a host has a single NUMA node with the capability to run 8 CPUs and 4 GB of RAM, it will not be considered a valid match. The same logic is applied in the scheduler regardless of the numa_mempolicy setting.

hw:watchdog_action

An instance watchdog device can be used to trigger an action if the instance somehow fails (or hangs). Valid actions are:

disabled - The device is not attached (default value).
pause - Pause the instance.
poweroff - Forcefully shut down the instance.
reset - Forcefully reset the instance.
none - Enable the watchdog, but do nothing if the instance fails.

Example: 'hw:watchdog_action=poweroff'


hw_rng:action

A random-number generator device can be added to an instance using its image properties (see hw_rng_model in the "Command-Line Interface Reference" in RHEL OpenStack Platform documentation). If the device has been added, valid actions are:

allowed - If 'True', the device is enabled; if 'False', disabled. By default, the device is disabled.
rate_bytes - Maximum number of bytes the instance's kernel can read from the host to fill its entropy pool every rate_period (integer).
rate_period - Duration of the read period in seconds (integer).

Example: 'hw_rng:allowed=True'

hw_video:ram_max_mb

Maximum permitted RAM to be allowed for video devices (in MB). Example: 'hw_video:ram_max_mb=64'

quota:option

Enforcing limit for the instance. Valid options are:

cpu_period - Time period for enforcing cpu_quota (in microseconds). Within the specified cpu_period, each vCPU cannot consume more than cpu_quota of runtime. The value must be in range [1000, 1000000]; '0' means 'no value'.
cpu_quota - Maximum allowed bandwidth (in microseconds) for the vCPU in each cpu_period. The value must be in range [1000, 18446744073709551]. '0' means 'no value'; a negative value means that the vCPU is not controlled. cpu_quota and cpu_period can be used to ensure that all vCPUs run at the same speed.
cpu_shares - Share of CPU time for the domain. The value only has meaning when weighted against other machine values in the same domain. That is, an instance with a flavor with '200' will get twice as much machine time as an instance with '100'.
disk_read_bytes_sec - Maximum disk reads in bytes per second.
disk_read_iops_sec - Maximum read I/O operations per second.
disk_write_bytes_sec - Maximum disk writes in bytes per second.
disk_write_iops_sec - Maximum write I/O operations per second.
disk_total_bytes_sec - Maximum total throughput limit in bytes per second.
disk_total_iops_sec - Maximum total I/O operations per second.
vif_inbound_average - Desired average of incoming traffic.
vif_inbound_burst - Maximum amount of traffic that can be received at vif_inbound_peak speed.
vif_inbound_peak - Maximum rate at which incoming traffic can be received.
vif_outbound_average - Desired average of outgoing traffic.
vif_outbound_burst - Maximum amount of traffic that can be sent at vif_outbound_peak speed.
vif_outbound_peak - Maximum rate at which outgoing traffic can be sent.

Example: 'quota:vif_inbound_average=10240'
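The asymmetric NUMA example above (8 vCPUs and 4 GB RAM split across 2 nodes) can be sanity-checked before it is applied to a flavor; the validation helper below is illustrative only, not part of OpenStack:

```python
# Extra specs from the hw:NUMA_def example above.
extra_specs = {
    "hw:numa_nodes": "2",
    "hw:numa_cpus.0": "0,1,2,3,4,5",
    "hw:numa_cpus.1": "6,7",
    "hw:numa_mem.0": "3",
    "hw:numa_mem.1": "1",
}

def check_numa_specs(specs, total_vcpus, total_mem_gb):
    """Verify the node mappings partition all vCPUs and sum to the RAM size."""
    nodes = int(specs["hw:numa_nodes"])
    cpus = set()
    mem = 0
    for n in range(nodes):
        cpus.update(int(c) for c in specs[f"hw:numa_cpus.{n}"].split(","))
        mem += int(specs[f"hw:numa_mem.{n}"])
    return cpus == set(range(total_vcpus)) and mem == total_mem_gb

print(check_numa_specs(extra_specs, total_vcpus=8, total_mem_gb=4))  # True
```

A mapping that claims more CPUs or memory than the flavor provides would fail this check, just as the scheduler would raise an exception for it.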

3.4. MANAGE HOST AGGREGATES

A single Compute deployment can be partitioned into logical groups for performance or administrative purposes. OpenStack uses the following terms:

Host aggregates - A host aggregate creates logical units in an OpenStack deployment by grouping together hosts. Aggregates are assigned Compute hosts and associated metadata; a host can be in more than one host aggregate. Only administrators can see or create host aggregates.

An aggregate's metadata is commonly used to provide information for use with the Compute scheduler (for example, limiting specific flavors or images to a subset of hosts). Metadata specified in a host aggregate will limit the use of that host to any instance that has the same metadata specified in its flavor.


Administrators can use host aggregates to handle load balancing, enforce physical isolation (or redundancy), group servers with common attributes, or separate out classes of hardware. When you create an aggregate, a zone name must be specified, and it is this name which is presented to the end user.

Availability zones - An availability zone is the end-user view of a host aggregate. An end user cannot view which hosts make up the zone, nor see the zone's metadata; the user can only see the zone's name. End users can be directed to use specific zones which have been configured with certain capabilities or within certain areas.

3.4.1. Enable Host Aggregate Scheduling

By default, host-aggregate metadata is not used to filter instance usage; you must update the Compute scheduler's configuration to enable metadata usage:

1. Edit the /etc/nova/nova.conf file (you must have either root or nova user permissions).

2. Ensure that the scheduler_default_filters parameter contains:

'AggregateInstanceExtraSpecsFilter' for host aggregate metadata. For example:

scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

'AvailabilityZoneFilter' for availability host specification when launching an instance. For example:

scheduler_default_filters=AvailabilityZoneFilter,RetryFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter

3. Save the configuration file.

3.4.2. View Availability Zones or Host Aggregates

As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section; all zones are in the Availability Zones section.

3.4.3. Add a Host Aggregate


1. As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.

2. Click Create Host Aggregate.

3. Add a name for the aggregate in the Name field, and a name by which the end user should see it in the Availability Zone field.

4. Click Manage Hosts within Aggregate.

5. Select a host for use by clicking its + icon.

6. Click Create Host Aggregate.

3.4.4. Update a Host Aggregate

1. As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.

2. To update the aggregate's:

Name or availability zone:

Click the aggregate's Edit Host Aggregate button.
Update the Name or Availability Zone field, and click Save.

Assigned hosts:

Click the aggregate's arrow icon under Actions.
Click Manage Hosts.
Change a host's assignment by clicking its + or - icon.
When finished, click Save.

Metadata:

Click the aggregate's arrow icon under Actions.
Click the Update Metadata button. All current values are listed on the right-hand side under Existing Metadata.


Under Available Metadata, click on the Other field, and specify the key you want to add. Use predefined keys (see Table 3.3, "Host Aggregate Metadata") or add your own (which will only be valid if exactly the same key is set in an instance's flavor).

Click the + button; you can now view the new key under Existing Metadata.

Note: Remove a key by clicking its - icon.

Click Save.

Table 3.3. Host Aggregate Metadata

cpu_allocation_ratio - Sets allocation ratio of virtual CPU to physical CPU. Depends on the AggregateCoreFilter filter being set for the Compute scheduler.

disk_allocation_ratio - Sets allocation ratio of virtual disk to physical disk. Depends on the AggregateDiskFilter filter being set for the Compute scheduler.

filter_tenant_id - If specified, the aggregate only hosts this tenant (project). Depends on the AggregateMultiTenancyIsolation filter being set for the Compute scheduler.

ram_allocation_ratio - Sets allocation ratio of virtual RAM to physical RAM. Depends on the AggregateRamFilter filter being set for the Compute scheduler.

3.4.5. Delete a Host Aggregate

1. As an admin user in the dashboard, select Admin > System > Host Aggregates. All currently defined aggregates are listed in the Host Aggregates section.

2. Remove all assigned hosts from the aggregate:

1. Click the aggregate's arrow icon under Actions.
2. Click Manage Hosts.
3. Remove all hosts by clicking their - icon.


4. When finished, click Save.

3. Click the aggregate's arrow icon under Actions.

4. Click Delete Host Aggregate in this and the next dialog screen.

3.5. SCHEDULE HOSTS AND CELLS

The Compute scheduling service determines on which cell or host (or host aggregate) an instance will be placed. As an administrator, you can influence where the scheduler will place an instance. For example, you might want to limit scheduling to hosts in a certain group or with the right RAM.

You can configure the following components:

Filters - Determine the initial set of hosts on which an instance might be placed (see Section 3.5.1, "Configure Scheduling Filters").

Weights - When filtering is complete, the resulting set of hosts is prioritized using the weighting system. The highest weight has the highest priority (see Section 3.5.2, "Configure Scheduling Weights").

Scheduler service - There are a number of configuration options in the /etc/nova/nova.conf file (on the scheduler host), which determine how the scheduler executes its tasks, and handles weights and filters. There is both a host and a cell scheduler. For a list of these options, refer to the "Configuration Reference" (RHEL OpenStack Platform Documentation).

In the following diagram, both host 1 and 3 are eligible after filtering. Host 1 has the highest weight and therefore has the highest priority for scheduling.

Figure 3.3. Scheduling Hosts
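The filter-then-weigh flow illustrated in Figure 3.3 can be sketched as follows. This is a deliberate simplification of the idea, not the actual nova scheduler code; the host data and the RAM-based filter and weigher are made-up examples:

```python
# Candidate hosts with free RAM in MB (hypothetical values).
hosts = {"host1": 8192, "host2": 1024, "host3": 4096}

def ram_filter(free_ram, requested_ram):
    """Filtering: keep only hosts that can satisfy the request."""
    return free_ram >= requested_ram

def ram_weigher(free_ram):
    """Weighting: prefer the host with the most free RAM."""
    return free_ram

requested = 2048
eligible = {h: r for h, r in hosts.items() if ram_filter(r, requested)}
winner = max(eligible, key=lambda h: ram_weigher(eligible[h]))
print(sorted(eligible))  # ['host1', 'host3']
print(winner)            # host1
```

As in the diagram, hosts 1 and 3 survive filtering, and host 1 wins because it carries the highest weight.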

3.5.1. Configure Scheduling Filters


You define which filters you would like the scheduler to use in the scheduler_default_filters option (/etc/nova/nova.conf file; you must have either root or nova user permissions). Filters can be added or removed. By default, the following filters are configured to run in the scheduler:

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Some filters use information in parameters passed to the instance in:

The nova boot command (see the "Command-Line Interface Reference" in http://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/).

The instance's flavor (see Section 3.3.4, "Update Flavor Metadata").

The instance's image (see Appendix A, Image Configuration Parameters).

All available filters are listed in the following table.

Table 3.4. Scheduling Filters

Filter - Description

AggregateCoreFilter - Uses the host-aggregate metadata key cpu_allocation_ratio to filter out hosts exceeding the over-commit ratio (virtual CPU to physical CPU allocation ratio); only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the cpu_allocation_ratio value in the /etc/nova/nova.conf file. The default value is '16.0' (16 virtual CPUs can be allocated per physical CPU).

AggregateDiskFilter - Uses the host-aggregate metadata key disk_allocation_ratio to filter out hosts exceeding the over-commit ratio (virtual disk to physical disk allocation ratio); only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the disk_allocation_ratio value in the /etc/nova/nova.conf file. The default value is '1.0' (one virtual disk can be allocated for each physical disk).

AggregateImagePropertiesIsolation - Only passes hosts in host aggregates whose metadata matches the instance's image metadata; only valid if a host aggregate is specified for the instance. For more information, see Section 4.1.1, "Create an Image".

AggregateInstanceExtraSpecsFilter - Metadata in the host aggregate must match the host's flavor metadata. For more information, see Section 3.3.4, "Update Flavor Metadata".

AggregateMultiTenancyIsolation - A host with the specified filter_tenant_id can only contain instances from that tenant (project). Note: The tenant can still place instances on other hosts.

AggregateRamFilter - Uses the host-aggregate metadata key ram_allocation_ratio to filter out hosts exceeding the over-commit ratio (virtual RAM to physical RAM allocation ratio); only valid if a host aggregate is specified for the instance. If this ratio is not set, the filter uses the ram_allocation_ratio value in the /etc/nova/nova.conf file. The default value is '1.5' (1.5 virtual RAM can be allocated for each physical RAM).

AllHostsFilter - Passes all available hosts (however, does not disable other filters).

AvailabilityZoneFilter - Filters using the instance's specified availability zone.

ComputeCapabilitiesFilter - Ensures Compute metadata is read correctly. Anything before the ':' is read as a namespace. For example, 'quota:cpu_period' uses 'quota' as the namespace and 'cpu_period' as the key.

ComputeFilter - Passes only hosts that are operational and enabled.

CoreFilter - Uses the cpu_allocation_ratio in the /etc/nova/nova.conf file to filter out hosts exceeding the over-commit ratio (virtual CPU to physical CPU allocation ratio). The default value is '16.0' (16 virtual CPUs can be allocated per physical CPU).

DifferentHostFilter - Enables an instance to build on a host that is different from one or more specified hosts. Specify 'different' hosts using the nova boot option --different_host.

DiskFilter - Uses disk_allocation_ratio in the /etc/nova/nova.conf file to filter out hosts exceeding the over-commit ratio (virtual disk to physical disk allocation ratio). The default value is '1.0' (one virtual disk can be allocated for each physical disk).

ImagePropertiesFilter - Only passes hosts that match the instance's image properties. For more information, see Section 4.1.1, "Create an Image".

IsolatedHostsFilter - Passes only isolated hosts running isolated images that are specified in the /etc/nova/nova.conf file using isolated_hosts and isolated_images (comma-separated values).

JsonFilter - Recognises and uses an instance's custom JSON filters. Valid operators are: =, <, >, in, <=, >=, not, or, and. Recognised variables are: $free_ram_mb, $free_disk_mb, $total_usable_ram_mb, $vcpus_total, $vcpus_used. The filter is specified as a query hint in the nova boot command. For example:

--hint query='[">=", "$free_disk_mb", 200 * 1024]'

MetricsFilter - Filters out hosts with unavailable metrics.

NUMATopologyFilter - Filters out hosts based on their NUMA topology; if the instance has no topology defined, any host can be used. The filter tries to match the exact NUMA topology of the instance to those of the host (it does not attempt to pack the instance onto the host). The filter also looks at the standard over-subscription limits for each NUMA node, and provides limits to the compute host accordingly.

RamFilter - Uses ram_allocation_ratio in the /etc/nova/nova.conf file to filter out hosts exceeding the over-commit ratio (virtual RAM to physical RAM allocation ratio). The default value is '1.5' (1.5 virtual RAM can be allocated for each physical RAM).

RetryFilter - Filters out hosts that have failed a scheduling attempt; valid if scheduler_max_attempts is greater than zero (by default, scheduler_max_attempts=3).

SameHostFilter - Passes one or more specified hosts; specify hosts for the instance using the --hint same_host option for nova boot.

ServerGroupAffinityFilter - Only passes hosts for a specific server group: Give the server group the affinity policy (nova server-group-create --policy affinity groupName). Build the instance with that group (nova boot option --hint group=UUID).

ServerGroupAntiAffinityFilter - Only passes hosts in a server group that do not already host an instance: Give the server group the anti-affinity policy (nova server-group-create --policy anti-affinity groupName). Build the instance with that group (nova boot option --hint group=UUID).

SimpleCIDRAffinityFilter - Only passes hosts on the IP subnet range specified by the instance's cidr and build_near_host_ip hints. Example:

--hint build_near_host_ip=192.0.2.0 --hint cidr=/24
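To make the over-commit ratios in the table concrete, this sketch computes the ceilings that the CoreFilter and RamFilter defaults imply for a hypothetical host; the 16 cores and 32768 MB of RAM are invented figures, not values from this guide:

```shell
# Ceilings implied by the default over-commit ratios:
#   CoreFilter: allowed vCPUs = physical cores * cpu_allocation_ratio (default 16.0)
#   RamFilter:  allowed RAM   = physical RAM   * ram_allocation_ratio (default 1.5)
PHYS_CORES=16
PHYS_RAM_MB=32768

MAX_VCPUS=$(awk -v c="$PHYS_CORES" 'BEGIN { printf "%d", c * 16.0 }')
MAX_RAM_MB=$(awk -v m="$PHYS_RAM_MB" 'BEGIN { printf "%d", m * 1.5 }')

echo "vCPU ceiling:   $MAX_VCPUS"    # 16 * 16.0 = 256
echo "RAM ceiling MB: $MAX_RAM_MB"   # 32768 * 1.5 = 49152
```

A host already at or above either ceiling is filtered out of the candidate list.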

3.5.2. Configure Scheduling Weights

Both cells and hosts can be weighted for scheduling; the host or cell with the largest weight (after filtering) is selected. All weighers are given a multiplier that is applied after normalising the node's weight. A node's weight is calculated as:

w1_multiplier * norm(w1) + w2_multiplier * norm(w2) + ...

You can configure weight options in the scheduler host's /etc/nova/nova.conf file (you must have either root or nova user permissions).
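A minimal sketch of the formula above, for two hypothetical hosts weighed only on free RAM; the weights are normalised into the 0.0-1.0 range before the multiplier is applied, and all numbers here are invented:

```shell
# norm(w) maps each raw weight into [0, 1] relative to the min/max across hosts.
# With a multiplier of 1.0, the host with the most free RAM scores highest.
HOST1_FREE_RAM=2048
HOST2_FREE_RAM=8192
MULTIPLIER=1.0

SCORE1=$(awk -v w="$HOST1_FREE_RAM" -v min="$HOST1_FREE_RAM" -v max="$HOST2_FREE_RAM" -v m="$MULTIPLIER" \
  'BEGIN { printf "%.1f", m * (w - min) / (max - min) }')
SCORE2=$(awk -v w="$HOST2_FREE_RAM" -v min="$HOST1_FREE_RAM" -v max="$HOST2_FREE_RAM" -v m="$MULTIPLIER" \
  'BEGIN { printf "%.1f", m * (w - min) / (max - min) }')

echo "host1 score: $SCORE1"   # lowest free RAM  -> 0.0
echo "host2 score: $SCORE2"   # highest free RAM -> 1.0
```

With a negative multiplier the ordering inverts, which is how the "stacking" behaviour described in the weight-option tables arises.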

3.5.2.1. Configure Weight Options for Hosts


You can define the host weighers you would like the scheduler to use in the [DEFAULT] scheduler_weight_classes option. Valid weighers are:

nova.scheduler.weights.ram - Weighs the host's available RAM.

nova.scheduler.weights.metrics - Weighs the host's metrics.

nova.scheduler.weights.all_weighers - Uses all host weighers (default).

Table 3.5. Host Weight Options

Weigher - Option - Description

All - [DEFAULT] scheduler_host_subset_size - Defines the subset size from which a host is selected (integer); must be at least 1. A value of 1 selects the first host returned by the weighing functions. Any value less than 1 is ignored and 1 is used instead.

metrics - [metrics] required - Specifies how to handle metrics in [metrics] weight_setting that are unavailable:

True - Metrics are required; if unavailable, an exception is raised. To avoid the exception, use the MetricsFilter filter in the [DEFAULT] scheduler_default_filters option.

False - The unavailable metric is treated as a negative factor in the weighing process; the returned value is set by weight_of_unavailable.

metrics - [metrics] weight_of_unavailable - Used as the weight if any metric in [metrics] weight_setting is unavailable; valid if [metrics] required=False.

metrics - [metrics] weight_multiplier - Multiplier used for weighing metrics. By default, weight_multiplier=1.0, which spreads instances across possible hosts. If this value is negative, the host with lower metrics is prioritized, and instances are stacked in hosts.

metrics - [metrics] weight_setting - Specifies metrics and the ratio with which they are weighed; use a comma-separated list of 'metric=ratio' pairs. Valid metric names are:

cpu.frequency - Current CPU frequency
cpu.user.time - CPU user mode time
cpu.kernel.time - CPU kernel time
cpu.idle.time - CPU idle time
cpu.iowait.time - CPU I/O wait time
cpu.user.percent - CPU user mode percentage
cpu.kernel.percent - CPU kernel percentage
cpu.idle.percent - CPU idle percentage
cpu.iowait.percent - CPU I/O wait percentage
cpu.percent - Generic CPU utilization

Example: weight_setting=cpu.user.time=1.0

ram - [DEFAULT] ram_weight_multiplier - Multiplier for RAM (floating point). By default, ram_weight_multiplier=1.0, which spreads instances across possible hosts. If this value is negative, the host with less RAM is prioritized, and instances are stacked in hosts.
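Pulling the host-weight options together, a nova.conf excerpt using them might look like the following; the values are illustrative, and weight_of_unavailable=-10000.0 is an assumed example value rather than one taken from this guide:

```
# /etc/nova/nova.conf (excerpt) - illustrative host-weight settings
[DEFAULT]
scheduler_weight_classes=nova.scheduler.weights.all_weighers
ram_weight_multiplier=1.0

[metrics]
weight_setting=cpu.user.time=1.0
required=False
weight_of_unavailable=-10000.0
```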

3.5.2.2. Configure Weight Options for Cells

You define which cell weighers you would like the scheduler to use in the [cells] scheduler_weight_classes option (/etc/nova/nova.conf file; you must have either root or nova user permissions). Valid weighers are:

nova.cells.weights.all_weighers - Uses all cell weighers (default).


nova.cells.weights.mute_child - Weighs whether a child cell has not sent capacity or capability updates for some time.

nova.cells.weights.ram_by_instance_type - Weighs the cell's available RAM.

nova.cells.weights.weight_offset - Evaluates a cell's weight offset. Note: A cell's weight offset is specified using --woffset in the nova-manage cell create command.

Table 3.6. Cell Weight Options

Weigher - Option - Description

mute_child - [cells] mute_weight_multiplier - Multiplier for hosts which have been silent for some time (negative floating point). By default, this value is '-10.0'.

mute_child - [cells] mute_weight_value - Weight value given to silent hosts (positive floating point). By default, this value is '1000.0'.

ram_by_instance_type - [cells] ram_weight_multiplier - Multiplier for weighing RAM (floating point). By default, this value is '1.0', which spreads instances across possible cells. If this value is negative, the cell with less RAM is prioritized, and instances are stacked in cells.

weight_offset - [cells] offset_weight_multiplier - Multiplier for weighing cells (floating point). Enables the instance to specify a preferred cell by setting its weight offset to 999999999999999 (the highest weight is prioritized). By default, this value is '1.0'.
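As a consolidated sketch, a [cells] section that simply spells out the defaults from Table 3.6 would read:

```
# /etc/nova/nova.conf (excerpt) - cell-weight defaults written out explicitly
[cells]
scheduler_weight_classes=nova.cells.weights.all_weighers
mute_weight_multiplier=-10.0
mute_weight_value=1000.0
ram_weight_multiplier=1.0
offset_weight_multiplier=1.0
```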

3.6. EVACUATE INSTANCES

If you want to move an instance from a dead or shut-down compute node to a new host server in the same environment (for example, because the server needs to be swapped out), you can evacuate it using nova evacuate.

An evacuation is only useful if the instance disks are on shared storage or if the instance disks are Block Storage volumes; otherwise, the disks will not be accessible from the new compute node.

An instance can only be evacuated from a server if the server is shut down; if the server is not shut down, the evacuate command fails.


Note

If you have a functioning compute node, and you want to:

Make a static copy (not running) of an instance for backup purposes or to copy the instance to a different environment, make a snapshot using nova image-create (see Migrate a Static Instance).

Move an instance in a static state (not running) to a host in the same environment (shared storage not needed), migrate it using nova migrate (see Migrate a Static Instance).

Move an instance in a live state (running) to a host in the same environment, migrate it using nova live-migration (see Migrate a Live (running) Instance).

3.6.1. Evacuate One Instance

Evacuate an instance using:

# nova evacuate [--password pass] [--on-shared-storage] instance_name [target_host]

Where:

--password pass - Admin password to set for the evacuated instance (cannot be used if --on-shared-storage is specified). If a password is not specified, a random password is generated and output when evacuation is complete.

--on-shared-storage - Indicates that all instance files are on shared storage.

instance_name - Name of the instance to be evacuated.

target_host - Host to which the instance is evacuated; if you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using:

# nova host-list | grep compute

For example:

# nova evacuate myDemoInstance Compute2_OnEL7.myDomain

3.6.2. Evacuate All Instances

Evacuate all instances on a specified host using:


# nova host-evacuate [--target target_host] [--on-shared-storage] source_host

Where:

--target target_host - Host to which the instances are evacuated; if you do not specify the host, the Compute scheduler selects one for you. You can find possible hosts using:

# nova host-list | grep compute

--on-shared-storage - Indicates that all instance files are on shared storage.

source_host - Name of the host to be evacuated.

For example:

# nova host-evacuate --target Compute2_OnEL7.localdomain myDemoHost.localdomain

3.6.3. Configure Shared Storage

If you are using shared storage, this procedure exports the instances directory for the Compute service to the two nodes, and ensures the nodes have access. The directory path is set in the state_path and instances_path parameters in the /etc/nova/nova.conf file. This procedure uses the default value, which is /var/lib/nova/instances. Only users with root access can set up shared storage.

1. On the controller host:

a. Ensure the /var/lib/nova/instances directory has read-write access by the Compute service user (this user must be the same across controller and nodes). For example:

drwxr-xr-x.  9 nova nova 4096 Nov  5 20:37 instances

b. Add the following lines to the /etc/exports file; switch out node1_IP and node2_IP for the IP addresses of the two compute nodes:

/var/lib/nova/instances node1_IP(rw,sync,fsid=0,no_root_squash)
/var/lib/nova/instances node2_IP(rw,sync,fsid=0,no_root_squash)

c. Export the /var/lib/nova/instances directory to the compute nodes:

# exportfs -avr


d. Restart the NFS server:

# systemctl restart nfs-server

2. On each compute node:

a. Ensure the /var/lib/nova/instances directory exists locally.

b. Add the following line to the /etc/fstab file:

controllerName:/var/lib/nova/instances /var/lib/nova/instances nfs4 defaults 0 0

c. Mount the controller's instance directory (all devices listed in /etc/fstab):

# mount -a -v

d. Ensure qemu can access the directory's images:

# chmod o+x /var/lib/nova/instances

e. Ensure that the node can see the instances directory with:

# ls -ld /var/lib/nova/instances
drwxr-xr-x. 9 nova nova 4096 Nov  5 20:37 /var/lib/nova/instances

Note

You can also run the following to view all mounted devices:

# df -k


CHAPTER 4. IMAGES AND STORAGE

This chapter discusses the steps you can follow to manage images and storage in RHEL OpenStack Platform.

A virtual machine image is a file that contains a virtual disk which has a bootable operating system installed on it. Virtual machine images are supported in different formats. The following are the formats available on RHEL OpenStack Platform:

RAW - Unstructured disk image format.

QCOW2 - Disk format supported by the QEMU emulator.

ISO - Sector-by-sector copy of the data on a disk, stored in a binary file.

AKI - Indicates an Amazon Kernel Image.

AMI - Indicates an Amazon Machine Image.

ARI - Indicates an Amazon RAMDisk Image.

VDI - Disk format supported by the VirtualBox virtual machine monitor and the QEMU emulator.

VHD - Common disk format used by virtual machine monitors from VMware, VirtualBox, and others.

VMDK - Disk format supported by many common virtual machine monitors.

While we don't normally think of ISO as a virtual machine image format, since ISOs contain bootable filesystems with an installed operating system, you can treat them the same as you treat other virtual machine image files.

To download the official Red Hat Enterprise Linux cloud images, you require a valid Red Hat Enterprise Linux subscription:

Red Hat Enterprise Linux 7 KVM Guest Image

Red Hat Enterprise Linux 6 KVM Guest Image

4.1. MANAGE IMAGES

The OpenStack Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or snapshot a server image, and immediately store it away. Stored images can be used as a template to get new servers up and running quickly and more consistently than installing a server operating system and individually configuring additional services.

4.1.1. Create an Image

This section provides you with the steps to manually create OpenStack-compatible images in .qcow2 format using Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 ISO files.

4.1.1.1. Use a KVM Guest Image with RHEL OpenStack Platform

You can use a ready RHEL KVM guest qcow2 image, available at RHEL 7 KVM Guest Image or RHEL 6.6 KVM Guest Image. These images are configured with cloud-init and must take advantage of ec2-compatible metadata services for provisioning SSH keys in order to function properly.

Note

For the KVM guest images:

The root account in the image is disabled, but sudo access is granted to a special user named cloud-user.

There is no root password set for this image.

The root password is locked in /etc/shadow by placing !! in the second field.

For an OpenStack instance, it is recommended that you generate an SSH keypair from the OpenStack dashboard or command line and use that key combination to perform SSH public-key authentication to the instance. When the instance is launched, this public key is injected into it. You can then authenticate using the private key downloaded while creating the keypair.

If you want to create custom Red Hat Enterprise Linux images, see Section 4.1.1.2.1, "Create a Red Hat Enterprise Linux 7 Image" or Section 4.1.1.2.2, "Create a Red Hat Enterprise Linux 6 Image".
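The keypair workflow might look like the following session sketch; the key name, image name, flavor, and address are placeholders, and for the KVM guest images you connect as cloud-user:

```
$ nova keypair-add mykey > mykey.pem        # generate a keypair; save the private key
$ chmod 600 mykey.pem
$ nova boot --flavor m1.small --image rhel7-cloud --key-name mykey myInstance
$ ssh -i mykey.pem cloud-user@192.0.2.10    # public key was injected at boot by cloud-init
```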

4.1.1.2. Create Custom Red Hat Enterprise Linux Images

Prerequisites:

Linux host machine to create an image. This can be any machine on which you can install and run the Linux packages.

libvirt, virt-manager (run command yum groupinstall @virtualization). This installs all packages necessary for creating a guest operating system.


Libguestfs tools (run command yum install libguestfs-tools-c). This installs a set of tools for accessing and modifying virtual machine images.

A Red Hat Enterprise Linux ISO file (see RHEL 7.0 Binary DVD or RHEL 6.6 Binary DVD).

Text editor, if you want to change the kickstart files.

Note

In the following procedures, all commands with the [root@host]# prompt should be run on your host machine.

4.1.1.2.1. Create a Red Hat Enterprise Linux 7 Image

This section provides you with the steps to manually create an OpenStack-compatible image in .qcow2 format using a Red Hat Enterprise Linux 7 ISO file.

1. Start the installation using virt-install:

[root@host]# qemu-img create -f qcow2 rhel7.qcow2 8G
[root@host]# virt-install --virt-type kvm --name rhel7 --ram 2048 \
  --cdrom /tmp/rhel-server-7.0-x86_64-dvd.iso --disk rhel7.qcow2,format=qcow2 \
  --network=bridge:virbr0 --graphics vnc,listen=0.0.0.0 \
  --noautoconsole --os-type=linux --os-variant=rhel7

This launches an instance and starts the installation process.

Note

If the instance does not launch automatically, run the following command to view the console:

[root@host]# virt-viewer rhel7

2. Set up the virtual machine as follows:

a. At the initial Installer boot menu, choose the Install Red Hat Enterprise Linux 7.0 option.


b. Choose the appropriate Language and Keyboard options.

c. When prompted about which type of devices your installation uses, choose Auto-detected installation media.

d. When prompted about which type of installation destination, choose Local Standard Disks.


For other storage options, choose Automatically configure partitioning.

e. For software selection, choose Minimal Install.

f. For network and hostname, choose eth0 for network and choose a hostname for your device. The default hostname is localhost.localdomain.

g. Choose the root password.


The installation process completes and the Complete! screen appears.

3. After the installation is complete, reboot the instance and log in as the root user.

4. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so it only contains the following values:

TYPE=Ethernet
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no

5. Reboot the machine.

6. Register the machine with the Content Delivery Network:

# subscription-manager register

a. Enter your Customer Portal user name and password when prompted:

Username: user@example.com
Password:

b. Find entitlement pools containing the channel:

# subscription-manager list --available | grep -A8 "Red Hat Enterprise Linux Server"


c. Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server entitlement to the system:

# subscription-manager attach --pool=pool_id

d. Enable the required channel:

# subscription-manager repos --enable=rhel-7-server-rpms

For RHEL OpenStack Platform 6, the required channels are rhel-7-server-openstack-6.0-rpms and rhel-7-server-rh-common-rpms.

Note

For more information, see Software Repository Configuration.

7. Update the system:

# yum -y update

8. Install the cloud-init packages:

# yum install -y cloud-utils-growpart cloud-init

9. Edit the /etc/cloud/cloud.cfg configuration file and under cloud_init_modules add:

- resolv-conf

The resolv-conf option automatically configures the resolv.conf configuration file when an instance boots for the first time. This file contains information related to the instance such as nameservers, domain, and other options.

10. Add the following line to /etc/sysconfig/network to avoid problems accessing the EC2 metadata service:

NOZEROCONF=yes

11. To ensure the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/default/grub file:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"


Run the following command:

# grub2-mkconfig -o /boot/grub2/grub.cfg

The output is as follows:

Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-229.7.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-229.7.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-121.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-121.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-b82a3044fb384a3f9aeacf883474428b
Found initrd image: /boot/initramfs-0-rescue-b82a3044fb384a3f9aeacf883474428b.img
done

12. Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it:

# subscription-manager repos --disable=*
# subscription-manager unregister
# yum clean all

13. Power off the instance:

# poweroff

14. Reset and clean the image using the virt-sysprep command so it can be used to create instances without issues:

[root@host]# virt-sysprep -d rhel7

15. Reduce the size of the image using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host:

[root@host]# virt-sparsify --compress rhel7.qcow2 rhel7-cloud.qcow2

This creates a new rhel7-cloud.qcow2 file in the location from where the command is run.

The rhel7-cloud.qcow2 image file is ready to be uploaded to the Image service. For more information on uploading this image to your OpenStack deployment using the dashboard, see Section 4.1.2, "Upload an Image".

4.1.1.2.2. Create a Red Hat Enterprise Linux 6 Image


This section provides you with the steps to manually create an OpenStack-compatible image in .qcow2 format using a Red Hat Enterprise Linux 6 ISO file.

1. Start the installation using virt-install:

[root@host]# qemu-img create -f qcow2 rhel6.qcow2 4G
[root@host]# virt-install --connect=qemu:///system --network=bridge:virbr0 \
  --name=rhel6.6 --os-type linux --os-variant rhel6 \
  --disk path=rhel6.qcow2,format=qcow2,size=10,cache=none \
  --ram 4096 --vcpus=2 --check-cpu --accelerate \
  --hvm --cdrom=rhel-server-6.6-x86_64-dvd.iso

This launches an instance and starts the installation process.

Note

If the instance does not launch automatically, run the following command to view the console:

[root@host]# virt-viewer rhel6

2. Set up the virtual machine as follows:

a. At the initial Installer boot menu, choose the Install or upgrade an existing system option.

Step through the installation prompts. Accept the defaults.


The installation checks the disc and performs a Media Check. When the check is a Success, it ejects the disc.

b. Choose the appropriate Language and Keyboard options.

c. When prompted about which type of devices your installation uses, choose Basic Storage Devices.

d. Choose a hostname for your device. The default hostname is localhost.localdomain.

e. Set the timezone and root password.

f. Based on the space on the disk, choose the type of installation.


g. Choose the Basic Server install, which installs an SSH server.

The installation process completes and the Congratulations, your Red Hat Enterprise Linux installation is complete screen appears.

3. Reboot the instance and log in as the root user.

4. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so it only contains the following values:

TYPE=Ethernet
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no

5. Reboot the machine.

6. Register the machine with the Content Delivery Network:

# subscription-manager register

a. Enter your Customer Portal user name and password when prompted:

Username: user@example.com
Password:

b. Find entitlement pools containing the channel:

# subscription-manager list --available | grep -A8 "Red Hat Enterprise Linux Server"

c. Use the pool identifiers located in the previous step to attach the Red Hat Enterprise Linux Server entitlement to the system:

# subscription-manager attach --pool=pool_id

d. Enable the required channel:

# subscription-manager repos --enable=rhel-6-server-rpms

For RHEL OpenStack Platform 6, the required channels are rhel-7-server-openstack-6.0-rpms and rhel-6-server-rh-common-rpms.

Note

For more information, see Software Repository Configuration.

7. Update the system:

# yum -y update

8. Install the cloud-init packages:

# yum install -y cloud-utils-growpart cloud-init

9. Edit the /etc/cloud/cloud.cfg configuration file and under cloud_init_modules add:

- resolv-conf

The resolv-conf option automatically configures the resolv.conf configuration file when an instance boots for the first time. This file contains information related to the instance such as nameservers, domain, and other options.

10. To prevent network issues, create the /etc/udev/rules.d/75-persistent-net-generator.rules file:

# echo "#" > /etc/udev/rules.d/75-persistent-net-generator.rules

This prevents the /etc/udev/rules.d/70-persistent-net.rules file from being created. If /etc/udev/rules.d/70-persistent-net.rules is created, networking may not function properly when booting from snapshots (the network interface is created as "eth1" rather than "eth0" and the IP address is not assigned).

11. Add the following line to /etc/sysconfig/network to avoid problems accessing the EC2 metadata service:

NOZEROCONF=yes

12. To ensure the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to /etc/grub.conf:

console=tty0 console=ttyS0,115200n8

13. Un-register the virtual machine so that the resulting image does not contain the same subscription details for every instance cloned based on it:

# subscription-manager repos --disable=*
# subscription-manager unregister
# yum clean all

14. Power off the instance:

# poweroff

15. Reset and clean the image using the virt-sysprep command so it can be used to create instances without issues:


[root@host]# virt-sysprep -d rhel6.6

16. Reduce image size using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host:

[root@host]# virt-sparsify --compress rhel6.qcow2 rhel6-cloud.qcow2

This creates a new rhel6-cloud.qcow2 file in the location from where the command is run.

Note

You will need to manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance.

The rhel6-cloud.qcow2 image file is ready to be uploaded to the Image service. For more information on uploading this image to your OpenStack deployment using the dashboard, see Section 4.1.2, "Upload an Image".
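The console option from step 12 can also be added with a script rather than by hand. The following is a minimal sketch, exercised here against a scratch copy of grub.conf containing a hypothetical kernel line, not the real file:

```shell
# Build a scratch copy of a grub.conf with a hypothetical kernel entry.
sample=$(mktemp)
cat > "$sample" <<'EOF'
title Red Hat Enterprise Linux 6
    root (hd0,0)
    kernel /vmlinuz-2.6.32 ro root=/dev/vda1 rhgb quiet
    initrd /initramfs-2.6.32.img
EOF

# Append the serial console options to every kernel line.
sed -i 's|^\([[:space:]]*kernel .*\)$|\1 console=tty0 console=ttyS0,115200n8|' "$sample"
grep 'console=' "$sample"
```

On a real guest you would run the same sed expression against /etc/grub.conf itself.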

4.1.2. Upload an Image

1. In the dashboard, select Project > Compute > Images.
2. Click Create Image.
3. Fill out the values, and click Create Image when finished.

Name
  Name for the image. The name must be unique within the project.

Description
  Brief description to identify the image.

Image Source
  Image source: Image Location or Image File. Based on your selection, the next field is displayed.

Image Location or Image File
  Select the Image Location option to specify the image location URL. Select the Image File option to upload an image from the local disk.

Red Hat Enterprise Linux OpenStack Platform 6 Administration Guide

Format
  Image format (for example, qcow2).

Architecture
  Image architecture. For example, use i686 for a 32-bit architecture or x86_64 for a 64-bit architecture.

Minimum Disk (GB)
  Minimum disk size required to boot the image. If this field is not specified, the default value is 0 (no minimum).

Minimum RAM (MB)
  Minimum memory size required to boot the image. If this field is not specified, the default value is 0 (no minimum).

Public
  If selected, makes the image public to all users with access to the project.

Protected
  If selected, ensures only users with specific permissions can delete this image.

Note: You can also use the glance image-create command with the --property option to create an image. More values are available on the command line. For a complete listing, see Appendix A, Image Configuration Parameters.
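As a sketch of the command-line route (the image name, file name, and property below are hypothetical; the command only executes where the glance client is installed):

```shell
# Hypothetical image name, file, and property; adjust to your deployment.
create_cmd="glance image-create --name rhel6-cloud --disk-format qcow2 --container-format bare --file rhel6-cloud.qcow2 --property os_distro=rhel"

if command -v glance >/dev/null 2>&1; then
    $create_cmd
else
    echo "glance client not installed; command shown for reference:"
    echo "$create_cmd"
fi
```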

4.1.3. Update an Image

1. In the dashboard, select Project > Compute > Images.
2. Click Edit.

Note: The Edit option is available only when you log in as an admin user. When you log in as a demo user, you have the option to Launch an instance or Create Volume.

3. Update the fields and click Update Image when finished. You can update the following values: name, description, kernel ID, ramdisk ID, architecture, format, minimum disk, minimum RAM, public, protected.
4. Click the dropdown menu and select the Update Metadata option.
5. Specify metadata by adding items from the left column to the right one. The left column lists metadata definitions from the Image Service Metadata Catalog. Select Other to add metadata with a key of your choice, and click Save when finished.


Note: You can also use the glance image-update command with the --property option to update an image. More values are available on the command line; for a complete listing, see Appendix A, Image Configuration Parameters.

4.1.4. Delete an Image

1. In the dashboard, select Project > Compute > Images.
2. Select the image you want to delete and click Delete Images.

4.2. MANAGE VOLUMES

A volume is a block storage device that provides persistent storage to OpenStack instances.

4.2.1. Basic Volume Usage and Configuration

The following procedures describe how to perform basic end-user volume management. These procedures do not require administrative privileges.

4.2.1.1. Create a Volume

1. In the dashboard, select Project > Compute > Volumes.
2. Click Create Volume, and edit the following fields:

Volume name
  Name of the volume.

Description
  Optional, short description of the volume.

Type
  Optional volume type (see Section 4.2.4, "Group Volume Settings with Volume Types"). If you have multiple Block Storage back ends, you can use this to select a specific back end. See Section 4.2.1.2, "Specify Back End for Volume Creation" for details.

Size (GB)
  Volume size (in gigabytes).


Availability Zone
  Availability zones (logical server groups), along with host aggregates, are a common method for segregating resources within OpenStack. Availability zones are defined during installation. For more information on availability zones and host aggregates, see Section 3.4, "Manage Host Aggregates".

3. Specify a Volume Source:

No source, empty volume
  The volume will be empty, and will not contain a file system or partition table.

Snapshot
  Use an existing snapshot as a volume source. If you select this option, a new Use snapshot as a source list appears; you can then choose a snapshot from the list. For more information about volume snapshots, refer to Section 4.2.1.8, "Create, Clone, or Delete Volume Snapshots".

Image
  Use an existing image as a volume source. If you select this option, a new Use image as a source list appears; you can then choose an image from the list.

Volume
  Use an existing volume as a volume source. If you select this option, a new Use volume as a source list appears; you can then choose a volume from the list.

4. Click Create Volume. After the volume is created, its name appears in the Volumes table.

4.2.1.2. Specify Back End for Volume Creation

You can configure the Block Storage service to use multiple back ends. For example, Configure OpenStack to Use an NFS Back End provides step-by-step instructions on how to configure the Block Storage service to use an NFS share alongside the default back end.

Whenever multiple Block Storage back ends are configured, you will also need to create a volume type for each back end. You can then use the type to specify which back end should be used for a created volume. For more information about volume types, see Section 4.2.4, "Group Volume Settings with Volume Types".

To specify a back end when creating a volume, select its corresponding volume type from the Type drop-down list (see Section 4.2.1.1, "Create a Volume").


If you do not specify a back end during volume creation, the Block Storage service will automatically choose one for you. By default, the service will choose the back end with the most available free space. You can also configure the Block Storage service to choose randomly among all available back ends instead; for more information, see Section 4.2.7, "Configure How Volumes are Allocated to Multiple Back Ends".
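For reference, a multi-back-end setup in /etc/cinder/cinder.conf looks roughly like the following sketch. The section names, back-end names, and NFS values are placeholders; the option names (enabled_backends, volume_driver, volume_backend_name) follow the Block Storage multi-back-end convention:

```ini
[DEFAULT]
enabled_backends=lvm,nfs

[lvm]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

[nfs]
volume_driver=cinder.volume.drivers.nfs.NfsDriver
volume_backend_name=NFS_SHARE
nfs_shares_config=/etc/cinder/nfs_shares
```

Each volume_backend_name can then be referenced from a volume type's extra specs so that the type maps to that back end.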

4.2.1.3. Edit a Volume's Name or Description

1. In the dashboard, select Project > Compute > Volumes.
2. Select the volume's Edit Volume button.
3. Edit the volume name or description as required.
4. Click Edit Volume to save your changes.

Note: To create an encrypted volume, you must first have a volume type configured specifically for volume encryption. In addition, both Compute and Block Storage services must be configured to use the same static key. For information on how to set up the requirements for volume encryption, refer to Section 4.2.6, "Encrypt Volumes with Static Keys".

4.2.1.4. Delete a Volume

1. In the dashboard, select Project > Compute > Volumes.
2. In the Volumes table, select the volume to delete.
3. Click Delete Volumes.

Note: A volume cannot be deleted if it has existing snapshots. For instructions on how to delete snapshots, see Section 4.2.1.8, "Create, Clone, or Delete Volume Snapshots".

4.2.1.5. Attach and Detach a Volume to an Instance

Instances can use a volume for persistent storage. A volume can only be attached to one instance at a time. For more information on instances, see Section 3.1, "Manage Instances".

Procedure 4.1. Attach a Volume to an Instance

1. In the dashboard, select Project > Compute > Volumes.


2. Select the volume's Edit Attachments action. If the volume is not attached to an instance, the Attach To Instance drop-down list is visible.
3. From the Attach To Instance list, select the instance to which you wish to attach the volume.
4. Click Attach Volume.

Procedure 4.2. Detach a Volume From an Instance

1. In the dashboard, select Project > Compute > Volumes.
2. Select the volume's Edit Attachments action. If the volume is attached to an instance, the instance's name is displayed in the Attachments table.
3. Click Detach Volume in this and the next dialog screen.

4.2.1.6. Set a Volume to Read-Only

You can give multiple users shared access to a single volume without allowing them to edit its contents. To do so, set the volume to read-only using the following command:

# cinder readonly-mode-update VOLUME true

Replace VOLUME with the ID of the target volume. To set a read-only volume back to read-write, run:

# cinder readonly-mode-update VOLUME false

4.2.1.7. Change a Volume's Owner

To change a volume's owner, you will have to perform a volume transfer. A volume transfer is initiated by the volume's owner, and the volume's change in ownership is complete after the transfer is accepted by the volume's new owner.

1. From the command line, log in as the volume's current owner.
2. List the available volumes:

# cinder list

3. Initiate the volume transfer:

# cinder transfer-create VOLUME


Where VOLUME is the name or ID of the volume you wish to transfer.

Example 4.1.

# cinder transfer-create samplevolume
+------------+--------------------------------------+
|  Property  |                Value                 |
+------------+--------------------------------------+
|  auth_key  |           f03bf51ce7ead189           |
| created_at |      2014-12-08T03:46:31.884066      |
|     id     | 3f5dc551-c675-4205-a13a-d30f88527490 |
|    name    |                 None                 |
| volume_id  | bcf7d015-4843-464c-880d-7376851ca728 |
+------------+--------------------------------------+

The cinder transfer-create command clears the ownership of the volume and creates an id and auth_key for the transfer. These values can be given to, and used by, another user to accept the transfer and become the new owner of the volume.

4. The new user can now claim ownership of the volume. To do so, the user should first log in from the command line and run:

# cinder transfer-accept TRANSFERID TRANSFERKEY

Where TRANSFERID and TRANSFERKEY are the id and auth_key values returned by the cinder transfer-create command, respectively.

Example 4.2.

# cinder transfer-accept 3f5dc551-c675-4205-a13a-d30f88527490 f03bf51ce7ead189

Note: You can view all available volume transfers using:

# cinder transfer-list

4.2.1.8. Create, Clone, or Delete Volume Snapshots

You can preserve a volume's state at a specific point in time by creating a volume snapshot. You can then use the snapshot to clone new volumes.


Warning: Creating a snapshot of a volume that is attached to an instance may corrupt the snapshot. For instructions on how to detach a volume from an instance, see Procedure 4.2, "Detach a Volume From an Instance".

Note: Volume backups are different from snapshots. Backups preserve the data contained in the volume, whereas snapshots preserve the state of a volume at a specific point in time. In addition, you cannot delete a volume if it has existing snapshots. Volume backups are used to prevent data loss, whereas snapshots are used to facilitate cloning. For more information about volume backups, refer to Section 4.2.2, "Back Up and Restore a Volume".

To create a volume snapshot:

1. In the dashboard, select Project > Compute > Volumes.
2. Select the target volume's Create Snapshot action.
3. Provide a Snapshot Name for the snapshot and click Create a Volume Snapshot. The Volume Snapshots tab displays all snapshots.

You can clone new volumes from a snapshot once it appears in the Volume Snapshots table. To do so, select the snapshot's Create Volume action. For more information about volume creation, see Section 4.2.1.1, "Create a Volume". To delete a snapshot, select its Delete Volume Snapshot action.

If your OpenStack deployment uses a Red Hat Ceph back end, see Section 4.2.1.8.1, "Protected and Unprotected Snapshots in a Red Hat Ceph Back End" for more information on snapshot security and troubleshooting.

4.2.1.8.1. Protected and Unprotected Snapshots in a Red Hat Ceph Back End

When using Red Hat Ceph as a back end for your OpenStack deployment, you can set a snapshot to protected in the back end. Attempting to delete protected snapshots through OpenStack (as in, through the dashboard or the cinder snapshot-delete command) will fail. When this occurs, set the snapshot to unprotected in the Red Hat Ceph back end first. Afterwards, you should be able to delete the snapshot through OpenStack as normal.


For related instructions, see Protecting a Snapshot and Unprotecting a Snapshot.
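On the Ceph side these are rbd operations. A hedged sketch (the pool, volume, and snapshot names are placeholders, and the command only executes where the rbd client is present):

```shell
# Placeholder names; substitute your pool and the actual volume/snapshot IDs.
unprotect_cmd="rbd snap unprotect volumes/volume-VOLUME_ID@snapshot-SNAPSHOT_ID"

if command -v rbd >/dev/null 2>&1; then
    $unprotect_cmd
else
    echo "rbd client not installed; command shown for reference:"
    echo "$unprotect_cmd"
fi
```

After the snapshot is unprotected, deleting it through OpenStack should succeed as described above.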

4.2.1.9. Upload a Volume to the Image Service

You can upload an existing volume as an image to the Image service directly. To do so:

1. In the dashboard, select Project > Compute > Volumes.
2. Select the target volume's Upload to Image action.
3. Provide an Image Name for the volume and select a Disk Format from the list.
4. Click Upload. The QEMU disk image utility uploads a new image of the chosen format using the name you provided.

To view the uploaded image, select Project > Compute > Images. The new image appears in the Images table. For information on how to use and configure images, see Section 4.1, "Manage Images".

4.2.2. Back Up and Restore a Volume

A volume backup is a full, persistent copy of a volume's contents. Volume backups are typically created as object stores, and therefore are managed through the Object Storage service.

When creating a volume backup, all of the backup's metadata is stored in the Block Storage service's database. The cinder utility uses this metadata when restoring a volume from the backup. As such, when recovering from a catastrophic database loss, you must restore the Block Storage service's database first before restoring any volumes from backups. This also presumes that the Block Storage service database is being restored with all the original volume backup metadata intact.

If you wish to configure only a subset of volume backups to survive a catastrophic database loss, you can also export the backup's metadata. In doing so, you can then re-import the metadata to the Block Storage database later on, and restore the volume backup as normal.

Note: Volume backups are different from snapshots. Backups preserve the data contained in the volume, whereas snapshots preserve the state of a volume at a specific point in time. In addition, you cannot delete a volume if it has existing snapshots. Volume backups are used to prevent data loss, whereas snapshots are used to facilitate cloning. For more information about volume snapshots, refer to Section 4.2.1.8, "Create, Clone, or Delete Volume Snapshots".


4.2.2.1. Create a Volume Backup

1. As a user with administrative privileges, view the ID or Display Name of the volume you wish to back up:

# cinder list

2. Back up the volume:

# cinder backup-create VOLUME

Where VOLUME is the ID or Display Name of the volume you wish to back up.

Example 4.3.

# cinder backup-create volumename
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | e9d15fc7-eeae-4ca4-aa72-d52536dc551d |
|    name   |                 None                 |
| volume_id | 5f75430a-abff-4cc7-b74e-f808234fa6c5 |
+-----------+--------------------------------------+

Note that the volume_id of the resulting backup is identical to the ID of volumename.

3. Verify that the volume backup creation is complete:

# cinder backup-list

The volume backup creation is complete when the Status of the backup entry is available.

At this point, you can also export and store the volume backup's metadata. This allows you to restore the volume backup, even if the Block Storage database suffers a catastrophic loss. To do so, run:

# cinder --os-volume-api-version 2 backup-export BACKUPID

Where BACKUPID is the ID or name of the volume backup.

Example 4.4.


# cinder --os-volume-api-version 2 backup-export e9d15fc7-eeae-4ca4-aa72-d52536dc551d
+----------------+-----------------------------+
|    Property    |            Value            |
+----------------+-----------------------------+
| backup_service | cinder.backup.drivers.swift |
|   backup_url   |       eyJzdGF0dXMi...       |
|                |       ...c2l6ZSI6IDF9       |
+----------------+-----------------------------+

The volume backup metadata consists of the backup_service and backup_url values.
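The truncated backup_url values shown above (eyJzdGF0dXMi...c2l6ZSI6IDF9) are base64-encoded JSON. The following sketch demonstrates the encoding on a small sample document; to inspect a real export, pipe its backup_url through the same base64 -d step:

```shell
# Encode a sample JSON document the same way a backup_url is encoded ...
sample_url=$(printf '%s' '{"status": "available", "size": 1}' | base64 | tr -d '\n')
echo "$sample_url"

# ... and decode it again for inspection.
decoded=$(printf '%s' "$sample_url" | base64 -d)
echo "$decoded"   # prints {"status": "available", "size": 1}
```

Note that the sample encodes to a string beginning eyJzdGF0dXMi, matching the prefix shown in the example output above.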

4.2.2.2. Restore a Volume After a Block Storage Database Loss

Typically, a Block Storage database loss prevents you from restoring a volume backup. This is because the Block Storage database contains metadata required by the volume backup. This metadata consists of backup_service and backup_url values, which you can export after creating the volume backup (as shown in Section 4.2.2.1, "Create a Volume Backup").

If you exported and stored this metadata, then you can import it to a new Block Storage database (thereby allowing you to restore the volume backup).

1. As a user with administrative privileges, run:

# cinder --os-volume-api-version 2 backup-import backup_service backup_url

Where backup_service and backup_url are from the metadata you exported.

Example 4.5. Using the exported sample metadata from Section 4.2.2.1, "Create a Volume Backup":

# cinder --os-volume-api-version 2 backup-import cinder.backup.drivers.swift eyJzdGF0dXMi...c2l6ZSI6IDF9
+----------+--------------------------------------+
| Property |                Value                 |


+----------+--------------------------------------+
|    id    | 77951e2f-4aff-4365-8c64-f833802eaa43 |
|   name   |                 None                 |
+----------+--------------------------------------+

2. After the metadata is imported into the Block Storage service database, you can restore the volume as normal (see Section 4.2.2.3, "Restore a Volume from a Backup").

4.2.2.3. Restore a Volume from a Backup

1. As a user with administrative privileges, find the ID of the volume backup you wish to use:

# cinder backup-list

The Volume ID should match the ID of the volume you wish to restore.

2. Restore the volume backup:

# cinder backup-restore BACKUP_ID

Where BACKUP_ID is the ID of the volume backup you wish to use.

3. If you no longer need the backup, delete it:

# cinder backup-delete BACKUP_ID

4.2.3. Migrate a Volume

Only an administrator can migrate volumes; volumes to be migrated cannot be in use, nor can they have any snapshots.

1. As an administrative user, list all available volumes:

# cinder list

2. List the available back ends (hosts) and their respective availability zones:

# cinder-manage host list

3. Initiate the migration:

# cinder migrate VOLUME BACKEND


Where:

VOLUME is the ID of the volume to be migrated.
BACKEND is the back end to where the volume should be migrated.

4. View the current status of the volume to be migrated:

# cinder show VOLUME

Example 4.6.

# cinder show 45a85c3c-3715-484d-ab5d-745da0e0bd5a
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|              ...               |                 ...                  |
|     os-vol-host-attr:host      |               server1                |
| os-vol-mig-status-attr:migstat |                 None                 |
|              ...               |                 ...                  |
+--------------------------------+--------------------------------------+

During migration, note the following attributes:

os-vol-host-attr:host
  The volume's current back end. Once the migration completes, this displays the target back end (namely, BACKEND).

os-vol-mig-status-attr:migstat
  The status of the migration. A status of None means a migration is no longer in progress.

4.2.4. Group Volume Settings with Volume Types

OpenStack allows you to create volume types, which allow you to apply the type's associated settings when creating a volume (Section 4.2.1.1, "Create a Volume"). For example, you can associate:


Whether or not a volume is encrypted (Section 4.2.6.2, "Configure Volume Type Encryption")
Which back end a volume should use (Section 4.2.1.2, "Specify Back End for Volume Creation")
Quality-of-Service (QoS) Specs

Settings are associated with volume types using key-value pairs called Extra Specs. When you specify a volume type during volume creation, the Block Storage scheduler applies these key/value pairs as settings. You can associate multiple key/value pairs to the same volume type.

Example 4.7. Volume types make it possible to provide different users with storage tiers. By associating specific performance, resilience, and other settings as key/value pairs to a volume type, you can map tier-specific settings to different volume types. You can then apply tier settings when creating a volume by specifying the corresponding volume type.

Note: Available and supported Extra Specs vary per volume driver. Consult your volume driver's documentation for a list of valid Extra Specs.

4.2.4.1. Create and Configure a Volume Type

1. As an admin user in the dashboard, select Admin > Volumes > Volume Types.
2. Click Create Volume Type.
3. Enter the volume type name in the Name field.
4. Click Create Volume Type. The new type appears in the Volume Types table.
5. Select the volume type's View Extra Specs action.
6. Click Create, and specify the Key and Value. The key/value pair must be valid; otherwise, specifying the volume type during volume creation will result in an error.
7. Click Create. The associated setting (key/value pair) now appears in the Extra Specs table.
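The same steps have command-line equivalents. A sketch using a hypothetical type name and extra spec (the commands only execute where the cinder client is installed):

```shell
# Hypothetical volume type and extra spec tying it to a back end name.
type_name="gold"
extra_spec="volume_backend_name=LVM_iSCSI"

if command -v cinder >/dev/null 2>&1; then
    cinder type-create "$type_name"
    cinder type-key "$type_name" set "$extra_spec"
    cinder extra-specs-list
else
    echo "cinder client not installed; commands shown for reference"
fi
```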


Note: You can also associate a QOS Spec to the volume type. For details, refer to Section 4.2.5.2, "Associate a QOS Spec with a Volume Type".

4.2.4.2. Edit a Volume Type

1. As an admin user in the dashboard, select Admin > Volumes > Volume Types.
2. In the Volume Types table, select the volume type's View Extra Specs action.
3. On the Extra Specs table of this page, you can:

Add a new setting to the volume type. To do this, click Create, and specify the key/value pair of the new setting you wish to associate to the volume type.
Edit an existing setting associated with the volume type. To do this, select the setting's Edit action.
Delete existing settings associated with the volume type. To do this, select the extra specs' check box and click Delete Extra Specs in this and the next dialog screen.

4.2.4.3. Delete a Volume Type

To delete a volume type, select its corresponding checkboxes from the Volume Types table and click Delete Volume Types.

4.2.5. Use Quality-of-Service Specifications

You can map multiple performance settings to a single Quality-of-Service specification (QOS Specs). Doing so allows you to provide performance tiers for different user types.

Performance settings are mapped as key/value pairs to QOS Specs, similar to the way volume settings are associated to a volume type. However, QOS Specs are different from volume types in the following respects:

QOS Specs are used to apply performance settings, which include limiting read/write operations to disks. Available and supported performance settings vary per storage driver. To determine which QOS Specs are supported by your back end, consult the documentation of your back end device's volume driver.


Volume types are directly applied to volumes, whereas QOS Specs are not. Rather, QOS Specs are associated to volume types. During volume creation, specifying a volume type also applies the performance settings mapped to the volume type's associated QOS Specs.

4.2.5.1. Create and Configure a QOS Spec

As an administrator, you can create and configure a QOS Spec through the QOS Specs table. You can associate more than one key/value pair to the same QOS Spec.

1. As an admin user in the dashboard, select Admin > Volumes > Volume Types.
2. On the QOS Specs table, click Create QOS Spec.
3. Enter a name for the QOS Spec.
4. In the Consumer field, specify where the QOS policy should be enforced:

Table 4.1. Consumer Types

back-end
  QOS policy will be applied to the Block Storage back end.

front-end
  QOS policy will be applied to Compute.

both
  QOS policy will be applied to both Block Storage and Compute.

5. Click Create. The new QOS Spec should now appear in the QOS Specs table.
6. In the QOS Specs table, select the new spec's Manage Specs action.
7. Click Create, and specify the Key and Value. The key/value pair must be valid; otherwise, specifying a volume type associated with this QOS Spec during volume creation will fail.
8. Click Create. The associated setting (key/value pair) now appears in the Key-Value Pairs table.
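A command-line sketch of the same procedure; the spec name and IOPS limits are hypothetical, and consumer=front-end mirrors the consumer types in the table above (the commands only execute where the cinder client is installed):

```shell
qos_name="high-iops"   # hypothetical QOS Spec name

if command -v cinder >/dev/null 2>&1; then
    cinder qos-create "$qos_name" consumer=front-end read_iops_sec=2000 write_iops_sec=1000
    cinder qos-list
else
    echo "cinder client not installed; commands shown for reference"
fi
```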

4.2.5.2. Associate a QOS Spec with a Volume Type

As an administrator, you can associate a QOS Spec to an existing volume type using the Volume Types table.


1. As an administrator in the dashboard, select Admin > Volumes > Volume Types.
2. In the Volume Types table, select the type's Manage QOS Spec Association action.
3. Select a QOS Spec from the QOS Spec to be associated list.
4. Click Associate. The selected QOS Spec now appears in the Associated QOS Spec column of the edited volume type.

4.2.5.3. Disassociate a QOS Spec from a Volume Type

1. As an administrator in the dashboard, select Admin > Volumes > Volume Types.
2. In the Volume Types table, select the type's Manage QOS Spec Association action.
3. Select 'None' from the QOS Spec to be associated list.
4. Click Associate. The selected QOS Spec is no longer in the Associated QOS Spec column of the edited volume type.

4.2.6. Encrypt Volumes with Static Keys

Volume encryption helps provide basic data protection in case the volume back end is either compromised or outright stolen. The contents of an encrypted volume can only be read with the use of a specific key; both Compute and Block Storage services must be configured to use the same key in order for instances to use encrypted volumes. This section describes how to configure an OpenStack deployment to use a single key for encrypting volumes.

4.2.6.1. Configure a Static Key

The first step in implementing basic volume encryption is to set a static key. This key must be a hex string, which will be used by the Block Storage volume service (namely, openstack-cinder-volume) and all Compute services (openstack-nova-compute). To configure both services to use this key, set the key as the fixed_key value in the [keymgr] section of both services' respective configuration files.

1. From the command line, log in as root to the node hosting openstack-cinder-volume.
2. Set the static key:

# openstack-config --set /etc/cinder/cinder.conf keymgr fixed_key HEX_KEY


Replace HEX_KEY with a hexadecimal key (for example, a string of 64 zeros: 0000000000000000000000000000000000000000000000000000000000000000).

3. Restart the Block Storage volume service:

# openstack-service restart cinder-volume

4. Log in to the node hosting openstack-nova-compute, and set the same static key:

# openstack-config --set /etc/nova/nova.conf keymgr fixed_key HEX_KEY

Note: If you have multiple Compute nodes (multiple nodes hosting openstack-nova-compute), then you need to set the same static key in /etc/nova/nova.conf on each node.

5. Restart the Compute service:

# openstack-service restart nova-compute

Note: Likewise, if you set the static key on multiple Compute nodes, you need to restart the openstack-nova-compute service on each node as well.

At this point, both Compute and Block Storage volume services can now use the same static key to encrypt/decrypt volumes. That is, new instances will be able to use volumes encrypted with the static key (HEX_KEY).

4.2.6.2. Configure Volume Type Encryption

To create volumes encrypted with the static key from Section 4.2.6.1, "Configure a Static Key", you need an encrypted volume type. Configuring a volume type as encrypted involves setting what provider class, cipher, and key size it should use. To do so, run:

# cinder encryption-type-create --cipher aes-xts-plain64 --key_size BITSIZE --control_location front-end VOLTYPE nova.volume.encryptors.luks.LuksEncryptor

Where:

BITSIZE is the key size (for example, 512 for a 512-bit key).


VOLTYPE is the name of the volume type you want to encrypt.

This command sets the nova.volume.encryptors.luks.LuksEncryptor provider class and the aes-xts-plain64 cipher. As of this release, this is the only supported class/cipher configuration for volume encryption.

Once you have an encrypted volume type, you can invoke it to automatically create encrypted volumes. Specifically, select the encrypted volume type from the Type drop-down list in the Create Volume window (see Section 4.2.1, "Basic Volume Usage and Configuration").
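A concrete invocation with a hypothetical volume type named LUKS and a 512-bit key (it only executes where the cinder client is installed):

```shell
voltype="LUKS"   # hypothetical volume type; it must already exist

if command -v cinder >/dev/null 2>&1; then
    cinder encryption-type-create --cipher aes-xts-plain64 \
        --key_size 512 --control_location front-end \
        "$voltype" nova.volume.encryptors.luks.LuksEncryptor
else
    echo "cinder client not installed; command shown for reference"
fi
```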

4.2.7. Configure How Volumes are Allocated to Multiple Back Ends

If the Block Storage service is configured to use multiple back ends, you can use configured volume types to specify where a volume should be created. For details, see Section 4.2.1.2, "Specify Back End for Volume Creation".

The Block Storage service will automatically choose a back end if you do not specify one during volume creation. Block Storage sets the first defined back end as a default; this back end will be used until it runs out of space. At that point, Block Storage will set the second defined back end as a default, and so on.

If this is not suitable for your needs, you can use the filter scheduler to control how Block Storage should select back ends. This scheduler can use different filters to triage suitable back ends, such as:

AvailabilityZoneFilter
  Filters out all back ends that do not meet the availability zone requirements of the requested volume.

CapacityFilter
  Selects only back ends with enough space to accommodate the volume.

CapabilitiesFilter
  Selects only back ends that can support any specified settings in the volume.

To configure the filter scheduler:

1. Enable the FilterScheduler driver:

# openstack-config --set /etc/cinder/cinder.conf DEFAULT scheduler_driver cinder.scheduler.filter_scheduler.FilterScheduler

2. Set which filters should be active:


# openstack-config --set /etc/cinder/cinder.conf DEFAULT scheduler_default_filters AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter

3. Configure how the scheduler should select a suitable back end.

To always choose the back end with the most available free space, run:

# openstack-config --set /etc/cinder/cinder.conf DEFAULT scheduler_default_weighers AllocatedCapacityWeigher
# openstack-config --set /etc/cinder/cinder.conf DEFAULT allocated_capacity_weight_multiplier -1.0

To choose randomly among all suitable back ends, run: # openstack-config --set /etc/cinder/cinder.conf DEFAULT scheduler_default_weighers ChanceWeigher

4. Restart the Block Storage scheduler to apply your settings: # openstack-service restart openstack-cinder-scheduler
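After the steps above, the scheduler-related settings in the [DEFAULT] section of /etc/cinder/cinder.conf should look similar to the following sketch (shown with the AllocatedCapacityWeigher variant; your file will contain many other settings):

```ini
[DEFAULT]
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers = AllocatedCapacityWeigher
allocated_capacity_weight_multiplier = -1.0
```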

4.3. MANAGE CONTAINERS

OpenStack Object Storage (swift) stores its objects (data) in containers, which are similar to directories in a file system, although they cannot be nested. Containers provide an easy way for users to store any kind of unstructured data; for example, objects might include photos, text files, or images. Stored objects are neither encrypted nor compressed.

To help with organization, pseudo-folders are logical devices that can contain objects (and can be nested). For example, you might create an 'Images' folder in which to store pictures and a 'Media' folder in which to store videos.

You can create one or more containers in each project, and one or more objects or pseudo-folders in each container.

4.3.1. Create a Container

1. In the dashboard, select Project > Object Store > Containers.
2. Click Create Container.
3. Specify the Container Name, and select one of the following in the Container Access field.


CHAPTER 4. IMAGES AND STORAGE

Private
    Limits access to a user in the current project.
Public
    Permits API access to anyone with the public URL. However, in the dashboard, project users cannot see public containers and data from other projects.

4. Click Create Container.

4.3.2. Create Pseudo Folder for Container

1. In the dashboard, select Project > Object Store > Containers.
2. Click the name of the container to which you want to add the pseudo-folder.
3. Click Create Pseudo-folder.
4. Specify the name in the Pseudo-folder Name field, and click Create.

4.3.3. Upload an Object

If you do not upload an actual file, the object is still created (as a placeholder) and can later be used to upload the file.

1. In the dashboard, select Project > Object Store > Containers.
2. Click the name of the container in which the uploaded object will be placed; if a pseudo-folder already exists in the container, you can click its name.
3. Browse for your file, and click Upload Object.
4. Specify a name in the Object Name field:
   Pseudo-folders can be specified in the name using a '/' character (for example, 'Images/myImage.jpg'). If the specified folder does not already exist, it is created when the object is uploaded.
   A name that is not unique to the location (that is, the object already exists) overwrites the object's contents.
5. Click Upload Object.

4.3.4. Copy an Object



1. In the dashboard, select Project > Object Store > Containers.
2. Click the name of the object's container or folder (to display the object).
3. Click Upload Object.
4. Browse for the file to be copied, and select Copy in its arrow menu.
5. Specify the following:

Destination container
    Target container for the new object.
Path
    Pseudo-folder in the destination container; if the folder does not already exist, it is created.
Destination object name
    New object's name. If you use a name that is not unique to the location (that is, the object already exists), it overwrites the object's previous contents.

6. Click Copy Object.

4.3.5. Delete an Object

1. In the dashboard, select Project > Object Store > Containers.
2. Browse for the object, and select Delete Object in its arrow menu.
3. Click Delete Object to confirm the object's removal.

4.3.6. Delete a Container

1. In the dashboard, select Project > Object Store > Containers.
2. Browse for the container in the Containers section, and ensure all objects have been deleted (see Section 4.3.5, “Delete an Object”).
3. Select Delete Container in the container's arrow menu.
4. Click Delete Container to confirm the container's removal.



CHAPTER 5. NETWORKING

OpenStack Networking (neutron) is the software-defined networking component of RHEL OpenStack Platform. The virtual network infrastructure enables connectivity between instances and the physical external network.

5.1. MANAGE NETWORK RESOURCES

Add and remove OpenStack Networking resources such as subnets and routers to suit your RHEL OpenStack Platform deployment.

5.1.1. Add a Network

Create a network to give your instances a place to communicate with each other and receive IP addresses using DHCP. A network can also be integrated with external networks in your RHEL OpenStack Platform deployment or elsewhere, such as the physical network. This integration allows your instances to communicate with, and be reachable by, outside systems. To integrate your network with your physical external network, see Section 5.3, “Bridge the physical network”.

When creating networks, it is important to know that networks can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and would prefer a measure of isolation between them. For example, you can designate that only web server traffic is present on one subnet, while database traffic traverses another. Subnets are isolated from each other, and any instance that wishes to communicate with another subnet must have its traffic directed by a router. Consider placing systems that will require a high volume of traffic amongst themselves in the same subnet, so that they don't require routing, and can avoid the resulting latency and load.

5.1.1.1. Create a Network

1. In the dashboard, select Project > Network > Networks.
2. Click + Create Network.
3. Specify the following:

Network Name
    Descriptive name, based on the role that the network will perform. If you are integrating the network with an external VLAN, consider appending the VLAN ID number to the name. Examples: webservers_122, if you are hosting HTTP web servers in this subnet and your VLAN tag is 122; internal-only, if you intend to keep the network traffic private and not integrate it with an external network.
Admin State
    Controls whether the network is immediately available. This field allows you to create the network but still keep it in a Down state, where it is logically present but still inactive. This is useful if you do not intend to enter the network into production right away.

4. Click the Next button, and specify the following in the Subnet tab:

Create Subnet
    Determines whether a subnet is created. For example, you might not want to create a subnet if you intend to keep this network as a placeholder without network connectivity.
Subnet Name
    Descriptive name.
Network Address
    Address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24.
IP Version
    Internet protocol, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever version you select.
Gateway IP
    IP address of the router interface for your default gateway. This address is the next hop for routing any traffic destined for an external location, and must be within the range specified in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1.
Disable Gateway
    Disables forwarding and keeps the network isolated.
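The masked-bit calculation described under Network Address can be checked with a short shell function. This is an illustrative sketch (mask2prefix is a hypothetical helper name, not an OpenStack tool); it counts the set bits in a dotted-quad netmask to produce the CIDR prefix length:

```shell
# Count the set bits in a dotted-quad netmask to get the CIDR prefix length.
mask2prefix() {
  local IFS=. octet bits=0
  for octet in $1; do
    while [ "$octet" -gt 0 ]; do
      bits=$((bits + octet % 2))   # add the lowest bit
      octet=$((octet / 2))         # shift right
    done
  done
  echo "$bits"
}

mask2prefix 255.255.255.0   # prints 24, so 192.168.122.0 becomes 192.168.122.0/24
```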


5. Click Next to specify DHCP options:

Enable DHCP
    Enables DHCP services for this subnet. DHCP allows you to automate the distribution of IP settings to your instances.
IPv6 Address Configuration Mode
    If creating an IPv6 network, specifies how IPv6 addresses and additional information are allocated:
    No Options Specified - Select this option if IP addresses are set manually, or a non OpenStack-aware method is used for address allocation.
    SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. This configuration results in an OpenStack Networking subnet created with ra_mode set to slaac and address_mode set to slaac.
    DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for example, DNS) from the OpenStack Networking DHCPv6 service. This configuration results in a subnet created with ra_mode set to dhcpv6-stateful and address_mode set to dhcpv6-stateful.
    DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. This configuration results in a subnet created with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless.
Allocation Pools
    Range of IP addresses you would like DHCP to assign. For example, the value 192.168.22.100,192.168.22.100 considers all 'up' addresses in that range as available for allocation.
DNS Name Servers
    IP addresses of the DNS servers available on the network. DHCP distributes these addresses to the instances for name resolution.
Host Routes
    Static host routes. First specify the destination network in CIDR format, followed by the next hop that should be used for routing. For example: 192.168.23.0/24, 10.1.31.1. Provide this value if you need to distribute static routes to instances.

6. Click Create.



The completed network is available for viewing in the Networks tab. You can also click Edit to change any options as needed. When you create instances, you can configure them to use this network's subnet, and they will subsequently receive any specified DHCP options.

5.1.1.2. Create an Advanced Network

Advanced network options are available for administrators when creating a network from the Admin view. These options define the network type to use, and allow tenants to be specified:

1. In the dashboard, select Admin > Networks > Create Network > Project.
2. Select a destination project to host the new network using Project.
3. Review the options in Provider Network Type:
   Local - Traffic remains on the local Compute host and is effectively isolated from any external networks.
   Flat - Traffic remains on a single network and can also be shared with the host. No VLAN tagging or other network segregation takes place.
   VLAN - Create a network using a VLAN ID that corresponds to a VLAN present in the physical network. Allows instances to communicate with systems on the same layer 2 VLAN.
   GRE - Use a network overlay that spans multiple nodes for private communication between instances. Traffic egressing the overlay must be routed.
   VXLAN - Use a network overlay that spans multiple nodes for private communication between instances. Traffic egressing the overlay must be routed.
4. Click Create Network, and review the Project's Network Topology to validate that the network has been successfully created.
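The provider network types offered here depend on what the Networking plug-in has enabled. As a hedged illustration only (the file path, enabled drivers, and the physnet1 name all vary by deployment), an ML2 plug-in configuration enabling these types might resemble:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch; adjust to your deployment)
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan

[ml2_type_vlan]
# physnet1 is a placeholder physical network name
network_vlan_ranges = physnet1:171:172
```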

5.1.1.3. Add Network Routing

To allow traffic to be routed to and from your new network, you must add its subnet as an interface to an existing virtual router:

1. In the dashboard, select Project > Network > Routers.
2. Click on your virtual router's name in the Routers list, and click + Add Interface.
3. In the Subnet list, select the name of your new subnet.
4. You can optionally specify an IP address for the interface in this field.



5. Click Add Interface.

Instances on your network are now able to communicate with systems outside the subnet.

5.1.2. Delete a Network

There are occasions where it becomes necessary to delete a network that was previously created, perhaps as housekeeping or as part of a decommissioning process. To successfully delete a network, you must first remove or detach any interfaces where it is still in use. The following procedure provides the steps for deleting a network in your project, together with any dependent interfaces.

1. In the dashboard, select Project > Network > Networks.
2. Remove all router interfaces associated with the target network's subnets. To remove an interface:
   a. Find the ID number of the network you would like to delete by clicking on your target network in the Networks list, and looking at its ID field. All the network's associated subnets share this value in their Network ID field.
   b. Select Project > Network > Routers, click on your virtual router's name in the Routers list, and locate the interface attached to the subnet you would like to delete. You can distinguish it from the others by the IP address that would have served as the gateway IP. In addition, you can further validate the distinction by ensuring that the interface's network ID matches the ID you noted in the previous step.
   c. Click the interface's Delete Interface button.
3. Select Project > Network > Networks, and click the name of your network. Click the target subnet's Delete Subnet button.

Note: If you are still unable to remove the subnet at this point, ensure it is not already being used by any instances.

4. Select Project > Network > Networks, and select the network you would like to delete.
5. Click Delete Networks in this and the next dialog screen.

5.1.3. Create a Subnet



Subnets are the means by which instances are granted network connectivity. Each instance is assigned to a subnet as part of the instance creation process, so it is important to consider proper placement of instances to best accommodate their connectivity requirements.

Subnets are created in pre-existing networks. Remember that tenant networks in OpenStack Networking can host multiple subnets. This is useful if you intend to host distinctly different systems in the same network, and would prefer a measure of isolation between them. For example, you can designate that only web server traffic is present on one subnet, while database traffic traverses another. Subnets are isolated from each other, and any instance that wishes to communicate with another subnet must have its traffic directed by a router. Consider placing systems that will require a high volume of traffic amongst themselves in the same subnet, so that they don't require routing, and can avoid the resulting latency and load.

Procedure 5.1. Create a new subnet

1. In the dashboard, select Project > Network > Networks, and click your network's name in the Networks view.
2. Click Create Subnet, and specify the following.

Subnet Name
    Descriptive subnet name.
Network Address
    Address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24.
IP Version
    Internet protocol, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever version you select.
Gateway IP
    IP address of the router interface for your default gateway. This address is the next hop for routing any traffic destined for an external location, and must be within the range specified in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1.
Disable Gateway
    Disables forwarding and keeps the network isolated.

3. Click Next to specify DHCP options:



Enable DHCP
    Enables DHCP services for this subnet. DHCP allows you to automate the distribution of IP settings to your instances.
IPv6 Address Configuration Mode
    If creating an IPv6 network, specifies how IPv6 addresses and additional information are allocated:
    No Options Specified - Select this option if IP addresses are set manually, or a non OpenStack-aware method is used for address allocation.
    SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. This configuration results in an OpenStack Networking subnet created with ra_mode set to slaac and address_mode set to slaac.
    DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for example, DNS) from the OpenStack Networking DHCPv6 service. This configuration results in a subnet created with ra_mode set to dhcpv6-stateful and address_mode set to dhcpv6-stateful.
    DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. This configuration results in a subnet created with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless.
Allocation Pools
    Range of IP addresses you would like DHCP to assign. For example, the value 192.168.22.100,192.168.22.100 considers all 'up' addresses in that range as available for allocation.
DNS Name Servers
    IP addresses of the DNS servers available on the network. DHCP distributes these addresses to the instances for name resolution.
Host Routes
    Static host routes. First specify the destination network in CIDR format, followed by the next hop that should be used for routing. For example: 192.168.23.0/24, 10.1.31.1. Provide this value if you need to distribute static routes to instances.

4. Click Create.

The new subnet is available for viewing in your network's Subnets list. You can also click Edit to change any options as needed. When you create instances, you can configure them to use this subnet, and they will subsequently receive any specified DHCP options.



5.1.4. Delete a Subnet

You can delete a subnet if it is no longer in use. However, if any instances are still configured to use the subnet, the deletion attempt fails and the dashboard displays an error message. This procedure demonstrates how to delete a specific subnet in a network:

1. In the dashboard, select Project > Network > Networks, and click the name of your network.
2. Select the target subnet and click Delete Subnets.

5.1.5. Add a router

OpenStack Networking provides routing services using an SDN-based virtual router. Routers are a requirement for your instances to communicate with external subnets, including those out in the physical network. Routers and subnets connect using interfaces, with each subnet requiring its own interface to the router.

A router's default gateway defines the next hop for any traffic received by the router. Its network is typically configured to route traffic to the external physical network using a virtual bridge.

1. In the dashboard, select Project > Network > Routers, and click + Create Router.
2. Enter a descriptive name for the new router, and click Create router.
3. Click Set Gateway next to the new router's entry in the Routers list.
4. In the External Network list, specify the network that will receive traffic destined for an external location.
5. Click Set Gateway.

After adding a router, the next step is to configure any subnets you have created to send traffic using this router. You do this by creating interfaces between the subnet and the router (see Section 5.1.7, “Add an interface”).

5.1.6. Delete a router

You can delete a router if it has no connected interfaces. This procedure describes the steps needed to first remove a router's interfaces, and then the router itself.



1. In the dashboard, select Project > Network > Routers, and click on the name of the router you would like to delete.
2. Select the interfaces of type Internal Interface.
3. Click Delete Interfaces.
4. From the Routers list, select the target router and click Delete Routers.

5.1.7. Add an interface

Interfaces allow you to interconnect routers with subnets. As a result, the router can direct any traffic that instances send to destinations outside of their immediate subnet. This procedure adds a router interface and connects it to a subnet. The procedure uses the Network Topology feature, which displays a graphical representation of all your virtual routers and networks and enables you to perform network management tasks.

1. In the dashboard, select Project > Network > Network Topology.
2. Locate the router you wish to manage, hover your mouse over it, and click Add Interface.
3. Specify the Subnet to which you would like to connect the router.
4. You have the option of specifying an IP Address. The address is useful for testing and troubleshooting purposes, since a successful ping to this interface indicates that the traffic is routing as expected.
5. Click Add interface.

The Network Topology diagram automatically updates to reflect the new interface connection between the router and subnet.

5.1.8. Delete an interface

You can remove an interface to a subnet if you no longer require the router to direct its traffic. This procedure demonstrates the steps required for deleting an interface:

1. In the dashboard, select Project > Network > Routers.
2. Click on the name of the router that hosts the interface you would like to delete.
3. Select the interface (it will be of type Internal Interface), and click Delete Interfaces.



5.2. CONFIGURE IP ADDRESSING

You can use the procedures in this section to manage your IP address allocation in OpenStack Networking.

5.2.1. Create Floating IP Pools

Floating IP addresses allow you to direct ingress network traffic to your OpenStack instances. You begin by defining a pool of validly routable external IP addresses, which can then be dynamically assigned to an instance. OpenStack Networking then knows to route all incoming traffic destined for that floating IP to the instance to which it has been assigned.

Note: OpenStack Networking allocates floating IP addresses to all projects (tenants) from the same IP ranges/CIDRs. This means that every subnet of floating IPs is consumable by any and all projects. You can manage this behavior using quotas for specific projects. For example, you can set the default to 10 for ProjectA and ProjectB, while setting ProjectC's quota to 0.

The floating IP allocation pool is defined when you create an external subnet. If the subnet only hosts floating IP addresses, consider disabling DHCP allocation with the --enable_dhcp=False option:

# neutron subnet-create --name SUBNET_NAME --enable_dhcp=False --allocation_pool start=IP_ADDRESS,end=IP_ADDRESS --gateway=IP_ADDRESS NETWORK_NAME CIDR

Example 5.1.

# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation_pool start=192.168.100.20,end=192.168.100.100 --gateway=192.168.100.1 public 192.168.100.0/24

5.2.2. Assign a Specific Floating IP

You can assign a specific floating IP address to an instance using the nova command (or through the dashboard; see Section 3.1.2, “Update an Instance (Actions menu)”).

# nova add-floating-ip INSTANCE_NAME IP_ADDRESS

Example 5.2. In this example, a floating IP address is allocated to an instance named corp-vm-01:



# nova add-floating-ip corp-vm-01 192.168.100.20

5.2.3. Assign a Random Floating IP

Floating IP addresses can be dynamically allocated to instances. You do not select a particular IP address, but instead request that OpenStack Networking allocate one from the pool.

1. Allocate a floating IP from the previously created pool:

# neutron floatingip-create public
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.100.20                       |
| floating_network_id | 7a03e6bc-234d-402b-9fb2-0af06c85a8a3 |
| id                  | 9d7e2603482d                         |
| port_id             |                                      |
| router_id           |                                      |
| status              | ACTIVE                               |
| tenant_id           | 9e67d44eab334f07bf82fa1b17d824b6     |
+---------------------+--------------------------------------+

2. With the IP address allocated, you can assign it to a particular instance. Locate the ID of the port associated with your instance (this will match the fixed IP address allocated to the instance). This port ID is used in the following step to associate the instance's port ID with the floating IP address ID. You can further distinguish the correct port ID by ensuring the MAC address in the third column matches the one on the instance.

# neutron port-list
+--------+------+-------------+--------------------------------------------------------+
| id     | name | mac_address | fixed_ips                                              |
+--------+------+-------------+--------------------------------------------------------+
| ce8320 |      | 3e:37:09:4b | {"subnet_id": "361f27", "ip_address": "192.168.100.2"} |
| d88926 |      | 3e:1d:ea:31 | {"subnet_id": "361f27", "ip_address": "192.168.100.5"} |
| 8190ab |      | 3e:a3:3d:2f | {"subnet_id": "b74dbb", "ip_address": "10.10.1.25"}    |
+--------+------+-------------+--------------------------------------------------------+

3. Use the neutron command to associate the floating IP address with the desired port ID of an instance:

# neutron floatingip-associate 9d7e2603482d 8190ab
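When scripting this lookup, you can extract a port ID from saved neutron port-list output by matching on the instance's fixed IP address. This is an illustrative sketch only (find_port_id is a hypothetical helper, and the sample rows reuse the abbreviated IDs shown in the example output above):

```shell
# Print the port ID (column 2) of the row whose fixed_ips field contains the given IP.
find_port_id() {
  awk -v ip="$1" -F'|' '$0 ~ "\"ip_address\": \"" ip "\"" { gsub(/ /, "", $2); print $2 }'
}

# Sample port-list rows (abbreviated IDs, as in the output above):
find_port_id 192.168.100.5 <<'EOF'
| ce8320 |      | 3e:37:09:4b | {"subnet_id": "361f27", "ip_address": "192.168.100.2"} |
| d88926 |      | 3e:1d:ea:31 | {"subnet_id": "361f27", "ip_address": "192.168.100.5"} |
EOF
# prints d88926
```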

5.2.4. Create Multiple Floating IP Pools

OpenStack Networking supports one floating IP pool per L3 agent. Therefore, scaling out your L3 agents allows you to create additional floating IP pools.

Note: Ensure that handle_internal_only_routers in /etc/neutron/neutron.conf is set to True for only one L3 agent in your environment. This option configures the L3 agent to manage only non-external routers.

5.3. BRIDGE THE PHYSICAL NETWORK

The procedure below enables you to bridge your virtual network to the physical network to enable connectivity to and from virtual instances. In this procedure, the example physical eth0 interface is mapped to the br-ex bridge; the virtual bridge acts as the intermediary between the physical network and any virtual networks. As a result, all traffic traversing eth0 uses the configured Open vSwitch to reach instances.

1. Map a physical NIC to the virtual Open vSwitch bridge:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

2. Configure the virtual bridge with the IP address details that were previously allocated to eth0:

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.120.10
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.120.1
ONBOOT=yes

Here, IPADDR, NETMASK, GATEWAY, and DNS1 (name server) must be updated to match your network. You can now assign floating IP addresses to instances and make them available to the physical network.
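For br-ex to carry tenant traffic, the Open vSwitch agent must also map a physical network name to the bridge. As a hedged sketch (the configuration file path and the physnet1 name vary by deployment and plug-in):

```ini
# Open vSwitch agent configuration (sketch; physnet1 is a placeholder name)
[ovs]
bridge_mappings = physnet1:br-ex
```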



CHAPTER 6. CLOUD RESOURCES

This chapter discusses how to configure stacks and monitor cloud resources in RHEL OpenStack Platform.

6.1. MANAGE STACKS

The Orchestration service provides a framework through which you can define an instance's resource parameters (for example, floating IPs, volumes, or security groups) and properties (for example, key pairs, image to be used, or flavor) using Heat templates. These templates use a human-readable syntax and can be defined in text files (thereby allowing users to check them into version control). Templates allow you to easily deploy and re-configure infrastructure for applications within the OpenStack cloud.

Instances deployed using Heat templates through the Orchestration service are known as stacks. The dashboard allows you to launch, delete, and update stacks from Heat templates. You can input a Heat template directly into the dashboard, or use text files from your local file system or HTTP URL.

6.1.1. Download Sample Heat Templates

Red Hat Enterprise Linux OpenStack Platform includes sample templates you can use to test and study Heat's core functionality. To use these templates, install the openstack-heat-templates package:

# yum install openstack-heat-templates

This package installs the sample Heat templates in /usr/share/openstack-heat-templates/software-config/example-templates.
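For orientation, a minimal Heat template looks like the following sketch. This is an illustrative example rather than one of the installed samples; the image and flavor names are placeholders you would replace with values from your cloud:

```yaml
heat_template_version: 2013-05-23

description: Minimal single-instance stack (illustrative sketch)

parameters:
  key_name:
    type: string
    description: Name of an existing Nova key pair

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: rhel-guest-image      # placeholder image name
      flavor: m1.small             # placeholder flavor
      key_name: { get_param: key_name }

outputs:
  server_ip:
    description: First IP address of the server
    value: { get_attr: [server, first_address] }
```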

6.1.2. Launch a Stack

1. In the dashboard, select Project > Orchestration > Stacks, and click Launch Stack.
2. Select an option from the Template Source list:

File
    Use a local template file on your system. Select your file by clicking Template File > Browse.
Direct Input
    Enter your template directly into the dashboard using the Template Data field.
URL
    Use an external HTTP URL for the template. Specify the template's URL in the Template URL field.

Note: Red Hat Enterprise Linux OpenStack Platform includes sample templates. For more details, see Section 6.1.1, “Download Sample Heat Templates”.

3. Select an option from the Environment Source list:

File
    Use a .yaml file for the environment. Select your environment by clicking Environment File > Browse.
Direct Input
    Enter your environment data directly into the dashboard using the Environment Data field.

4. Click Next.
5. Specify values for the following fields:

Stack Name
    Name to identify the stack.
Creation Timeout (minutes)
    Number of minutes before declaring a timeout on the stack launch.
Rollback On Failure
    If selected, rolls back any changes or updates to the template if the stack launch fails.
Password for user USERNAME
    Temporary password for the user launching the stack.

The Lau n ch St ack window may also contain other fields, depending on the parameters defined in the template. Update these fields as required. 6. Click Launch.


6.1.3. Update a Stack

1. If stack components need to be updated, edit your original template.
2. In the dashboard, select Project > Orchestration > Stacks.
3. Select the stack's Change Stack Template action.
4. Select an option from the Template Source list:

   Option         Description
   File           Use a local template file on your system. Select your file by clicking Template File > Browse.
   Direct Input   Enter your template directly into the dashboard using the Template Data field.
   URL            Use an external HTTP URL for the template. Specify the template's URL in the Template URL field.

5. Select an option from the Environment Source list:

   Option         Description
   File           Use a .yaml file for the environment. Select your environment by clicking Environment File > Browse.
   Direct Input   Enter your environment data directly into the dashboard using the Environment Data field.

6. Click Next.
7. Specify values for the following fields:


   Field                        Description
   Creation Timeout (minutes)   Number of minutes before declaring a timeout on the stack launch.
   Rollback On Failure          If selected, rolls back any changes or updates to the template if the stack launch fails.
   Password for user USERNAME   Temporary password for the user launching the stack.

The Launch Stack window may also contain other fields, depending on the parameters defined in the template. Update these fields as required.

8. Click Update. The Orchestration service re-launches the stack with the updated parameters. The Updated column in the Stacks table now reflects how long it has been since the stack was last updated.

6.1.4. Delete a Stack

You can delete a stack through the Stacks table:

1. In the dashboard, select Project > Orchestration > Stacks.
2. Select Delete Stack from the Actions column of a stack.

Note: Alternatively, you can delete multiple stacks simultaneously by selecting their respective checkboxes and clicking Delete Stacks.
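The same stack lifecycle is also available from the command line through the python-heatclient commands (heat stack-create, heat stack-update, and heat stack-delete). The sketch below only prints the invocations it would run, since executing them requires an authenticated OpenStack environment; the stack name and template path are placeholders.

```shell
# Placeholder names for this sketch; substitute your own values.
STACK_NAME=mystack
TEMPLATE=/path/to/template.yaml

# Dry run: echo the heat client invocations instead of executing them.
# Remove the leading "echo" to run them against a real environment
# (after sourcing your keystonerc credentials file).
echo heat stack-create -f "$TEMPLATE" "$STACK_NAME"
echo heat stack-update -f "$TEMPLATE" "$STACK_NAME"
echo heat stack-delete "$STACK_NAME"
```

Each command corresponds to one of the dashboard workflows above: launch, update, and delete.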

6.2. USING THE TELEMETRY SERVICE

For help with the ceilometer command, use:

# ceilometer help

For help with the subcommands, use:

# ceilometer help subcommand

6.2.1. View Existing Alarms

To list configured Telemetry alarms, use:

# ceilometer alarm-list

To list configured meters for a resource, use:

# ceilometer meter-list --query resource=UUID
+---------------------+------------+----------+-----------+----------+----------+
| Name                | Type       | Unit     | Resource  | User ID  | Project  |
+---------------------+------------+----------+-----------+----------+----------+
| cpu                 | cumulative | ns       | 5056eda...| b0e500...| f23524...|
| cpu_util            | gauge      | %        | 5056eda...| b0e500...| f23524...|
| disk.ephemeral.size | gauge      | GB       | 5056eda...| b0e500...| f23524...|
| disk.read.bytes     | cumulative | B        | 5056eda...| b0e500...| f23524...|
| ...output omitted...                                                          |
| instance            | gauge      | instance | 5056eda...| b0e500...| f23524...|
| instance:m1.tiny    | gauge      | instance | 5056eda...| b0e500...| f23524...|
| memory              | gauge      | MB       | 5056eda...| b0e500...| f23524...|
| vcpus               | gauge      | vcpu     | 5056eda...| b0e500...| f23524...|
+---------------------+------------+----------+-----------+----------+----------+

Where UUID is the resource ID for an existing resource (for example, an instance, image, or volume).

6.2.2. Configure an Alarm

To configure an alarm to activate when a threshold value is crossed, use the ceilometer alarm-threshold-create command with the following syntax:

# ceilometer alarm-threshold-create --name alarm-name [--description alarm-text] --meter-name meter-name --threshold value

Example 6.1.

To configure an alarm that activates when the average CPU utilization for an individual instance exceeds 50% for three consecutive 600s (10 minute) periods, use:

# ceilometer alarm-threshold-create --name cpu_high --description 'CPU usage high' --meter-name cpu_util --threshold 50 --comparison-operator gt --statistic avg --period 600 --evaluation-periods 3 --alarm-action 'log://' --query resource_id=5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69

In this example, the notification action is a log message.
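The --threshold/--evaluation-periods semantics can be illustrated offline: the alarm fires only when every one of the last N period averages crosses the threshold. The per-period averages below are made-up sample data, not output from a real deployment.

```shell
# Made-up average cpu_util values (%) for three consecutive 600 s periods.
AVERAGES="58.2 63.7 71.5"
THRESHOLD=50

# Fire only if *all* evaluation periods exceed the threshold, mirroring
# --comparison-operator gt --evaluation-periods 3 in the example above.
STATE=$(echo "$AVERAGES" | awk -v t="$THRESHOLD" \
    '{ for (i = 1; i <= NF; i++) if ($i <= t) { print "ok"; exit } print "alarm" }')
echo "$STATE"   # -> alarm
```

If any single period average dipped to or below 50, the evaluation would report "ok" instead and the alarm would stay inactive.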

To edit an existing threshold alarm, use the ceilometer alarm-threshold-update command together with the alarm ID, followed by one or more options to be updated.

Example 6.2.

To increase the alarm threshold to 75%, use:

# ceilometer alarm-threshold-update 35addb25-d488-4a74-a038-076aad3a3dc3 --threshold=75

6.2.3. Disable or Delete an Alarm

To disable an alarm, use:

# ceilometer alarm-threshold-update --enabled False ALARM_ID

To delete an alarm, use:

# ceilometer alarm-delete ALARM_ID

6.2.4. View Samples

To list all the samples for a particular meter name, use:

# ceilometer sample-list --meter METER_NAME

To list samples only for a particular resource within a range of time stamps, use:

# ceilometer sample-list --meter METER_NAME --query 'resource_id=INSTANCE_ID;timestamp>START_TIME;timestamp<=END_TIME'

Where START_TIME and END_TIME are time stamps in the form YYYY-MM-DDThh:mm:ss.
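One convenient way to produce correctly formatted time stamps is GNU date, for example when scripting a "last hour" query. The sketch below assumes GNU date (standard on RHEL); the resource ID is the placeholder UUID reused from the examples in this section.

```shell
# Placeholder resource ID (an instance UUID from this section's examples).
RESOURCE_ID=5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69

# GNU date produces the required YYYY-MM-DDThh:mm:ss form directly.
END_TIME=$(date -u +%Y-%m-%dT%H:%M:%S)
START_TIME=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S)

# Compose the query string expected by ceilometer sample-list --query.
QUERY="resource_id=${RESOURCE_ID};timestamp>${START_TIME};timestamp<=${END_TIME}"
echo "$QUERY"
```

The resulting string can be passed as-is: ceilometer sample-list --meter cpu --query "$QUERY".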

Example 6.3.

To query an instance for samples taken between 13:10:00 and 14:25:00, use:

# ceilometer sample-list --meter cpu --query 'resource_id=5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69;timestamp>2015-01-12T13:10:00;timestamp<=2015-01-12T14:25:00'
+-------------------+------+------------+---------------+------+---------------------+
| Resource ID       | Name | Type       | Volume        | Unit | Timestamp           |
+-------------------+------+------------+---------------+------+---------------------+
| 5056eda6-8a24-... | cpu  | cumulative | 3.5569e+11    | ns   | 2015-01-12T14:21:44 |
| 5056eda6-8a24-... | cpu  | cumulative | 3.0041e+11    | ns   | 2015-01-12T14:11:45 |
| 5056eda6-8a24-... | cpu  | cumulative | 2.4811e+11    | ns   | 2015-01-12T14:01:54 |
| 5056eda6-8a24-... | cpu  | cumulative | 1.3743e+11    | ns   | 2015-01-12T13:30:54 |
| 5056eda6-8a24-... | cpu  | cumulative | 84710000000.0 | ns   | 2015-01-12T13:20:54 |
| 5056eda6-8a24-... | cpu  | cumulative | 31170000000.0 | ns   | 2015-01-12T13:10:54 |
+-------------------+------+------------+---------------+------+---------------------+
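The cpu meter is cumulative (nanoseconds of CPU time consumed), whereas cpu_util is a gauge derived from it. You can reproduce the derivation by hand from two adjacent samples; the figures below come from the sample listing in this section, assuming a single-vCPU instance.

```shell
# Two adjacent cumulative cpu samples from the listing above:
#   3.0041e+11 ns at 14:11:45 and 3.5569e+11 ns at 14:21:44.
# Utilization = delta(CPU time) / delta(wall-clock time), one vCPU.
awk 'BEGIN {
    cpu_delta  = 3.5569e11 - 3.0041e11        # ns of CPU time consumed
    wall_delta = (10*60 - 1) * 1e9            # 9 m 59 s of wall clock, in ns
    printf "cpu_util ~= %.2f%%\n", 100 * cpu_delta / wall_delta
}'
# prints: cpu_util ~= 9.23%
```

This agrees with the cpu_util statistics shown later in this section (maximum 9.44%).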

6.2.5. Create a Sample

Samples can be created for sending to the Telemetry service, and they need not correspond to a previously defined meter. Use the following syntax:

# ceilometer sample-create --resource_id RESOURCE_ID --meter-name METER_NAME --meter-type METER_TYPE --meter-unit METER_UNIT --sample-volume SAMPLE_VOLUME

Where METER_TYPE can be one of:

Cumulative - a running total
Delta - a change or difference over time
Gauge - a discrete value

Example 6.4.

# ceilometer sample-create -r 5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69 -m On_Time_Mins --meter-type cumulative --meter-unit mins --sample-volume 0
+-------------------+--------------------------------------------+
| Property          | Value                                      |
+-------------------+--------------------------------------------+
| message_id        | 521f138a-9a84-11e4-8058-525400ee874f       |
| name              | On_Time_Mins                               |
| project_id        | f2352499957d4760a00cebd26c910c0f           |
| resource_id       | 5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69       |
| resource_metadata | {}                                         |
| source            | f2352499957d4760a00cebd26c910c0f:openstack |
| timestamp         | 2015-01-12T17:56:23.179729                 |
| type              | cumulative                                 |
| unit              | mins                                       |
| user_id           | b0e5000684a142bd89c4af54381d3722           |
| volume            | 0.0                                        |
+-------------------+--------------------------------------------+

Where volume, normally the value obtained as a result of the sampling action, is in this case the value being created by the command.

Note: Samples are not updated, because the moment a sample is created, it is sent to the Telemetry service. Samples are essentially messages, which is why they have a message ID. To create new samples, repeat the sample-create command and update the --sample-volume value.
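Repeating sample-create with increasing volumes is easy to script. The sketch below only prints the invocations (running them requires an authenticated environment); the resource ID and meter name are the placeholders reused from Example 6.4.

```shell
# Placeholder resource ID and meter name from Example 6.4.
RESOURCE_ID=5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69

# Dry run: print one sample-create invocation per cumulative reading.
# Remove the leading "echo" to execute against a real environment.
for VOLUME in 0 15 30 45; do
    echo ceilometer sample-create -r "$RESOURCE_ID" \
        -m On_Time_Mins --meter-type cumulative \
        --meter-unit mins --sample-volume "$VOLUME"
done
```

Each iteration would record a new cumulative On_Time_Mins sample with a larger volume.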

6.2.6. View Cloud Usage Statistics

OpenStack administrators can use the dashboard to view cloud statistics.

1. As an admin user in the dashboard, select Admin > System > Resource Usage.
2. Click one of the following:

Daily Report - View a report of daily usage per project. Select the date range and a limit for the number of projects, and click Generate Report; the daily usage report is displayed.
Stats - View a graph of metrics grouped by project. Select the values and time period using the drop-down menus; the displayed graph is automatically updated.

The ceilometer command line client can also be used for viewing cloud usage statistics.


Example 6.5.

To view all the statistics for the cpu_util meter, use:

# ceilometer statistics --meter cpu_util
+--------+----------------+----------------+------+-----+------+--------+-------+---------
| Period | Period Start   | Period End     | Max  | Min | Avg  | Sum    | Count | Dura...
+--------+----------------+----------------+------+-----+------+--------+-------+---------
| 0      | 2015-01-09T14: | 2015-01-09T14: | 9.44 | 0.0 | 6.75 | 337.94 | 50    | 2792...
+--------+----------------+----------------+------+-----+------+--------+-------+---------

Example 6.6.

Statistics can be restricted to a specific resource by means of the --query option, and restricted to a specific range by means of the timestamp option.

# ceilometer statistics --meter cpu_util --query 'resource_id=5056eda6-8a24-4f52-9cc4-c3ddb6fb4a69;timestamp>2015-01-12T13:00:00;timestamp<=2015-01-13T14:00:00'
+--------+-----------------+-----------------+------+------+------+--------+-------+---------
| Period | Period Start    | Period End      | Max  | Min  | Avg  | Sum    | Count | Dura...
+--------+-----------------+-----------------+------+------+------+--------+-------+---------
| 0      | 2015-01-12T20:1 | 2015-01-12T20:1 | 9.44 | 5.95 | 8.90 | 347.10 | 39    | 2465...
+--------+-----------------+-----------------+------+------+------+--------+-------+---------


CHAPTER 7. TROUBLESHOOTING

This chapter contains logging and support information to assist with troubleshooting your RHEL OpenStack Platform deployment.

7.1. LOGGING

RHEL OpenStack Platform writes informational messages to specific log files; you can use these messages for troubleshooting and monitoring system events.

7.1.1. Log Files for OpenStack Services

Each OpenStack component has a separate logging directory containing files specific to a running service.

Table 7.1. Block Storage (cinder) log files

   Service                   Service Name                         Log Path
   Block Storage API         openstack-cinder-api.service         /var/log/cinder/api.log
   Block Storage Backup      openstack-cinder-backup.service      /var/log/cinder/backup.log
   Informational messages    The cinder-manage command            /var/log/cinder/cinder-manage.log
   Block Storage Scheduler   openstack-cinder-scheduler.service   /var/log/cinder/scheduler.log
   Block Storage Volume      openstack-cinder-volume.service      /var/log/cinder/volume.log

Table 7.2. Compute (nova) log files

   Service                                               Service Name                         Log Path
   OpenStack Compute API service                         openstack-nova-api.service           /var/log/nova/nova-api.log
   OpenStack Compute certificate server                  openstack-nova-cert.service          /var/log/nova/nova-cert.log
   OpenStack Compute service                             openstack-nova-compute.service       /var/log/nova/nova-compute.log
   OpenStack Compute Conductor service                   openstack-nova-conductor.service     /var/log/nova/nova-conductor.log
   OpenStack Compute VNC console authentication server   openstack-nova-consoleauth.service   /var/log/nova/nova-consoleauth.log
   Informational messages                                nova-manage command                  /var/log/nova/nova-manage.log
   OpenStack Compute NoVNC Proxy service                 openstack-nova-novncproxy.service    /var/log/nova/nova-novncproxy.log
   OpenStack Compute Scheduler service                   openstack-nova-scheduler.service     /var/log/nova/nova-scheduler.log

Table 7.3. Dashboard (horizon) log files

   Service                            Service Name          Log Path
   Log of certain user interactions   Dashboard interface   /var/log/horizon/horizon.log

Table 7.4. Identity Service (keystone) log files

   Service                      Service Name                 Log Path
   OpenStack Identity Service   openstack-keystone.service   /var/log/keystone/keystone.log

Table 7.5. Image Service (glance) log files

   Service                                   Service Name                        Log Path
   OpenStack Image Service API server        openstack-glance-api.service        /var/log/glance/api.log
   OpenStack Image Service Registry server   openstack-glance-registry.service   /var/log/glance/registry.log

Table 7.6. OpenStack Networking (neutron) log files

   Service                              Service Name                        Log Path
   OpenStack Networking Layer 3 Agent   neutron-l3-agent.service            /var/log/neutron/l3-agent.log
   Open vSwitch agent                   neutron-openvswitch-agent.service   /var/log/neutron/openvswitch-agent.log
   Metadata agent service               neutron-metadata-agent.service      /var/log/neutron/metadata-agent.log
   OpenStack Networking service         neutron-server.service              /var/log/neutron/server.log

Table 7.7. Telemetry (ceilometer) log files

   Service                                   Service Name                                    Log Path
   OpenStack ceilometer notification agent   openstack-ceilometer-notification.service       /var/log/ceilometer/agent-notification.log
   OpenStack ceilometer alarm evaluation     openstack-ceilometer-alarm-evaluator.service    /var/log/ceilometer/alarm-evaluator.log
   OpenStack ceilometer alarm notification   openstack-ceilometer-alarm-notifier.service     /var/log/ceilometer/alarm-notifier.log
   OpenStack ceilometer API                  openstack-ceilometer-api.service                /var/log/ceilometer/api.log
   Informational messages                    MongoDB integration                             /var/log/ceilometer/ceilometer-dbsync.log
   OpenStack ceilometer central agent        openstack-ceilometer-central.service            /var/log/ceilometer/central.log
   OpenStack ceilometer collection           openstack-ceilometer-collector.service          /var/log/ceilometer/collector.log
   OpenStack ceilometer compute agent        openstack-ceilometer-compute.service            /var/log/ceilometer/compute.log

Table 7.8. Orchestration (heat) log files

   Service                         Service Name                    Log Path
   OpenStack Heat API Service      openstack-heat-api.service      /var/log/heat/heat-api.log
   OpenStack Heat Engine Service   openstack-heat-engine.service   /var/log/heat/heat-engine.log
   Orchestration service events    n/a                             /var/log/heat/heat-manage.log
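Given the per-service log locations above, a quick way to triage a problem is to sweep every service log for ERROR entries. The snippet demonstrates the pattern against a fabricated, throwaway log tree so it runs anywhere; on a deployed controller you would point LOG_ROOT at /var/log instead.

```shell
# Build a small fabricated log tree for demonstration purposes only.
LOG_ROOT=$(mktemp -d)
mkdir -p "$LOG_ROOT/nova" "$LOG_ROOT/cinder"
echo '2015-01-12 14:21:44 ERROR nova.compute [-] example failure' \
    > "$LOG_ROOT/nova/nova-compute.log"
echo '2015-01-12 14:21:44 INFO cinder.volume [-] all good' \
    > "$LOG_ROOT/cinder/volume.log"

# Sweep every *.log file for ERROR lines, printing file and line number.
grep -r --include='*.log' -n 'ERROR' "$LOG_ROOT"
```

On a real system (LOG_ROOT=/var/log), the matching file names map directly back to the services in the tables above.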

7.1.2. Configure logging options

Each component maintains its own separate logging configuration in its respective configuration file. For example, in Compute, these options are set in /etc/nova/nova.conf:

Increase the level of informational logging by enabling debugging. This option greatly increases the amount of information captured, so you may want to consider using it only temporarily, or first reviewing your log rotation settings.

debug=True

Enable verbose logging:

verbose=True

Change the log file path:

log_dir=/var/log/nova

Send your logs to a central syslog server:

use_syslog=True
syslog_log_facility=LOG_USER

Note: Options are also available for timestamp configuration and log formatting, among others. Review the component's configuration file for additional logging options.
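These key=value options can also be toggled from a script rather than by hand-editing. The sketch below works on a scratch copy of a minimal, fabricated nova.conf so that nothing on the system is modified; it uses plain sed to stay dependency-free, though ini-aware tools such as crudini may be preferable on real deployments.

```shell
# Work on a scratch copy rather than the live /etc/nova/nova.conf.
CONF=$(mktemp)
printf '[DEFAULT]\ndebug=False\nverbose=False\nlog_dir=/var/log/nova\n' > "$CONF"

# Flip debug on; remember to turn it off again, since debug logging
# greatly increases log volume.
sed -i 's/^debug=.*/debug=True/' "$CONF"
grep '^debug=' "$CONF"
```

After editing the real configuration file, the corresponding service must be restarted for the change to take effect.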

7.2. SUPPORT

If client commands fail or you run into other issues, contact Red Hat Technical Support with a description of what happened, a sosreport, the full console output, and all log files referenced in the console output. For information about the sosreport command (sos package), refer to "What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later".


APPENDIX A. IMAGE CONFIGURATION PARAMETERS

The following keys can be used with the --property option for both the glance image-update and glance image-create commands.

Example A.1.

$ glance image-update IMG-UUID --property architecture=x86_64

Note: Behavior set using image properties overrides behavior set using flavors. For more information, see Section 3.3, "Manage Flavors".

Table A.1. Property keys

architecture (Specific to: All)
    The CPU architecture that must be supported by the hypervisor. For example, x86_64, arm, or ppc64. Run uname -m to get the architecture of a machine. We strongly recommend using the architecture data vocabulary defined by the libosinfo project for this purpose.
    Supported values:
        alpha - DEC 64-bit RISC
        armv7l - ARM Cortex-A7 MPCore
        cris - Ethernet, Token Ring, AXis Code Reduced Instruction Set
        i686 - Intel sixth-generation x86 (P6 microarchitecture)
        ia64 - Itanium
        lm32 - Lattice Micro32
        m68k - Motorola 68000
        microblaze - Xilinx 32-bit FPGA (Big Endian)
        microblazeel - Xilinx 32-bit FPGA (Little Endian)
        mips - MIPS 32-bit RISC (Big Endian)
        mipsel - MIPS 32-bit RISC (Little Endian)
        mips64 - MIPS 64-bit RISC (Big Endian)
        mips64el - MIPS 64-bit RISC (Little Endian)
        openrisc - OpenCores RISC
        parisc - HP Precision Architecture RISC
        parisc64 - HP Precision Architecture 64-bit RISC
        ppc - PowerPC 32-bit
        ppc64 - PowerPC 64-bit
        ppcemb - PowerPC (Embedded 32-bit)
        s390 - IBM Enterprise Systems Architecture/390
        s390x - S/390 64-bit
        sh4 - SuperH SH-4 (Little Endian)
        sh4eb - SuperH SH-4 (Big Endian)
        sparc - Scalable Processor Architecture, 32-bit
        sparc64 - Scalable Processor Architecture, 64-bit
        unicore32 - Microprocessor Research and Development Center RISC Unicore32
        x86_64 - 64-bit extension of IA-32
        xtensa - Tensilica Xtensa configurable microprocessor core
        xtensaeb - Tensilica Xtensa configurable microprocessor core (Big Endian)
    Note: The architecture options fully supported by Red Hat are i686 and x86_64.

hypervisor_type (Specific to: All)
    The hypervisor type.
    Supported values: kvm, vmware

instance_uuid (Specific to: All)
    For snapshot images, this is the UUID of the server used to create this image.
    Supported values: valid server UUID

kernel_id (Specific to: All)
    The ID of an image stored in the Image Service that should be used as the kernel when booting an AMI-style image.
    Supported values: valid image ID

os_distro (Specific to: All)
    The common name of the operating system distribution in lowercase (uses the same data vocabulary as the libosinfo project). Specify only a recognized value for this field. Deprecated values are listed to assist you in searching for the recognized value.
    Supported values:
        arch - Arch Linux. Do not use archlinux or org.archlinux.
        centos - Community Enterprise Operating System. Do not use org.centos or CentOS.
        debian - Debian. Do not use Debian or org.debian.
        fedora - Fedora. Do not use Fedora, org.fedora, or org.fedoraproject.
        freebsd - FreeBSD. Do not use org.freebsd, freeBSD, or FreeBSD.
        gentoo - Gentoo Linux. Do not use Gentoo or org.gentoo.
        mandrake - Mandrakelinux (MandrakeSoft) distribution. Do not use mandrakelinux or MandrakeLinux.
        mandriva - Mandriva Linux. Do not use mandrivalinux.
        mes - Mandriva Enterprise Server. Do not use mandrivaent or mandrivaES.
        msdos - Microsoft Disc Operating System. Do not use ms-dos.
        netbsd - NetBSD. Do not use NetBSD or org.netbsd.
        netware - Novell NetWare. Do not use novell or NetWare.
        openbsd - OpenBSD. Do not use OpenBSD or org.openbsd.
        opensolaris - OpenSolaris. Do not use OpenSolaris or org.opensolaris.
        opensuse - openSUSE. Do not use suse, SuSE, or org.opensuse.
        rhel - Red Hat Enterprise Linux. Do not use redhat, RedHat, or com.redhat.
        sled - SUSE Linux Enterprise Desktop. Do not use com.suse.
        ubuntu - Ubuntu. Do not use Ubuntu, com.ubuntu, org.ubuntu, or canonical.
        windows - Microsoft Windows. Do not use com.microsoft.server.

os_version (Specific to: All)
    The operating system version as specified by the distributor.
    Supported values: version number (for example, "11.10")

ramdisk_id (Specific to: All)
    The ID of an image stored in the Image Service that should be used as the ramdisk when booting an AMI-style image.
    Supported values: valid image ID

vm_mode (Specific to: All)
    The virtual machine mode. This represents the host/guest ABI (application binary interface) used for the virtual machine.
    Supported values: hvm - fully virtualized. This is the mode used by QEMU and KVM.

hw_disk_bus (Specific to: libvirt API driver)
    Specifies the type of disk controller to attach disk devices to.
    Supported values: scsi, virtio, ide, or usb

hw_numa_nodes (Specific to: libvirt API driver)
    Number of NUMA nodes to expose to the instance (does not override flavor definition).
    Supported values: integer. For a detailed example of NUMA-topology definition, refer to the hw:NUMA_def key in Section 3.3.4.2, "Add Metadata".

hw_numa_mempolicy (Specific to: libvirt API driver)
    NUMA memory allocation policy (does not override flavor definition).
    Supported values:
        strict - Mandatory for the instance's RAM allocations to come from the NUMA nodes to which it is bound (default if numa_nodes is specified).
        preferred - The kernel can fall back to using an alternative node. Useful when the hw:numa_nodes parameter is set to '1'.

hw_numa_cpus.0 (Specific to: libvirt API driver)
    Mapping of vCPUs N-M to NUMA node 0 (does not override flavor definition).
    Supported values: comma-separated list of integers

hw_numa_cpus.1 (Specific to: libvirt API driver)
    Mapping of vCPUs N-M to NUMA node 1 (does not override flavor definition).
    Supported values: comma-separated list of integers

hw_numa_mem.0 (Specific to: libvirt API driver)
    Mapping N GB of RAM to NUMA node 0 (does not override flavor definition).
    Supported values: integer

hw_numa_mem.1 (Specific to: libvirt API driver)
    Mapping N GB of RAM to NUMA node 1 (does not override flavor definition).
    Supported values: integer

hw_rng_model (Specific to: libvirt API driver)
    Adds a random-number generator device to the image's instances. The cloud administrator can enable and control device behavior by configuring the instance's flavor. By default:
        The generator device is disabled.
        /dev/random is used as the default entropy source. To specify a physical HW RNG device, use the following option in the nova.conf file:
            rng_dev_path=/dev/hwrng
    Supported values: virtio, or other supported device

hw_scsi_model (Specific to: libvirt API driver)
    Enables the use of VirtIO SCSI (virtio-scsi) to provide block device access for compute instances; by default, instances use VirtIO Block (virtio-blk). VirtIO SCSI is a para-virtualized SCSI controller device that provides improved scalability and performance, and supports advanced SCSI hardware.
    Supported values: virtio-scsi

hw_video_model (Specific to: libvirt API driver)
    The video image driver used.
    Supported values: vga, cirrus, vmvga, xen, or qxl

hw_video_ram (Specific to: libvirt API driver)
    Maximum RAM for the video image. Used only if a hw_video:ram_max_mb value has been set in the flavor's extra_specs and that value is higher than the value set in hw_video_ram.
    Supported values: integer in MB (for example, '64')

hw_watchdog_action (Specific to: libvirt API driver)
    Enables a virtual hardware watchdog device that carries out the specified action if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled.
    Supported values:
        disabled - The device is not attached. Allows the user to disable the watchdog for the image, even if it has been enabled using the image's flavor. The default value for this parameter is disabled.
        reset - Forcefully reset the guest.
        poweroff - Forcefully power off the guest.
        pause - Pause the guest.
        none - Only enable the watchdog; do nothing if the server hangs.

os_command_line (Specific to: libvirt API driver)
    The kernel command line to be used by the libvirt driver, instead of the default. For Linux Containers (LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami).

hw_vif_model (Specific to: libvirt API driver and VMware API driver)
    Specifies the model of virtual network interface device to use.
    Supported values: the valid options depend on the configured hypervisor.
        KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, and virtio.
        VMware: e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet.
        Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139.

vmware_adaptertype (Specific to: VMware API driver)
    The virtual SCSI or IDE controller used by the hypervisor.
    Supported values: lsiLogic, busLogic, or ide

vmware_ostype (Specific to: VMware API driver)
    A VMware GuestID which describes the operating system installed in the image. This value is passed to the hypervisor when creating a virtual machine. If not specified, the key defaults to otherGuest.
    Supported values: see thinkvirt.com

vmware_image_version (Specific to: VMware API driver)
    Currently unused.
    Supported values: 1

auto_disk_config (Specific to: XenAPI driver)
    If true, the root partition on the disk is automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format.
    Supported values: true | false

os_type (Specific to: XenAPI driver)
    The operating system installed on the image. The XenAPI driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters.
    Supported values: linux or windows


APPENDIX B. REVISION HISTORY Revision 6 .0 .4 - 2

Mon Aug 0 3 2 0 1 5

Deept i Navale

B Z#12 2 0 19 9 - Rem o ved th e i n sta l l ' g i t' p a cka g e fr o m th e RHEL 7 i m a g e cr ea te p r o ced u r e.

Revision 6 .0 .4 - 1

Wed Jul 1 5 2 0 1 5

Deept i Navale

B Z#12 2 0 19 9 - Ad d ed th e m i ssi n g i m a g es r eq u i r ed fo r th e i m a g e-cr ea te p r o ced u r es.

Revision 6 .0 .4 - 0

T ue Jul 1 4 2 0 1 5

Deept i Navale

B Z#12 2 0 19 9 - In cl u d ed p r o ced u r e fo r cr ea ti n g a RHEL 6 i m a g e a l o n g wi th so m e u p d a tes to th e RHEL 7 p r o ced u r e.

Revision 6 .0 .3- 0

Fri Jun 1 9 2 0 1 5

Don Domingo

B Z#10 8 2 8 5 7 - Ad d ed d escr i p ti o n o f h o w th e B l o ck S to r a g e sch ed u l er a l l o ca tes vo l u m es b y d efa u l t i n a m u l ti -b a ck en d en vi r o n m en t ( wi th o u t vo l u m e typ es o r co n fi g u r ed fi l ter s) .

Revision 6 .0 .2 - 4

T ue Jun 0 1 2 0 1 5

Don Domingo

B Z#12 2 7 12 3 - Ad d ed i n str u cti o n s o n h o w to d el ete sn a p sh o ts, a l o n g wi th cr o ss-r efer en ces to r el a ted Cep h i n str u cti o n s to p r o tect/u n p r o tect a sn a p sh o t i n th e b a ck en d .

Revision 6 .0 .2 - 3

Fri May 1 5 2 0 1 5

Summer Long

B Z#12 0 6 3 9 5 - Ad d ed secti o n 3 .1.5 .2 . Di r ectl y Co n n ect to th e VNC Co n so l e. B Z#119 4 113 - Cl a r i fi ed su p p o r ted vo l u m e en cr yp ti o n setti n g s. B Z#118 2 4 0 6 - Ad d ed secti o n 3 .1.5 .3 . Di r ectl y Co n n ect to a S er i a l Co n so l e.

Revision 6 .0 .2 - 2

T hu Apr 2 3 2 0 1 5

Summer Long

BZ#1182817 - Updated scheduling filters, flavor metadata, and image metadata for scheduling instances using NUMA topology definitions.

Revision 6.0.2-1    Thu Apr 16 2015    Summer Long

BZ#1190560 - Added Actions table to 3.1.2. Update an Instance.

Revision 6.0.2-0    Thu Apr 9 2015    Don Domingo

BZ#1194116 - Clarified that readers need to consult driver documentation for valid Extra Specs key/value pairs. Also added links to sample procedures where volume types and extra specs are used.

Revision 6.0.1-3    Tue Apr 7 2015    Summer Long

BZ#1209330 - disk_allocation_ratio and AggregateDiskFilter descriptions updated, plus minor edits for clarification.

Revision 6.0.1-2    Thu Mar 19 2015    Martin Lopes

BZ#1163726 - Added note describing Floating IP allocation behavior when using multiple projects (mlopes).

Revision 6.0.1-1    Tue Mar 17 2015    Summer Long

BZ#1147794 - Updated SSH Tunneling section with explicit copying instructions (slong).
BZ#1194539 - Added information on sample templates (ddomingo).
BZ#1193749 - Updated Image and Storage chapter introduction (dnavale).

Revision 6.0.1-0    Thu Mar 5 2015    Summer Long

Finalized for maintenance release 6.0.1.
BZ#1191794 - Structural edits for entire guide.

Revision 6.0.0-6    Wed Feb 18 2015    Don Domingo

BZ#1194112 - Added mini-section on selecting a back end.
BZ#1041696 - Added "Configure How Volumes are Allocated to Multiple Back Ends".
BZ#1190661 - Added "Upload a Volume to the Image Service".

Revision 6.0.0-5    Thu Feb 12 2015    Summer Long

BZ#1191776 - Removed bad table titles in Volume section.

Revision 6.0.0-4    Thu Feb 5 2015    Summer Long

Release for Red Hat Enterprise Linux OpenStack Platform 6.0.
