The setup presumes that you have 6 or more bare metal servers already set up with network connectivity on at least one network interface for all servers via a TOR switch or other network implementation.
The physical TOR switches are not automatically configured from the OPNFV reference platform. All the networks involved in the OPNFV infrastructure, as well as the provider networks and the private tenant VLANs, need to be manually configured.
The Jump Host can be installed using the bootable ISO or by using the opnfv-apex*.rpm RPMs and their dependencies. The Jump Host should then be configured with an IP gateway on its admin or public interface and configured with a working DNS server. The Jump Host should also have routable access to the lights out network for the overcloud nodes.
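For example, a quick way to confirm routable access to the lights out network is to ping one node's IPMI address from the Jump Host (the address below is only a placeholder for one of your own BMC addresses):

ping -c 3 192.0.2.10    # replace with the IPMI/BMC address of an overcloud node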
opnfv-deploy is then executed in order to deploy the undercloud VM and to provision the overcloud nodes. opnfv-deploy uses three configuration files in order to know how to install and provision the OPNFV target system. The information gathered under section Execution Requirements (Bare Metal Only) is put into the YAML file /etc/opnfv-apex/inventory.yaml. Deployment options are put into the YAML file /etc/opnfv-apex/deploy_settings.yaml. Alternatively, there are pre-baked deploy_settings files available in /etc/opnfv-apex/. These files are named with the naming convention os-sdn_controller-enabled_feature-[no]ha.yaml and can be used in place of the /etc/opnfv-apex/deploy_settings.yaml file if one suits your deployment needs. Networking definitions gathered under section Network Requirements are put into the YAML file /etc/opnfv-apex/network_settings.yaml. opnfv-deploy will boot the undercloud VM and load the target deployment configuration into the provisioning toolchain. This information includes MAC addresses, IPMI credentials, the networking environment and the OPNFV deployment options.
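Once the opnfv-apex RPMs are installed and the files below have been created or customized as described later in this section, you can confirm all three are in place before deploying, for example:

ls -l /etc/opnfv-apex/inventory.yaml \
      /etc/opnfv-apex/deploy_settings.yaml \
      /etc/opnfv-apex/network_settings.yaml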
Once configuration is loaded and the undercloud is configured it will then reboot the overcloud nodes via IPMI. The nodes should already be set to PXE boot first off the admin interface. The nodes will first PXE off of the undercloud PXE server and go through a discovery/introspection process.
Introspection boots off of custom introspection PXE images. These images are designed to look at the properties of the hardware that is being booted and report the properties of it back to the undercloud node.
After introspection the undercloud will execute a Heat stack deployment to continue node provisioning and configuration. The nodes will reboot and PXE from the undercloud PXE server again to provision each node using Glance disk images provided by the undercloud. These disk images include all the necessary packages and configuration for an OPNFV deployment to execute. Once the disk images have been written to the nodes' disks, the nodes will boot locally and execute cloud-init, which performs the final node configuration. At this point in the deployment the Heat stack completes, and Mistral takes over the configuration of the nodes. Mistral handles calling Ansible, which will connect to each node and begin configuration. This configuration includes launching the desired OPNFV services as containers and generating their configuration files. The configuration is largely completed by executing a puppet apply on each container to generate the config files, which are then stored on the overcloud host and mounted into the service container at runtime.
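As a rough post-deployment check of this last stage, you can log into an overcloud node and list the running service containers; this assumes the node's container runtime is Docker and that you have login access (for example via the --debug root login described below):

sudo docker ps    # lists the OPNFV/OpenStack service containers and their status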
This section goes step-by-step on how to correctly install and provision the OPNFV target system to bare metal nodes.
If your Jump Host already has CentOS 7 installed, execute the following commands to install virtualization support and enable libvirt on boot:

sudo yum -y groupinstall "Virtualization Host"
chkconfig libvirtd on && reboot

If you use the CentOS 7 DVD, proceed to step 1b once the CentOS 7 installation with "Virtualization Host" support is completed. Alternatively, the Jump Host can be installed with OPNFV CentOS 7 from the bootable ISO. The ISO comes prepared to be written directly to a USB drive with dd as such:
dd if=opnfv-apex.iso of=/dev/sdX bs=4M
Replace /dev/sdX with the device assigned to your USB drive, then select the USB device as the boot media on your Jump Host.
2a. Install these repos:
sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-queens/rdo-release-queens-1.noarch.rpm
sudo yum install epel-release
sudo curl -o /etc/yum.repos.d/opnfv-apex.repo http://artifacts.opnfv.org/apex/gambia/opnfv-apex.repo
The RDO Project release repository is needed to install OpenVSwitch, which is a dependency of opnfv-apex. If you do not have external connectivity to use this repository, you need to download the OpenVSwitch RPM from the RDO Project repositories and install it together with the opnfv-apex RPM. The opnfv-apex repo hosts all of the Apex dependencies, which will be installed automatically when installing the RPMs; with the ISO they come pre-installed.
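If you have to work without external connectivity, a minimal sketch of that manual path might look like the following; the file names are placeholders for whatever RPM versions you actually download and copy onto the Jump Host:

# Download the OpenVSwitch RPM from the RDO repositories and the opnfv-apex
# RPMs on a connected machine, copy them over, then install them together:
sudo yum -y install ./openvswitch-*.rpm ./opnfv-apex-*.rpm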
2b. Download the Apex RPMs from the OPNFV downloads page, under TripleO RPMs: https://www.opnfv.org/software/downloads. The dependent RPMs will be automatically installed from the opnfv-apex repo added in the previous step.
The following RPMs are available for installation:
** These RPMs are not yet distributed by CentOS or EPEL, so Apex builds and distributes them itself. Once they are carried in an upstream channel, Apex will no longer carry them and they will not need special handling for installation. You do not need to install them explicitly; they are installed automatically when installing python34-opnfv-apex, provided opnfv-apex.repo has been downloaded to /etc/yum.repos.d/ as described above.
Install the required RPM (replace <rpm> with the actual downloaded artifact, e.g. python34-opnfv-apex):

yum -y install <rpm>
Configure /etc/resolv.conf to point to a DNS server (8.8.8.8 is provided by Google).

IPMI configuration information gathered in section Execution Requirements (Bare Metal Only) needs to be added to the inventory.yaml file.
Copy /usr/share/doc/opnfv/inventory.yaml.example as your inventory file template to /etc/opnfv-apex/inventory.yaml.
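For example, using the paths given above:

sudo cp /usr/share/doc/opnfv/inventory.yaml.example /etc/opnfv-apex/inventory.yaml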
The nodes dictionary contains a definition block for each baremetal host that will be deployed. 0 or more compute nodes and 1 or 3 controller nodes are required (the example file contains blocks for each of these already). It is optional at this point to add more compute nodes into the node list. By specifying 0 compute nodes in the inventory file, the deployment will automatically deploy "all-in-one" nodes, which means the compute will run alongside the controller in a single overcloud node. Specifying 3 control nodes will result in a highly-available service model.
Edit the following values for each node:
mac_address: MAC of the interface that will PXE boot from the undercloud
ipmi_ip: IPMI IP address
ipmi_user: IPMI username
ipmi_password: IPMI password
pm_type: Power Management driver to use for the node
cpus: (Introspected*) CPU cores available
memory: (Introspected*) Memory available in MiB
disk: (Introspected*) Disk space available in GB
disk_device: (Opt***) Root disk device to use for installation
arch: (Introspected*) System architecture
capabilities: (Opt**) Node's role in deployment

* Introspection looks up the overcloud node's resources and overrides these values. You can leave the default values and Apex will get the correct values when it runs introspection on the nodes.
** If a capabilities profile is not specified, then Apex will select the nodes' roles in the OPNFV cluster in a non-deterministic fashion.
*** disk_device declares which hard disk to use as the root device for installation. The format is a comma delimited list of devices, such as “sda,sdb,sdc”. The disk chosen will be the first device in the list which is found by introspection to exist on the system. Currently, only a single definition is allowed for all nodes. Therefore if multiple disk_device definitions occur within the inventory, only the last definition on a node will be used for all nodes.
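To make these fields concrete, a single node block in inventory.yaml might look like the following sketch; every value shown is a placeholder, and the introspected fields can simply be left at the example file's defaults as noted above:

nodes:
  node1:
    mac_address: "00:25:b5:00:00:01"   # interface that PXE boots from the undercloud
    ipmi_ip: 192.168.10.101
    ipmi_user: admin
    ipmi_password: changeme
    pm_type: ipmi                      # power management driver for this hardware
    cpus: 4                            # introspected; default is fine
    memory: 16384                      # introspected; MiB
    disk: 100                          # introspected; GB
    arch: x86_64                       # introspected
    capabilities: "profile:control"    # optional; pins this node's role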
Edit the 2 settings files in /etc/opnfv-apex/. These files have comments to help you customize them.
Alternatively, there are pre-built deploy_settings files available in /etc/opnfv-apex/. These files are named with the naming convention os-sdn_controller-enabled_feature-[no]ha.yaml and can be used in place of the /etc/opnfv-apex/deploy_settings.yaml file if one suits your deployment needs. If a pre-built deploy_settings file is chosen, there is no need to customize /etc/opnfv-apex/deploy_settings.yaml; the pre-built file is simply passed to the deploy command in its place.
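For example, the pre-built files that follow this naming convention can be listed with:

ls /etc/opnfv-apex/os-*.yaml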
Running opnfv-deploy
You are now ready to deploy OPNFV using Apex!
opnfv-deploy will use the inventory and settings files to deploy OPNFV. Follow the steps below to execute:
sudo opnfv-deploy -n network_settings.yaml -i inventory.yaml -d deploy_settings.yaml
If you need more information about the options that can be passed to opnfv-deploy, use opnfv-deploy --help. The -n network_settings.yaml option allows you to customize your networking topology. Note that it can also be useful to run the command with the --debug argument, which will enable a root login on the overcloud nodes with password 'opnfvapex'. It is also useful in some cases to run the deploy command under nohup, for example:

nohup <deploy command> &

This will allow the deployment to continue even if SSH access to the Jump Host is lost during deployment.
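Combining this with the deploy command above, a detached run might look like the following; nohup writes output to nohup.out, which you can follow with tail:

nohup sudo opnfv-deploy -n network_settings.yaml -i inventory.yaml -d deploy_settings.yaml &
tail -f nohup.out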