Jetpack: A User-Friendly Tool for OpenStack Deployment

Asma Suhani S H
3 min read · Oct 22, 2020

Hi,

So the first thought you may have is: is Jetpack yet another tool for OpenStack baremetal deployment?

Yes, but it simplifies the deployment so that the user does not have to worry about the hardware or the templates.

In this blog, we will look into the structure of Jetpack.

What is Jetpack?

  • Jetpack deploys OpenStack on baremetal in the easiest way possible using infrared on Red Hat labs.
  • There is an option to add support for external labs which we will discuss later in the blog.
  • Uses 100% Ansible
  • You can find the source code on GitHub: https://github.com/redhat-performance/jetpack.git

Why do we need Jetpack?

  • Deploying OpenStack is not straightforward; preparing the nic configs requires good knowledge of the hardware and network setup
  • It can be overwhelming for new users
  • Commands must be run manually for each and every step, waiting for the previous command to complete successfully

How do we do it?

  • Users can pass extra heat templates for customizable deploys
  • Fault tolerance
  • Handles boot order and Foreman reprovisioning using ipmitool, badfish, and hammer CLI
  • Handles both RHEL 7/8 undercloud and python2/3
  • Supports OVS and OVN backends
  • Works across OSP 10 to OSP 16.1

Requirements:

  • Ansible >= 2.8
  • Python 3.6+
  • hammer host
  • Passwordless sudo for the user running the playbook on the Ansible control node

Structure

main.yml triggers a set of playbooks sequentially, depending on the customization in group_vars/all.yml.
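As an illustration, the knobs referenced throughout this post live in group_vars/all.yml. The fragment below is a sketch assembled from the variables described later in this blog; treat it as illustrative and check the repository for the authoritative list of variables and their defaults:

```yaml
# group_vars/all.yml -- illustrative fragment, not a complete file
osp_release: 16.1          # OSP version to deploy (OSP 10 through 16.1 supported)
scale_compute_vms: false   # hybrid deployment: baremetal controllers, VM computes
virtual_uc: false          # deploy the undercloud itself as a VM
composable_roles: false    # enable when instackenv.json mixes machine types
shift_stack: false         # deploy OCP on top of OSP after the overcloud is up
```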

bootstrap.yaml:

  • Installs the required packages
  • Downloads instackenv.json
  • Sets up the infrared environment

scale_compute_vms.yml:

To enable a hybrid deployment, where the undercloud and controllers are baremetal and the computes are VMs spawned on baremetal, enable scale_compute_vms in group_vars/all.yml.

  • This playbook allocates the first few nodes in instackenv.json as controllers, based on the controller_count requested by the user; the remaining nodes are used for spawning the VMs that serve as computes
  • Infrared supports only CentOS 7 for the hypervisor, so CentOS 7 is installed on the remaining nodes
  • Installs the packages required for spawning VMs on these nodes
  • The first hypervisor is chosen to host the vbmc ports for all the overcloud VMs, so it must be able to SSH to the other hypervisors
  • It uses the 4_nets_multi_hypervisor.yml network topology, which is customized for Scale Lab deployment. With this file, the network interfaces of the created VMs can interact with the physical interfaces on the controllers and the undercloud. This customization is key to this deployment.
  • Spawns VMs on the hypervisor and updates the inventory
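For instance, a hybrid deployment with three baremetal controllers might be requested with something like the following (the variable names come from the description above; the values are hypothetical):

```yaml
# group_vars/all.yml (fragment) -- hypothetical values
scale_compute_vms: true   # computes become VMs on the remaining baremetal nodes
controller_count: 3       # first 3 nodes in instackenv.json become controllers
```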

setup_undercloud.yml

  • Prepares the undercloud for installation
  • Installs the OS if needed, depending on the OSP release (uses badfish and Foreman)

virtual_undercloud.yml

  • Deploys a virtual undercloud while the overcloud remains baremetal
  • Set virtual_uc: true in group_vars/all.yml
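The setting above amounts to a one-line change (fragment shown for clarity; virtual_uc is named in the text):

```yaml
# group_vars/all.yml (fragment)
virtual_uc: true   # undercloud runs as a VM; the overcloud stays baremetal
```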

add_undercloud_to_inventory.yml

  • Adds the undercloud to the infrared inventory

prepare_nic_configs.yml

  • Prepares nic-configs for the overcloud deployment automatically, based on a homogeneous set of machine types

composable_prepare_nic_configs.yml

composable_roles should be enabled in group_vars/all.yml if instackenv.json contains non-homogeneous machine types.

  • Prepares nic-configs for non-uniform machine types automatically; the second node in instackenv.json is selected as the controller machine type and the remaining nodes are used as computes
  • For deploying OSP with composable roles, the undercloud and controller machine types should be the same
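Enabling this path is again a single variable (fragment; composable_roles is the flag named above):

```yaml
# group_vars/all.yml (fragment)
composable_roles: true   # instackenv.json mixes machine types: the second node's
                         # type drives the controller nic-configs, the rest are computes
```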

undercloud.yml

  • Gets the local_interface to be used in undercloud.conf
  • Triggers the infrared tripleo-undercloud command to install the undercloud

introspect.yml

  • Introspects the nodes for the overcloud
  • Deletes the nodes that fail introspection

tag.yml

  • Tags the nodes with appropriate flavors

external.yml

  • Creates a VLANed interface on the undercloud that is able to reach overcloud external endpoints

overcloud.yml

  • The network backend can be set to geneve or vxlan, depending on the ML2 backend and OSP release
  • Extra heat templates can be specified via extra_templates to customize the overcloud deployment
  • Deploys the overcloud
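A sketch of the overcloud-related settings. extra_templates is named in the text; the network_backend key and the template path are assumptions for illustration, so verify the exact keys against the repository:

```yaml
# group_vars/all.yml (fragment) -- network_backend key name and template path
# are hypothetical
network_backend: geneve   # or vxlan, depending on ML2 backend and OSP release
extra_templates:
  - /home/stack/templates/custom-env.yaml   # hypothetical extra heat template
```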

post.yml

ocp_on_osp.yml

  • Set shift_stack: true in group_vars/all.yml
  • Deploys OCP on OSP using Installer-Provisioned Infrastructure (IPI)
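As with the other features, this is driven by one flag (fragment; shift_stack is named above):

```yaml
# group_vars/all.yml (fragment)
shift_stack: true   # after the overcloud deploy, install OCP on OSP via IPI
```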

browbeat.yml

  • Sets up Browbeat on the undercloud

cleanup.yml

  • Cleans up the directories created on the Ansible jump host

Support for New Lab

  • Jetpack can be used to deploy OSP on labs other than the Red Hat labs.
  • We need to set the interfaces depending on the OS version in group_vars/all.yml. For example:

        interfaces:
          rhel8_interfaces: [eno1]
          rhel7_interfaces: [em1]

For an external lab, one needs to ensure that the correct OS version is installed, depending on the osp_release.

In the next blog, we will discuss in detail the features that Jetpack supports.


Asma Suhani S H

Associate Software Engineer at Red Hat, Openstack Performance and Scale Team