OCP4-IPI-libvirt

Revision as of 11:36, 19 January 2024

Introduction

What I Assume

  • You are familiar and comfortable with the libvirt CLI and XML.
  • You are familiar and comfortable with the qemu-img tool.
  • You understand the different types of network interfaces on Linux and the different kinds of libvirt networks.
  • You know how OpenShift installation works and what the difference between IPI and UPI is.
  • You know about the OpenShift Machine API and its various underlying mechanisms.

Outcomes

The installation described here produces a fully managed IPI cluster running OpenShift Container Platform v4.14, initially with three master and two worker nodes.

At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.

OpenShift Container Platform IPI Installation Using Libvirt

Prerequisites

Hardware requirements for the cluster:

  • 136 GiB RAM (32 GiB per control plane node, 20 GiB per compute node), with a maximum overcommit ratio of 1.5 (make sure enough swap is available)
  • 52 vCPUs (12 per control plane node, 8 per compute node), with a maximum overcommit ratio of 1.3 (a higher ratio may work, but it will slow down the installation horribly and may ultimately make it fail)
  • one physical network interface that will be used for the public bridged network
  • a physical or virtual network interface that will be used for the provisioning network bridge (a sketch of both networks follows this list)
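
For illustration, here is a minimal sketch of what those two networks could look like on a single libvirt host. All names below (br0, ocp4-public, ocp4-provisioning) and file names are placeholders of my choosing, not values this guide depends on; the real definitions are covered in #Host Configuration below.

 <!-- ocp4-public.xml: attach guests to an existing host bridge (assumed to be br0) -->
 <network>
   <name>ocp4-public</name>
   <forward mode='bridge'/>
   <bridge name='br0'/>
 </network>

 <!-- ocp4-provisioning.xml: isolated network with no <ip> element, so libvirt
      starts no dnsmasq on it; during an IPI install the provisioning services
      provide DHCP/PXE on this network themselves -->
 <network>
   <name>ocp4-provisioning</name>
   <bridge name='provisioning' stp='off'/>
 </network>

Each definition can then be loaded with virsh net-define, started with virsh net-start, and set to start automatically with virsh net-autostart.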

Hardware requirements for the installation client (provisioner) machine:

  • a minimum of 8 GiB RAM and 4 CPUs
  • a network connection to both the public bridged network and the provisioning network

Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is probably best to define the provisioner as a VM with the same network interface settings as the control and compute nodes.
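
A minimal virt-install sketch for such a provisioner VM, reusing the placeholder network names from the sketch above; the disk size, ISO path and os-variant are likewise assumptions to adapt to your environment:

 # Sketch only: adjust names, sizes, ISO path and os-variant as needed.
 virt-install \
   --name ocp4-provisioner \
   --memory 8192 \
   --vcpus 4 \
   --disk size=120 \
   --network network=ocp4-public,model=virtio \
   --network network=ocp4-provisioning,model=virtio \
   --os-variant rhel9.2 \
   --cdrom /var/lib/libvirt/images/rhel9.iso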

If you want to spread the workloads across several hypervisor hosts, a few extra steps are needed, but nothing big. More on that in #Host Configuration below.

Software artifacts needed on the provisioner host (a short sketch of obtaining them follows the list):

  • oc, the command-line client, of the matching version; download it from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/
  • the libvirt-client package, required by openshift-baremetal-install to communicate with the hypervisors
  • ipmitool or some other IPMI client
  • a pull-secret file containing authentication credentials for OpenShift Container Platform registries; download it from https://console.redhat.com/openshift/
  • an SSH keypair that can be used for accessing OpenShift nodes
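
A rough sketch of pulling these pieces together on the provisioner. The client version, paths, key name and release image tag are assumptions, and the pull secret itself still has to be downloaded manually from console.redhat.com:

 # Assumed client version; use one matching the target release.
 OCP_VERSION=4.14.10
 curl -LO "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-client-linux.tar.gz"
 sudo tar -C /usr/local/bin -xzf openshift-client-linux.tar.gz oc kubectl

 # IPMI client and the libvirt client tools the installer relies on.
 sudo dnf -y install libvirt-client ipmitool

 # SSH keypair that will be injected into the cluster nodes.
 ssh-keygen -t ed25519 -f ~/.ssh/ocp4 -N ''

 # Extract the baremetal IPI installer binary matching the release
 # (pull-secret.json is the file downloaded from console.redhat.com).
 oc adm release extract \
   --registry-config ~/pull-secret.json \
   --command=openshift-baremetal-install \
   --to . \
   quay.io/openshift-release-dev/ocp-release:${OCP_VERSION}-x86_64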

Host Configuration

Virtual Machine Configuration

Installer Configuration

Installation

Post-Install Smoke Tests

Conclusion