<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://p0f.net/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gregab</id>
		<title>p0f - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://p0f.net/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gregab"/>
		<link rel="alternate" type="text/html" href="https://p0f.net/Special:Contributions/Gregab"/>
		<updated>2026-04-07T02:18:08Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=64</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=64"/>
				<updated>2024-02-05T10:31:16Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Creating Configuration Manually */ remove moaning about network interface matching, TBFO&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* Firewall restrictions are not covered here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
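&lt;br /&gt;
For the curious, the per-node figures add up to the cluster totals like this (a throwaway shell check, not part of the installation):&lt;br /&gt;
&lt;br /&gt;
```shell
# 3 control plane nodes + 2 compute nodes, figures as per the list above
ram=$(( 3 * 32 + 2 * 20 ))    # GiB
vcpu=$(( 3 * 12 + 2 * 8 ))
echo "RAM: ${ram} GiB, vCPUs: ${vcpu}"
```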
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl -n net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
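&lt;br /&gt;
If the above shows &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;, one way to enable forwarding persistently is the following (a sketch; the drop-in file name is arbitrary):&lt;br /&gt;
&lt;br /&gt;
```shell
# Enable immediately, then persist across reboots via a sysctl drop-in
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/90-ip-forward.conf
sudo sysctl --system
```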
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
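&lt;br /&gt;
As a sketch, both bridges could be created with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; along these lines (interface names and the provisioning address are taken from the example configuration shown in this section; adjust to your environment):&lt;br /&gt;
&lt;br /&gt;
```shell
# Public bridge with the physical NIC enslaved to it
sudo nmcli con add type bridge ifname bridge0 con-name bridge0
sudo nmcli con add type bridge-slave ifname enp86s0 master bridge0
# Private provisioning bridge - purely virtual, no slaves required
sudo nmcli con add type bridge ifname provbr0 con-name provbr0 \
    ipv4.method manual ipv4.addresses 10.1.1.2/24
sudo nmcli con up bridge0
sudo nmcli con up provbr0
```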
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type connection that is a slave of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
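&lt;br /&gt;
With NetworkManager, such a connection can be created with a single &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; command along these lines (a sketch using the addresses from this guide; swap &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;remote&amp;lt;/code&amp;gt; on the peer host):&lt;br /&gt;
&lt;br /&gt;
```shell
# VXLAN tunnel enslaved to the provisioning bridge, hypervisor A towards B
sudo nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 \
    id 10 local 172.25.35.2 remote 172.25.35.3 destination-port 4790 \
    master provbr0 slave-type bridge
```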
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of the empty output shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Mark them as autostarted for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
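&lt;br /&gt;
Assuming you saved the two XML documents as &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (the file names are arbitrary), defining, starting, and autostarting them could look like this:&lt;br /&gt;
&lt;br /&gt;
```shell
# Make the bridge-backed networks known to libvirt
sudo virsh net-define external.xml
sudo virsh net-define provisioning.xml
# Start them now and on every libvirtd start
sudo virsh net-start external
sudo virsh net-start provisioning
sudo virsh net-autostart external
sudo virsh net-autostart provisioning
```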
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt; which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64 GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
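&lt;br /&gt;
The backing disk images referenced in the domain XML are not created automatically; a &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; invocation matching the example definition could be:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sparse qcow2 image at the minimum required size for the example node
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G
```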
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, defaulting to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (defaults to all addresses, use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
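&lt;br /&gt;
The same interface can also drive power state and boot device, which is essentially what the installer does during deployment. For example:&lt;br /&gt;
&lt;br /&gt;
```shell
# Power the VM on, request PXE on next boot, then power it back off
ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis power on
ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis bootdev pxe
ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis power off
```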
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct; it can be routed. However, if you configured the provisioning network as an isolated virtual bridge, as in our example, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
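&lt;br /&gt;
In BIND zone-file terms, the two records might look like the following sketch (the VIP addresses are placeholders picked from the external network used in this guide):&lt;br /&gt;
&lt;br /&gt;
```
; API server VIP and wildcard ingress VIP for mycluster.example.com
api.mycluster.example.com.      IN A    172.25.35.10
*.apps.mycluster.example.com.   IN A    172.25.35.11
```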
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc ./openshift-client-linux.tar.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
$ rm -f ./openshift-baremetal-install&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively, using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of either type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see if &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections which are useful in special cases such as disconnected installation or special install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - cidr: 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetwork:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if the provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
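&lt;br /&gt;
If you want to sanity-check the file before the real run, one option is to generate manifests from a scratch copy. Note that this step consumes &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt;, so use a throwaway directory (the path below is arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir /tmp/configcheck&lt;br /&gt;
$ cp install-config.yaml /tmp/configcheck/&lt;br /&gt;
$ openshift-baremetal-install --dir=/tmp/configcheck create manifests&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;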
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
With the above installation configuration file created, place a copy of it in a subdirectory, such as &amp;lt;code&amp;gt;./mycluster/&amp;lt;/code&amp;gt;, and run the installer. Keep the original, because the installer consumes the copy during the run.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir mycluster&lt;br /&gt;
$ cp install-config.yaml ./mycluster/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ --log-level=debug create cluster&lt;br /&gt;
DEBUG OpenShift Installer 4.14.9&lt;br /&gt;
DEBUG Built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
DEBUG Fetching Metadata...&lt;br /&gt;
DEBUG Loading Metadata...&lt;br /&gt;
...&lt;br /&gt;
DEBUG   Loading Install Config...&lt;br /&gt;
DEBUG   Loading Bootstrap Ignition Config...&lt;br /&gt;
...&lt;br /&gt;
INFO Consuming Install Config from target directory&lt;br /&gt;
...&lt;br /&gt;
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.14-9.2/builds/414.92.202310210434-0/x86_64/rhcos-414.92.202310210434-0-qemu.x86_64.qcow2.gz?sha256=aab55f3ee088b88562f8fdcde5be78ace023e06fa01263e7cb9de2edc7131d6f'&lt;br /&gt;
...&lt;br /&gt;
INFO Creating infrastructure resources...&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you see the above message, check the hypervisor for the presence of the temporary bootstrap VM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ virsh list&lt;br /&gt;
 Id   Name                      State&lt;br /&gt;
---------------------------------------&lt;br /&gt;
 4    provisioner               running&lt;br /&gt;
 5    mycluster-tmkmv-bootstrap running&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the VM is running, you can log into it with the SSH key configured in the install config and have a look around at the containers running on it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i ~/.ssh/id_rsa core@bootstrap.mycluster.example.com&lt;br /&gt;
The authenticity of host 'bootstrap.mycluster.example.com (172.25.3.9)' can't be established.&lt;br /&gt;
...&lt;br /&gt;
Red Hat Enterprise Linux CoreOS 414.92.202310210434-0&lt;br /&gt;
  Part of OpenShift 4.14, RHCOS is a Kubernetes native operating system&lt;br /&gt;
  managed by the Machine Config Operator (`clusteroperator/machine-config`).&lt;br /&gt;
&lt;br /&gt;
WARNING: Direct SSH access to machines is not recommended; instead,&lt;br /&gt;
make configuration changes via `machineconfig` objects:&lt;br /&gt;
...&lt;br /&gt;
[core@localhost ~]$ sudo -i&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active crio&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active bootkube&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# podman ps&lt;br /&gt;
CONTAINER ID  IMAGE                                                      COMMAND          CREATED             STATUS             PORTS       NAMES&lt;br /&gt;
da3b3e74fc7f  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:...  /bin/rundnsmasq  About a minute ago  Up About a minute              dnsmasq&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl pods&lt;br /&gt;
POD ID              CREATED             STATE               NAME                                          NAMESPACE           ATTEMPT             RUNTIME&lt;br /&gt;
af9df933d3b91       49 seconds ago      Ready               etcd-bootstrap-member-localhost.localdomain   openshift-etcd      0                   (default)&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl ps&lt;br /&gt;
CONTAINER           IMAGE                                                     CREATED        STATE    NAME      ATTEMPT   POD ID         POD&lt;br /&gt;
9ad74f6433a33       753b0a16ba606f4c579690ed0035..                            6 seconds ago  Running  etcd      0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
5c9371b25e646       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:... 6 seconds ago  Running  etcdctl   0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once pods in namespaces such as &amp;lt;code&amp;gt;kube-system&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;openshift-kube-apiserver&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;openshift-cluster-version&amp;lt;/code&amp;gt; show up in a ready state, you can leave the shell and use the generated &amp;lt;code&amp;gt;kubeconfig&amp;lt;/code&amp;gt; file to track the progress of the installation from the provisioner machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export KUBECONFIG=$(pwd)/mycluster/auth/kubeconfig&lt;br /&gt;
&lt;br /&gt;
$ oc get clusterversion&lt;br /&gt;
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS&lt;br /&gt;
version             False       True          15m     Unable to apply 4.14.9: an unknown error has occurred: MultipleErrors&lt;br /&gt;
&lt;br /&gt;
$ oc get clusteroperators   # &amp;quot;oc get co&amp;quot; for short&lt;br /&gt;
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE&lt;br /&gt;
authentication&lt;br /&gt;
baremetal&lt;br /&gt;
cloud-controller-manager&lt;br /&gt;
cloud-credential                                     True        False         False      15m&lt;br /&gt;
cluster-autoscaler&lt;br /&gt;
config-operator&lt;br /&gt;
console&lt;br /&gt;
control-plane-machine-set&lt;br /&gt;
csi-snapshot-controller&lt;br /&gt;
dns&lt;br /&gt;
etcd&lt;br /&gt;
image-registry&lt;br /&gt;
ingress&lt;br /&gt;
insights&lt;br /&gt;
kube-apiserver&lt;br /&gt;
kube-controller-manager&lt;br /&gt;
kube-scheduler&lt;br /&gt;
kube-storage-version-migrator&lt;br /&gt;
machine-api&lt;br /&gt;
machine-approver&lt;br /&gt;
machine-config&lt;br /&gt;
marketplace&lt;br /&gt;
monitoring&lt;br /&gt;
network&lt;br /&gt;
node-tuning&lt;br /&gt;
openshift-apiserver&lt;br /&gt;
openshift-controller-manager&lt;br /&gt;
openshift-samples&lt;br /&gt;
operator-lifecycle-manager&lt;br /&gt;
operator-lifecycle-manager-catalog&lt;br /&gt;
operator-lifecycle-manager-packageserver&lt;br /&gt;
service-ca&lt;br /&gt;
storage&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': It is normal for cluster version and various cluster operators to report transient error states as the progress of one impacts the progress of others. Eventually all these errors should go away.&lt;br /&gt;
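&lt;br /&gt;
Instead of polling manually, you can also have the installer itself wait and report when the installation is finished, using the same asset directory as before. On success, it prints access details for the new cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ wait-for install-complete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;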
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=63</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=63"/>
				<updated>2024-02-05T06:01:32Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Creating Configuration Manually */ refine netdev name complaint&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are a few extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
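&lt;br /&gt;
As a sketch: if that DHCP server happens to be dnsmasq, static leases for the nodes can be pinned to their external MAC addresses like this (the MAC and IP values here are made up for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# /etc/dnsmasq.d/mycluster.conf&lt;br /&gt;
dhcp-host=52:54:00:00:fa:11,controlplane1,172.25.3.21&lt;br /&gt;
dhcp-host=52:54:00:00:fa:12,controlplane2,172.25.3.22&lt;br /&gt;
dhcp-host=52:54:00:00:fa:21,worker1,172.25.3.31&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;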
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
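&lt;br /&gt;
If it is not, you can enable it and make the change persistent across reboots (the file name below is arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo &amp;quot;net.ipv4.ip_forward = 1&amp;quot; | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;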
&lt;br /&gt;
The Linux network settings need to provide two '''Linux''' bridges: a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
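&lt;br /&gt;
If you still need to create such bridges, a rough sketch with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; follows. The connection names are arbitrary, and the enslaved NIC name and addresses must match your host:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# public bridge, enslaving the physical NIC; it gets its address from DHCP&lt;br /&gt;
$ sudo nmcli con add type bridge con-name bridge0 ifname bridge0&lt;br /&gt;
$ sudo nmcli con add type ethernet con-name bridge0-port1 ifname enp86s0 master bridge0&lt;br /&gt;
&lt;br /&gt;
# provisioning bridge, purely virtual, with a static address&lt;br /&gt;
$ sudo nmcli con add type bridge con-name provbr0 ifname provbr0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;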
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface configured as a bridge slave with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any VXLAN ID not used for anything else, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
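&lt;br /&gt;
As a sketch, such an interface can be created with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; roughly like this (using the addresses and VXLAN ID from the example below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    id 10 local 172.25.35.2 remote 172.25.35.3 destination-port 4790 \&lt;br /&gt;
    master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;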
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave of the bridge, instead of coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Set them to autostart to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
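&lt;br /&gt;
Assuming each definition is saved in its own XML file (the file names here are arbitrary), defining, autostarting, and starting the networks looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;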
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important enabler for IPI is the ability to simulate a baseboard management controller (BMC) for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously does not do this, but luckily there is a small piece of Python software that does: &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can manage using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with sufficient compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
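Cloning the definition for all the nodes is mechanical, so it can be scripted. The sketch below only ''prints'' the &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;virsh&amp;lt;/code&amp;gt; commands so you can review them before piping the output to &amp;lt;code&amp;gt;sh&amp;lt;/code&amp;gt;; it assumes you have saved a per-node copy of the XML above as &amp;lt;code&amp;gt;&amp;lt;node&amp;gt;.xml&amp;lt;/code&amp;gt; (those filenames are hypothetical).&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Dry-run helper: print the disk-creation and VM-definition commands
# for every node. Review the output, then pipe it to sh to execute.
set -euo pipefail

IMAGE_DIR=/var/lib/libvirt/images   # matches the <source file=...> path above

gen() {
  local node
  for node in controlplane1 controlplane2 controlplane3 worker1 worker2; do
    # 64 GiB is the minimum root disk size; qcow2 allocates space lazily.
    echo "qemu-img create -f qcow2 ${IMAGE_DIR}/${node}-vda.qcow2 64G"
    # Hypothetical per-node XML file derived from the template above.
    echo "virsh define ${node}.xml"
  done
}

gen
```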
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
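Registering all five nodes follows the same pattern, so it can be scripted as well. The sketch below prints the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; commands for the port scheme used in the &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; example later in this guide (6211-6213 for control plane, 6221-6222 for workers); review the output and pipe it to &amp;lt;code&amp;gt;sh&amp;lt;/code&amp;gt; to execute.&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Dry-run helper: print the vbmc registration commands for every node.
# Ports follow the scheme used in the install-config.yaml example.
set -euo pipefail

register() {
  local spec name port
  for spec in controlplane1:6211 controlplane2:6212 controlplane3:6213 \
              worker1:6221 worker2:6222; do
    name=${spec%%:*}
    port=${spec##*:}
    # Credentials default to admin/password unless overridden.
    echo "vbmc add --port=${port} ${name}"
    echo "vbmc start ${name}"
  done
}

register
```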
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (re [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it would be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt; under the PCI naming scheme, but see the warning about RHCOS interface naming further down)&lt;br /&gt;
&lt;br /&gt;
As already noted, DHCP address assignment for the external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
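The installer will reject VIPs that fall outside the machine network, so it is worth verifying this before starting a run. Below is a pure-bash sketch of the same CIDR membership check, using the example addresses from this guide:&lt;br /&gt;

```shell
#!/usr/bin/env bash
# Check whether an IPv4 address lies within a CIDR block - the same
# validation the installer applies to apiVIPs/ingressVIPs.
set -euo pipefail

ip2int() {                       # dotted quad -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<<"$1"
  echo $(( (a<<24) | (b<<16) | (c<<8) | d ))
}

in_cidr() {                      # usage: in_cidr 172.25.3.10 172.25.3.0/24
  local net=${2%/*} bits=${2#*/}
  local mask=$(( bits == 0 ? 0 : 0xFFFFFFFF << (32 - bits) & 0xFFFFFFFF ))
  (( ($(ip2int "$1") & mask) == ($(ip2int "$net") & mask) ))
}

in_cidr 172.25.3.10 172.25.3.0/24 && echo "api VIP is inside the machine network"
in_cidr 172.25.3.11 10.0.0.0/16   || echo "ingress VIP is NOT in 10.0.0.0/16"
```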
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc ./openshift-client-linux.tar.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
$ rm -f ./openshift-baremetal-install&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively, using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of either type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see whether &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and SSH key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections which are useful in special cases such as disconnected installation or special install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - cidr: 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetwork:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: ens4&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''''WARNING'''''&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Somewhere between RHCOS 4.12 and 4.14 the network devices stopped using the PCI naming scheme (in VMs only?). Consequently, &amp;lt;code&amp;gt;NetworkManager-wait-online&amp;lt;/code&amp;gt; will fail because it's looking for a device (&amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;) that isn't there (it's now called &amp;lt;code&amp;gt;ens4&amp;lt;/code&amp;gt;, go figure). Sure, it's got an ''altname'' of &amp;lt;code&amp;gt;enp0s4&amp;lt;/code&amp;gt; but that is incorrect and &amp;lt;code&amp;gt;NetworkManager&amp;lt;/code&amp;gt; don't care.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip ad sh ens4&lt;br /&gt;
3: ens4: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000&lt;br /&gt;
    link/ether 52:54:00:00:fb:13 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    altname enp0s4&lt;br /&gt;
    inet 10.1.1.13/24 brd 10.1.1.255 scope global noprefixroute ens4&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::3abb:b5:9c4f:c94d/64 scope link noprefixroute&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In turn, that will break most of the rest of the installation. Just configure the interface to be &amp;lt;code&amp;gt;ens4&amp;lt;/code&amp;gt; in the install config and move on.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
With the above installation configuration file created, place a copy of it in a subdirectory, such as &amp;lt;code&amp;gt;./mycluster/&amp;lt;/code&amp;gt; and run the installer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir mycluster&lt;br /&gt;
$ cp install-config.yaml ./mycluster/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ --log-level=debug create cluster&lt;br /&gt;
DEBUG OpenShift Installer 4.14.9&lt;br /&gt;
DEBUG Built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
DEBUG Fetching Metadata...&lt;br /&gt;
DEBUG Loading Metadata...&lt;br /&gt;
...&lt;br /&gt;
DEBUG   Loading Install Config...&lt;br /&gt;
DEBUG   Loading Bootstrap Ignition Config...&lt;br /&gt;
...&lt;br /&gt;
INFO Consuming Install Config from target directory&lt;br /&gt;
...&lt;br /&gt;
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.14-9.2/builds/414.92.202310210434-0/x86_64/rhcos-414.92.202310210434-0-qemu.x86_64.qcow2.gz?sha256=aab55f3ee088b88562f8fdcde5be78ace023e06fa01263e7cb9de2edc7131d6f'&lt;br /&gt;
...&lt;br /&gt;
INFO Creating infrastructure resources...&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you see the above message, check the hypervisor for the presence of the temporary bootstrap VM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ virsh list&lt;br /&gt;
 Id   Name                      State&lt;br /&gt;
---------------------------------------&lt;br /&gt;
 4    provisioner               running&lt;br /&gt;
 5    mycluster-tmkmv-bootstrap running&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you see the VM running, you can inspect any containers on it: use the SSH key configured in the install config to log in and have a look around.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i ~/.ssh/id_rsa core@bootstrap.mycluster.example.com&lt;br /&gt;
The authenticity of host 'bootstrap.mycluster.example.com (172.25.3.9)' can't be established.&lt;br /&gt;
...&lt;br /&gt;
Red Hat Enterprise Linux CoreOS 414.92.202310210434-0&lt;br /&gt;
  Part of OpenShift 4.14, RHCOS is a Kubernetes native operating system&lt;br /&gt;
  managed by the Machine Config Operator (`clusteroperator/machine-config`).&lt;br /&gt;
&lt;br /&gt;
WARNING: Direct SSH access to machines is not recommended; instead,&lt;br /&gt;
make configuration changes via `machineconfig` objects:&lt;br /&gt;
...&lt;br /&gt;
[core@localhost ~]$ sudo -i&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active crio&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active bootkube&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# podman ps&lt;br /&gt;
CONTAINER ID  IMAGE                                                      COMMAND          CREATED             STATUS             PORTS       NAMES&lt;br /&gt;
da3b3e74fc7f  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:...  /bin/rundnsmasq  About a minute ago  Up About a minute              dnsmasq&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl pods&lt;br /&gt;
POD ID              CREATED             STATE               NAME                                          NAMESPACE           ATTEMPT             RUNTIME&lt;br /&gt;
af9df933d3b91       49 seconds ago      Ready               etcd-bootstrap-member-localhost.localdomain   openshift-etcd      0                   (default)&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl ps&lt;br /&gt;
CONTAINER           IMAGE                                                     CREATED        STATE    NAME      ATTEMPT   POD ID         POD&lt;br /&gt;
9ad74f6433a33       753b0a16ba606f4c579690ed0035..                            6 seconds ago  Running  etcd      0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
5c9371b25e646       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:... 6 seconds ago  Running  etcdctl   0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you see pods in namespaces such as &amp;lt;code&amp;gt;kube-system&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;openshift-kube-apiserver&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;openshift-cluster-version&amp;lt;/code&amp;gt; show up in the ready state, you can leave the shell and use the generated &amp;lt;code&amp;gt;kubeconfig&amp;lt;/code&amp;gt; file to track the progress of the installation from the provisioner machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export KUBECONFIG=$(pwd)/cluster/auth/kubeconfig&lt;br /&gt;
&lt;br /&gt;
$ oc get clusterversion&lt;br /&gt;
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS&lt;br /&gt;
version             False       True          15m     Unable to apply 4.14.9: an unknown error has occurred: MultipleErrors&lt;br /&gt;
&lt;br /&gt;
$ oc get clusteroperators&lt;br /&gt;
$ oc get co&lt;br /&gt;
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE&lt;br /&gt;
authentication&lt;br /&gt;
baremetal&lt;br /&gt;
cloud-controller-manager&lt;br /&gt;
cloud-credential                                     True        False         False      15m&lt;br /&gt;
cluster-autoscaler&lt;br /&gt;
config-operator&lt;br /&gt;
console&lt;br /&gt;
control-plane-machine-set&lt;br /&gt;
csi-snapshot-controller&lt;br /&gt;
dns&lt;br /&gt;
etcd&lt;br /&gt;
image-registry&lt;br /&gt;
ingress&lt;br /&gt;
insights&lt;br /&gt;
kube-apiserver&lt;br /&gt;
kube-controller-manager&lt;br /&gt;
kube-scheduler&lt;br /&gt;
kube-storage-version-migrator&lt;br /&gt;
machine-api&lt;br /&gt;
machine-approver&lt;br /&gt;
machine-config&lt;br /&gt;
marketplace&lt;br /&gt;
monitoring&lt;br /&gt;
network&lt;br /&gt;
node-tuning&lt;br /&gt;
openshift-apiserver&lt;br /&gt;
openshift-controller-manager&lt;br /&gt;
openshift-samples&lt;br /&gt;
operator-lifecycle-manager&lt;br /&gt;
operator-lifecycle-manager-catalog&lt;br /&gt;
operator-lifecycle-manager-packageserver&lt;br /&gt;
service-ca&lt;br /&gt;
storage&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': It is normal for cluster version and various cluster operators to report transient error states as the progress of one impacts the progress of others. Eventually all these errors should go away.&lt;br /&gt;
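If you would rather not re-run these commands by hand, a simple polling loop can track the rollout until the cluster version reports itself as available. This is just a sketch; the 30-second interval is arbitrary, and it assumes &lt;code&gt;KUBECONFIG&lt;/code&gt; is already exported as shown above.

```shell
# Poll the cluster version until the Available condition turns True.
until oc get clusterversion version \
        -o jsonpath='{.status.conditions[?(@.type=="Available")].status}' \
        | grep -q True; do
    oc get clusterversion
    sleep 30
done
echo "Cluster version is Available."
```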
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=62</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=62"/>
				<updated>2024-02-02T20:49:30Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Creating Configuration Manually */ ens3 vs enp0s3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
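As a sanity check, the per-node figures above can be tallied and compared against the overcommit limits. A small shell sketch, using the node counts and sizes from this guide:

```shell
# Cluster totals for 3 control plane + 2 compute nodes, per the figures above.
ram=$((3 * 32 + 2 * 20))     # GiB of RAM the VMs will claim
vcpu=$((3 * 12 + 2 * 8))     # vCPUs the VMs will claim
echo "RAM needed: ${ram} GiB, vCPUs needed: ${vcpu}"

# With overcommit ratios of 1.5 (RAM) and 1.3 (CPU), the physical
# minimums are roughly (integer arithmetic, rounded down):
echo "Physical minimum: $((ram * 10 / 15)) GiB RAM, $((vcpu * 10 / 13)) vCPUs"
```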
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing major. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command-line client, in a version matching the target cluster - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
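If it reports 0, you can flip it on the spot and persist the setting across reboots. A sketch; the drop-in file name is arbitrary:

```shell
# Enable IPv4 forwarding immediately...
sudo sysctl -w net.ipv4.ip_forward=1

# ...and persist it across reboots via a sysctl drop-in.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf
```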
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so we can say goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface that is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unused VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
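A hedged sketch of creating such an interface with &lt;code&gt;nmcli&lt;/code&gt;; the connection name, VXLAN ID, and addresses are the ones from the example below, so adjust them for your hosts:

```shell
# On hypervisor A: create a VXLAN tunnel to hypervisor B and
# enslave it to the provisioning bridge provbr0.
nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 \
    id 10 local 172.25.35.2 remote 172.25.35.3 destination-port 4790 \
    master provbr0 slave-type bridge

# On hypervisor B, repeat the command with local/remote swapped.
```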
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
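If these networks do not exist yet, they can be created from XML snippets like the above. A sketch; the file names are arbitrary and each file is assumed to hold one of the two definitions:

```shell
# Define, autostart, and start both libvirt networks from their XML files.
for net in external provisioning; do
    sudo virsh net-define "${net}.xml"
    sudo virsh net-autostart "${net}"
    sudo virsh net-start "${net}"
done
```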
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt; which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64 GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
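The backing disk images referenced in the XML can be pre-created with &lt;code&gt;qemu-img&lt;/code&gt;. A sketch; 120 GiB is an arbitrary size above the 64 GiB minimum:

```shell
# Create a sparse qcow2 image; only written blocks consume real space.
sudo qemu-img create -f qcow2 \
    /var/lib/libvirt/images/controlplane1-vda.qcow2 120G
```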
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URL and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (defaults to all addresses, use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port) you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already mentioned, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc ./openshift-client-linux.tar.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
$ rm -f ./openshift-baremetal-install&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively, using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of each type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see whether &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
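&lt;br /&gt;
You can reproduce that membership check locally before even starting the installer; a quick sketch with &amp;lt;code&amp;gt;python3&amp;lt;/code&amp;gt;, using the VIP and default machine network from the error above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ python3 -c 'import ipaddress; print(ipaddress.ip_address(&amp;quot;172.25.3.10&amp;quot;) in ipaddress.ip_network(&amp;quot;10.0.0.0/16&amp;quot;))'&lt;br /&gt;
False&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;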
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections which are useful in special cases, such as disconnected installations or other non-default install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - cidr: 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetwork:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: ens3&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Somewhere between RHCOS 4.12 and 4.14 the network devices stopped using the PCI naming scheme (in VMs only?). Consequently, &amp;lt;code&amp;gt;NetworkManager-wait-online&amp;lt;/code&amp;gt; will fail because it's looking for a device (&amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;) that isn't there (it's now called &amp;lt;code&amp;gt;ens3&amp;lt;/code&amp;gt;). Sure, it's got an ''altname'' of &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt; but &amp;lt;code&amp;gt;NetworkManager&amp;lt;/code&amp;gt; don't care.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip ad sh enp0s3&lt;br /&gt;
2: ens3: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000&lt;br /&gt;
    link/ether 52:54:00:00:fa:13 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    altname enp0s3&lt;br /&gt;
    inet 172.25.3.13/24 brd 172.25.3.255 scope global dynamic noprefixroute ens3&lt;br /&gt;
       valid_lft 86083sec preferred_lft 86083sec&lt;br /&gt;
    inet6 fe80::a30:57dd:ee31:e42e/64 scope link noprefixroute&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In turn, that will break most of the rest of the installation. Go figure. Just call the interface &amp;lt;code&amp;gt;ens3&amp;lt;/code&amp;gt; and move on.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
With the above installation configuration file created, place a copy of it in a subdirectory, such as &amp;lt;code&amp;gt;./mycluster/&amp;lt;/code&amp;gt;, and run the installer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir mycluster&lt;br /&gt;
$ cp install-config.yaml ./mycluster/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ --log-level=debug create cluster&lt;br /&gt;
DEBUG OpenShift Installer 4.14.9&lt;br /&gt;
DEBUG Built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
DEBUG Fetching Metadata...&lt;br /&gt;
DEBUG Loading Metadata...&lt;br /&gt;
...&lt;br /&gt;
DEBUG   Loading Install Config...&lt;br /&gt;
DEBUG   Loading Bootstrap Ignition Config...&lt;br /&gt;
...&lt;br /&gt;
INFO Consuming Install Config from target directory&lt;br /&gt;
...&lt;br /&gt;
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.14-9.2/builds/414.92.202310210434-0/x86_64/rhcos-414.92.202310210434-0-qemu.x86_64.qcow2.gz?sha256=aab55f3ee088b88562f8fdcde5be78ace023e06fa01263e7cb9de2edc7131d6f'&lt;br /&gt;
...&lt;br /&gt;
INFO Creating infrastructure resources...&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you see the above message, check the hypervisor for the presence of the temporary bootstrap VM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ virsh list&lt;br /&gt;
 Id   Name                      State&lt;br /&gt;
---------------------------------------&lt;br /&gt;
 4    provisioner               running&lt;br /&gt;
 5    mycluster-tmkmv-bootstrap running&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you see the VM running, you can log into it with the SSH key configured in the install config and have a look around at the containers running on it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i ~/.ssh/id_rsa core@bootstrap.mycluster.example.com&lt;br /&gt;
The authenticity of host 'bootstrap.mycluster.example.com (172.25.3.9)' can't be established.&lt;br /&gt;
...&lt;br /&gt;
Red Hat Enterprise Linux CoreOS 414.92.202310210434-0&lt;br /&gt;
  Part of OpenShift 4.14, RHCOS is a Kubernetes native operating system&lt;br /&gt;
  managed by the Machine Config Operator (`clusteroperator/machine-config`).&lt;br /&gt;
&lt;br /&gt;
WARNING: Direct SSH access to machines is not recommended; instead,&lt;br /&gt;
make configuration changes via `machineconfig` objects:&lt;br /&gt;
...&lt;br /&gt;
[core@localhost ~]$ sudo -i&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active crio&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active bootkube&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# podman ps&lt;br /&gt;
CONTAINER ID  IMAGE                                                      COMMAND          CREATED             STATUS             PORTS       NAMES&lt;br /&gt;
da3b3e74fc7f  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:...  /bin/rundnsmasq  About a minute ago  Up About a minute              dnsmasq&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl pods&lt;br /&gt;
POD ID              CREATED             STATE               NAME                                          NAMESPACE           ATTEMPT             RUNTIME&lt;br /&gt;
af9df933d3b91       49 seconds ago      Ready               etcd-bootstrap-member-localhost.localdomain   openshift-etcd      0                   (default)&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl ps&lt;br /&gt;
CONTAINER           IMAGE                                                     CREATED        STATE    NAME      ATTEMPT   POD ID         POD&lt;br /&gt;
9ad74f6433a33       753b0a16ba606f4c579690ed0035..                            6 seconds ago  Running  etcd      0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
5c9371b25e646       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:... 6 seconds ago  Running  etcdctl   0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you see pods from namespaces like &amp;lt;code&amp;gt;kube-system&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;openshift-kube-apiserver&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;openshift-cluster-version&amp;lt;/code&amp;gt; show up in the ready state, you can leave the shell and use the generated &amp;lt;code&amp;gt;kubeconfig&amp;lt;/code&amp;gt; file to track the progress of the installation from the provisioner machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export KUBECONFIG=$(pwd)/mycluster/auth/kubeconfig&lt;br /&gt;
&lt;br /&gt;
$ oc get clusterversion&lt;br /&gt;
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS&lt;br /&gt;
version             False       True          15m     Unable to apply 4.14.9: an unknown error has occurred: MultipleErrors&lt;br /&gt;
&lt;br /&gt;
$ oc get clusteroperators  # 'oc get co' for short&lt;br /&gt;
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE&lt;br /&gt;
authentication&lt;br /&gt;
baremetal&lt;br /&gt;
cloud-controller-manager&lt;br /&gt;
cloud-credential                                     True        False         False      15m&lt;br /&gt;
cluster-autoscaler&lt;br /&gt;
config-operator&lt;br /&gt;
console&lt;br /&gt;
control-plane-machine-set&lt;br /&gt;
csi-snapshot-controller&lt;br /&gt;
dns&lt;br /&gt;
etcd&lt;br /&gt;
image-registry&lt;br /&gt;
ingress&lt;br /&gt;
insights&lt;br /&gt;
kube-apiserver&lt;br /&gt;
kube-controller-manager&lt;br /&gt;
kube-scheduler&lt;br /&gt;
kube-storage-version-migrator&lt;br /&gt;
machine-api&lt;br /&gt;
machine-approver&lt;br /&gt;
machine-config&lt;br /&gt;
marketplace&lt;br /&gt;
monitoring&lt;br /&gt;
network&lt;br /&gt;
node-tuning&lt;br /&gt;
openshift-apiserver&lt;br /&gt;
openshift-controller-manager&lt;br /&gt;
openshift-samples&lt;br /&gt;
operator-lifecycle-manager&lt;br /&gt;
operator-lifecycle-manager-catalog&lt;br /&gt;
operator-lifecycle-manager-packageserver&lt;br /&gt;
service-ca&lt;br /&gt;
storage&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': It is normal for cluster version and various cluster operators to report transient error states as the progress of one impacts the progress of others. Eventually all these errors should go away.&lt;br /&gt;
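&lt;br /&gt;
Instead of polling manually, you can also let the installer itself block until the deployment reports complete:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ wait-for install-complete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On success, it prints the console URL and the &amp;lt;code&amp;gt;kubeadmin&amp;lt;/code&amp;gt; credentials.&lt;br /&gt;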
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=61</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=61"/>
				<updated>2024-02-02T18:58:17Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Preparing the Software */ cleanup steps&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM, with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl -n net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
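&lt;br /&gt;
If it shows &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt; (or to make sure the setting survives reboots), enable it persistently with a &amp;lt;code&amp;gt;sysctl.d&amp;lt;/code&amp;gt; drop-in; the file name below is arbitrary:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;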
&lt;br /&gt;
The Linux network settings need to provide two '''Linux''' bridges, a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a purely virtual bridge, since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (DHCP, DNS, etc.)&lt;br /&gt;
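&lt;br /&gt;
As a sketch, the two bridges could be created with NetworkManager along these lines. The addresses match my example configuration shown below; the gateway is a hypothetical value, so adjust names and addresses to your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge con-name bridge0 ifname bridge0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 172.25.35.2/24 ipv4.gateway 172.25.35.1&lt;br /&gt;
$ sudo nmcli con add type ethernet con-name bridge0-port0 ifname enp86s0 \&lt;br /&gt;
    master bridge0 slave-type bridge&lt;br /&gt;
$ sudo nmcli con add type bridge con-name provbr0 ifname provbr0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;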
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so there's goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want your cluster to span multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface, set up as a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt; with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is otherwise unused, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
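&lt;br /&gt;
As a sketch, the interface shown below for hypervisor A could be created with a single &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; command (swap the local/remote addresses on host B):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;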
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
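&lt;br /&gt;
Note that the two definitions are exact mirror images of each other: &amp;lt;code&amp;gt;vxlan.local&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;vxlan.remote&amp;lt;/code&amp;gt; are swapped, while the VNI and destination port match on both ends. As a sketch, that consistency condition can be expressed in Python (a hypothetical helper for illustration, not part of any tool used here):&lt;br /&gt;
&lt;br /&gt;
```python
def vxlan_pair_consistent(a: dict, b: dict) -> bool:
    """A point-to-point VXLAN pair is consistent when each end's local
    address is the other end's remote address, and both ends agree on
    the VNI and the destination port."""
    return (a["local"] == b["remote"] and a["remote"] == b["local"]
            and a["id"] == b["id"]
            and a["destination-port"] == b["destination-port"])

# Values taken from the two nmcli outputs above.
host_a = {"id": 10, "local": "172.25.35.2", "remote": "172.25.35.3", "destination-port": 4790}
host_b = {"id": 10, "local": "172.25.35.3", "remote": "172.25.35.2", "destination-port": 4790}
print(vxlan_pair_consistent(host_a, host_b))  # True
```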
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave right from the start, rather than producing empty output as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions that look a bit like the following XML. Mark them as autostarting to save yourself a headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller (BMC) for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there is a small piece of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
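&lt;br /&gt;
The MAC addressing used in this article follows a simple convention (an assumption of this walkthrough, not anything &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; requires): &amp;lt;code&amp;gt;52:54:00:00:fb:NN&amp;lt;/code&amp;gt; for provisioning interfaces and &amp;lt;code&amp;gt;52:54:00:00:fa:NN&amp;lt;/code&amp;gt; for external ones, where ''NN'' identifies the node. Sketched as a tiny helper:&lt;br /&gt;
&lt;br /&gt;
```python
def node_mac(role_byte: str, index: int) -> str:
    """Derive a node MAC following this article's convention (hypothetical):
    52:54:00:00:fb:NN for provisioning NICs, 52:54:00:00:fa:NN for external
    NICs, with NN encoding the node index in hex."""
    return f"52:54:00:00:{role_byte}:{index:02x}"

print(node_mac("fb", 0x11))  # 52:54:00:00:fb:11 - controlplane1, provisioning
print(node_mac("fa", 0x11))  # 52:54:00:00:fa:11 - controlplane1, external
```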
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
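&lt;br /&gt;
The port number itself is arbitrary, as long as it is unique per VM and reachable. This article uses a simple convention (our own choice, nothing &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; mandates): 621''N'' for control plane node ''N'' and 622''N'' for worker ''N''. Sketched in Python:&lt;br /&gt;
&lt;br /&gt;
```python
def bmc_port(role: str, index: int) -> int:
    """Port convention used in this walkthrough (an assumption, not a
    vbmcd requirement): 621N for control plane node N, 622N for worker N."""
    base = {"master": 6210, "worker": 6220}
    return base[role] + index

print(bmc_port("master", 1))  # 6211, as used for controlplane1 above
```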
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt;; it defaults to all addresses)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
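&lt;br /&gt;
For example, in BIND zone file syntax (names and addresses are the ones used throughout this article), the two VIP records might look like this:&lt;br /&gt;
&lt;br /&gt;
```
api.mycluster.example.com.    IN A 172.25.3.10
*.apps.mycluster.example.com. IN A 172.25.3.11
```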
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
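The last point follows from systemd's predictable interface naming: for a plain PCI Ethernet device, the name is derived from the PCI bus and slot numbers. A simplified sketch (a hypothetical helper that ignores PCI functions, domains, and firmware-provided names):&lt;br /&gt;
&lt;br /&gt;
```python
def pci_to_ifname(bus: int, slot: int) -> str:
    """Simplified systemd-style predictable name for a PCI NIC:
    'en' + 'p' + bus + 's' + slot. Ignores functions, domains, and
    firmware-provided index names."""
    return f"enp{bus}s{slot}"

# The provisioning NIC above sits at bus 0x00, slot 0x03:
print(pci_to_ifname(0, 3))  # enp0s3
```
&lt;br /&gt;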
As already noted, DHCP address assignment on the external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you haven't done that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc ./openshift-client-linux.tar.gz&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
$ rm -f ./openshift-baremetal-install&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three of each type of node to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see if &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
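&lt;br /&gt;
The check that produces this error is essentially subnet membership, which can be sketched with Python's &amp;lt;code&amp;gt;ipaddress&amp;lt;/code&amp;gt; module (a simplified illustration, not the installer's actual code):&lt;br /&gt;
&lt;br /&gt;
```python
import ipaddress

def vips_in_machine_network(vips, machine_cidr):
    """Return True if every VIP falls inside the machine network."""
    net = ipaddress.ip_network(machine_cidr)
    return all(ipaddress.ip_address(v) in net for v in vips)

# The interactive default machine network is 10.0.0.0/16, so our VIPs fail:
print(vips_in_machine_network(["172.25.3.10", "172.25.3.11"], "10.0.0.0/16"))   # False
# With the actual machine network from this example, they pass:
print(vips_in_machine_network(["172.25.3.10", "172.25.3.11"], "172.25.3.0/24")) # True
```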
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections which are useful in special cases such as disconnected installation or special install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The installer binary has an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand that describes the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - cidr: 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetworks:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if the provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
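&lt;br /&gt;
The &amp;lt;code&amp;gt;hosts&amp;lt;/code&amp;gt; list above is repetitive, so it lends itself to being generated. The following is only a sketch (in Python, which you will have around anyway for &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;) that mirrors the naming, IPMI port, and MAC scheme of this example; adjust it for your own setup.&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: generate the repetitive install-config.yaml "hosts" entries.
# The naming/port/MAC scheme mirrors the example above; adjust as needed.

def host_entry(name, role, port, mac):
    return (
        f"      - name: {name}\n"
        f"        role: {role}\n"
        f"        bmc:\n"
        f"          address: ipmi://hypervisor.example.com:{port}\n"
        f"          disableCertificateVerification: true\n"
        f"          username: admin\n"
        f"          password: password\n"
        f"        bootMACAddress: {mac}\n"
        f"        bootMode: legacy\n"
        f"        hardwareProfile: libvirt\n"
    )

def hosts_yaml(masters=3, workers=2):
    entries = []
    for i in range(1, masters + 1):
        entries.append(host_entry(f"controlplane{i}", "master",
                                  6210 + i, f"52:54:00:00:fb:1{i}"))
    for i in range(1, workers + 1):
        entries.append(host_entry(f"worker{i}", "worker",
                                  6220 + i, f"52:54:00:00:fb:2{i}"))
    return "".join(entries)

print(hosts_yaml())
```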
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
With the above installation configuration file created, place a copy of it in a subdirectory, such as &amp;lt;code&amp;gt;./mycluster/&amp;lt;/code&amp;gt;, and run the installer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir mycluster&lt;br /&gt;
$ cp install-config.yaml ./mycluster/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ --log-level=debug create cluster&lt;br /&gt;
DEBUG OpenShift Installer 4.14.9&lt;br /&gt;
DEBUG Built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
DEBUG Fetching Metadata...&lt;br /&gt;
DEBUG Loading Metadata...&lt;br /&gt;
...&lt;br /&gt;
DEBUG   Loading Install Config...&lt;br /&gt;
DEBUG   Loading Bootstrap Ignition Config...&lt;br /&gt;
...&lt;br /&gt;
INFO Consuming Install Config from target directory&lt;br /&gt;
...&lt;br /&gt;
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.14-9.2/builds/414.92.202310210434-0/x86_64/rhcos-414.92.202310210434-0-qemu.x86_64.qcow2.gz?sha256=aab55f3ee088b88562f8fdcde5be78ace023e06fa01263e7cb9de2edc7131d6f'&lt;br /&gt;
...&lt;br /&gt;
INFO Creating infrastructure resources...&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you see the above message, check the hypervisor for the presence of the temporary bootstrap VM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ virsh list&lt;br /&gt;
 Id   Name                      State&lt;br /&gt;
---------------------------------------&lt;br /&gt;
 4    provisioner               running&lt;br /&gt;
 5    mycluster-tmkmv-bootstrap running&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the VM is running, you can log into it using the SSH key configured in the install config and have a look around at the containers running on it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i ~/.ssh/id_rsa core@bootstrap.mycluster.example.com&lt;br /&gt;
The authenticity of host 'bootstrap.mycluster.example.com (172.25.3.9)' can't be established.&lt;br /&gt;
...&lt;br /&gt;
Red Hat Enterprise Linux CoreOS 414.92.202310210434-0&lt;br /&gt;
  Part of OpenShift 4.14, RHCOS is a Kubernetes native operating system&lt;br /&gt;
  managed by the Machine Config Operator (`clusteroperator/machine-config`).&lt;br /&gt;
&lt;br /&gt;
WARNING: Direct SSH access to machines is not recommended; instead,&lt;br /&gt;
make configuration changes via `machineconfig` objects:&lt;br /&gt;
...&lt;br /&gt;
[core@localhost ~]$ sudo -i&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active crio&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active bootkube&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# podman ps&lt;br /&gt;
CONTAINER ID  IMAGE                                                      COMMAND          CREATED             STATUS             PORTS       NAMES&lt;br /&gt;
da3b3e74fc7f  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:...  /bin/rundnsmasq  About a minute ago  Up About a minute              dnsmasq&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl pods&lt;br /&gt;
POD ID              CREATED             STATE               NAME                                          NAMESPACE           ATTEMPT             RUNTIME&lt;br /&gt;
af9df933d3b91       49 seconds ago      Ready               etcd-bootstrap-member-localhost.localdomain   openshift-etcd      0                   (default)&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl ps&lt;br /&gt;
CONTAINER           IMAGE                                                     CREATED        STATE    NAME      ATTEMPT   POD ID         POD&lt;br /&gt;
9ad74f6433a33       753b0a16ba606f4c579690ed0035..                            6 seconds ago  Running  etcd      0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
5c9371b25e646       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:... 6 seconds ago  Running  etcdctl   0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you see pods from namespaces such as &amp;lt;code&amp;gt;kube-system&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;openshift-kube-apiserver&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;openshift-cluster-version&amp;lt;/code&amp;gt; show up in the ready state, you can leave the shell and use the generated &amp;lt;code&amp;gt;kubeconfig&amp;lt;/code&amp;gt; file to track the progress of the installation from the provisioner machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export KUBECONFIG=$(pwd)/mycluster/auth/kubeconfig&lt;br /&gt;
&lt;br /&gt;
$ oc get clusterversion&lt;br /&gt;
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS&lt;br /&gt;
version             False       True          15m     Unable to apply 4.14.9: an unknown error has occurred: MultipleErrors&lt;br /&gt;
&lt;br /&gt;
$ oc get clusteroperators    # or the short form: oc get co&lt;br /&gt;
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE&lt;br /&gt;
authentication&lt;br /&gt;
baremetal&lt;br /&gt;
cloud-controller-manager&lt;br /&gt;
cloud-credential                                     True        False         False      15m&lt;br /&gt;
cluster-autoscaler&lt;br /&gt;
config-operator&lt;br /&gt;
console&lt;br /&gt;
control-plane-machine-set&lt;br /&gt;
csi-snapshot-controller&lt;br /&gt;
dns&lt;br /&gt;
etcd&lt;br /&gt;
image-registry&lt;br /&gt;
ingress&lt;br /&gt;
insights&lt;br /&gt;
kube-apiserver&lt;br /&gt;
kube-controller-manager&lt;br /&gt;
kube-scheduler&lt;br /&gt;
kube-storage-version-migrator&lt;br /&gt;
machine-api&lt;br /&gt;
machine-approver&lt;br /&gt;
machine-config&lt;br /&gt;
marketplace&lt;br /&gt;
monitoring&lt;br /&gt;
network&lt;br /&gt;
node-tuning&lt;br /&gt;
openshift-apiserver&lt;br /&gt;
openshift-controller-manager&lt;br /&gt;
openshift-samples&lt;br /&gt;
operator-lifecycle-manager&lt;br /&gt;
operator-lifecycle-manager-catalog&lt;br /&gt;
operator-lifecycle-manager-packageserver&lt;br /&gt;
service-ca&lt;br /&gt;
storage&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': It is normal for cluster version and various cluster operators to report transient error states as the progress of one impacts the progress of others. Eventually all these errors should go away.&lt;br /&gt;
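&lt;br /&gt;
If you would rather not eyeball the table, a few lines of Python can summarize operator readiness from &amp;lt;code&amp;gt;oc get co -o json&amp;lt;/code&amp;gt; output. This is only a sketch; the inline sample below stands in for real &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; output.&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: summarize cluster operator availability from the parsed output
# of `oc get co -o json`. "Available" is a standard ClusterOperator
# status condition type.

def unavailable_operators(co_list):
    """Return names of operators that do not report Available=True."""
    pending = []
    for co in co_list.get("items", []):
        conds = {c["type"]: c["status"]
                 for c in co.get("status", {}).get("conditions", [])}
        if conds.get("Available") != "True":
            pending.append(co["metadata"]["name"])
    return pending

# Inline sample in place of real `oc` output:
sample = {
    "items": [
        {"metadata": {"name": "cloud-credential"},
         "status": {"conditions": [{"type": "Available", "status": "True"}]}},
        {"metadata": {"name": "etcd"},
         "status": {"conditions": [{"type": "Available", "status": "False"}]}},
        {"metadata": {"name": "console"}, "status": {"conditions": []}},
    ]
}
print(unavailable_operators(sample))  # ['etcd', 'console']
```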
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=60</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=60"/>
				<updated>2024-01-26T12:49:46Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installation */ add debugging through API&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, a few extra steps are needed, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command-line client of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that is not an option.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface that is a bridge slave (&amp;lt;code&amp;gt;slave-type bridge&amp;lt;/code&amp;gt;) with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any VXLAN ID not otherwise in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
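&lt;br /&gt;
As a sketch, the symmetric pair of &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; commands for two hypervisors can be generated like this; verify the exact option names against your NetworkManager version.&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: emit the symmetric pair of nmcli commands that create a VXLAN
# link between two hypervisors' provisioning bridges. Verify the exact
# nmcli option names against your NetworkManager version.

def vxlan_cmd(local, remote, vxlan_id=10, bridge="provbr0", port=4790):
    name = f"{bridge}-vxlan{vxlan_id}"
    return (f"nmcli con add type vxlan con-name {name} ifname {name} "
            f"id {vxlan_id} local {local} remote {remote} "
            f"destination-port {port} master {bridge} slave-type bridge")

hyp_a, hyp_b = "172.25.35.2", "172.25.35.3"
print(vxlan_cmd(hyp_a, hyp_b))  # run on hypervisor A
print(vxlan_cmd(hyp_b, hyp_a))  # run on hypervisor B
```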
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will initially also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking much like the following XML. Make sure they are set to autostart to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64 GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
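&lt;br /&gt;
As a convenience sketch, the backing disk images for all nodes in this guide can be created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; commands generated like this (the paths, names, and sizes are just the ones used above; adjust to taste):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: emit qemu-img commands creating a 64 GiB backing disk per node.
# Node names and the storage path follow this guide's conventions.

NODES = ["controlplane1", "controlplane2", "controlplane3",
         "worker1", "worker2"]

def disk_cmd(node, size="64G", pool="/var/lib/libvirt/images"):
    return f"qemu-img create -f qcow2 {pool}/{node}-vda.qcow2 {size}"

for node in NODES:
    print(disk_cmd(node))
```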
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
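&lt;br /&gt;
Registering all five nodes by hand gets tedious; here is a sketch that emits the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; commands for the port scheme used in this guide (621x for control plane, 622x for workers):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: emit `vbmc add`/`vbmc start` commands for every node, using the
# port scheme from this guide (621x for control plane, 622x for workers).

def vbmc_cmds(masters=3, workers=2):
    cmds = []
    nodes = [(f"controlplane{i}", 6210 + i) for i in range(1, masters + 1)]
    nodes += [(f"worker{i}", 6220 + i) for i in range(1, workers + 1)]
    for name, port in nodes:
        cmds.append(f"vbmc add --port={port} {name}")
        cmds.append(f"vbmc start {name}")
    return cmds

for cmd in vbmc_cmds():
    print(cmd)
```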
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URL and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (defaults to all addresses, use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system with access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with the DNS domain; for example, &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
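As a sketch, the corresponding records in a BIND-style zone file for example.com could look like this (the VIP addresses match the ones used elsewhere in this guide; substitute your own):

```
; API VIP and wildcard ingress VIP for cluster "mycluster"
api.mycluster      IN  A  172.25.3.10
*.apps.mycluster   IN  A  172.25.3.11
```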
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above; since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
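If your infrastructure DHCP server happens to be dnsmasq, a minimal sketch of pinning the external node addresses by MAC could look like this (the MAC addresses and IPs are hypothetical; use the external-network MACs of your VMs, not the provisioning bootMACAddress):

```
# /etc/dnsmasq.d/mycluster.conf (illustrative values)
dhcp-range=172.25.3.100,172.25.3.150,12h
dhcp-host=52:54:00:00:fa:11,controlplane1,172.25.3.101
dhcp-host=52:54:00:00:fa:21,worker1,172.25.3.104
```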
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, the machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of either type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see whether &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
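You can run the equivalent resolution checks yourself before launching the installer (assuming the API and wildcard ingress records are already in your DNS):

```
$ dig +short api.mycluster.example.com
$ dig +short test.apps.mycluster.example.com
```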
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections that are useful in special cases, such as disconnected installations or other special install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - cidr: 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetworks:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
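A note on the networking numbers above: &amp;lt;code&amp;gt;hostPrefix: 23&amp;lt;/code&amp;gt; means each node is allocated a /23 slice of the /14 &amp;lt;code&amp;gt;clusterNetworks&amp;lt;/code&amp;gt; range, which bounds both the maximum node count and the pods per node. A quick sanity check of that arithmetic:

```shell
# /14 clusterNetwork carved into /23 per-node subnets:
subnets=$(( 1 << (23 - 14) ))              # number of node subnets
pods_per_node=$(( (1 << (32 - 23)) - 2 ))  # usable pod IPs per node
echo "${subnets} nodes max, ${pods_per_node} pod IPs each"
# -> 512 nodes max, 510 pod IPs each
```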
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
With the above installation configuration file created, place a copy of it in a subdirectory, such as &amp;lt;code&amp;gt;./mycluster/&amp;lt;/code&amp;gt;, and run the installer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir mycluster&lt;br /&gt;
$ cp install-config.yaml ./mycluster/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ --log-level=debug create cluster&lt;br /&gt;
DEBUG OpenShift Installer 4.14.9&lt;br /&gt;
DEBUG Built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
DEBUG Fetching Metadata...&lt;br /&gt;
DEBUG Loading Metadata...&lt;br /&gt;
...&lt;br /&gt;
DEBUG   Loading Install Config...&lt;br /&gt;
DEBUG   Loading Bootstrap Ignition Config...&lt;br /&gt;
...&lt;br /&gt;
INFO Consuming Install Config from target directory&lt;br /&gt;
...&lt;br /&gt;
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.14-9.2/builds/414.92.202310210434-0/x86_64/rhcos-414.92.202310210434-0-qemu.x86_64.qcow2.gz?sha256=aab55f3ee088b88562f8fdcde5be78ace023e06fa01263e7cb9de2edc7131d6f'&lt;br /&gt;
...&lt;br /&gt;
INFO Creating infrastructure resources...&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you see the above message, check the hypervisor for the presence of the temporary bootstrap VM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ virsh list&lt;br /&gt;
 Id   Name                      State&lt;br /&gt;
---------------------------------------&lt;br /&gt;
 4    provisioner               running&lt;br /&gt;
 5    mycluster-tmkmv-bootstrap running&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the VM is running, you can log into it using the SSH key configured in the install config and inspect the containers running on it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i ~/.ssh/id_rsa core@bootstrap.mycluster.example.com&lt;br /&gt;
The authenticity of host 'bootstrap.mycluster.example.com (172.25.3.9)' can't be established.&lt;br /&gt;
...&lt;br /&gt;
Red Hat Enterprise Linux CoreOS 414.92.202310210434-0&lt;br /&gt;
  Part of OpenShift 4.14, RHCOS is a Kubernetes native operating system&lt;br /&gt;
  managed by the Machine Config Operator (`clusteroperator/machine-config`).&lt;br /&gt;
&lt;br /&gt;
WARNING: Direct SSH access to machines is not recommended; instead,&lt;br /&gt;
make configuration changes via `machineconfig` objects:&lt;br /&gt;
...&lt;br /&gt;
[core@localhost ~]$ sudo -i&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active crio&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active bootkube&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# podman ps&lt;br /&gt;
CONTAINER ID  IMAGE                                                      COMMAND          CREATED             STATUS             PORTS       NAMES&lt;br /&gt;
da3b3e74fc7f  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:...  /bin/rundnsmasq  About a minute ago  Up About a minute              dnsmasq&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl pods&lt;br /&gt;
POD ID              CREATED             STATE               NAME                                          NAMESPACE           ATTEMPT             RUNTIME&lt;br /&gt;
af9df933d3b91       49 seconds ago      Ready               etcd-bootstrap-member-localhost.localdomain   openshift-etcd      0                   (default)&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl ps&lt;br /&gt;
CONTAINER           IMAGE                                                     CREATED        STATE    NAME      ATTEMPT   POD ID         POD&lt;br /&gt;
9ad74f6433a33       753b0a16ba606f4c579690ed0035..                            6 seconds ago  Running  etcd      0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
5c9371b25e646       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:... 6 seconds ago  Running  etcdctl   0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once you see pods in namespaces like &amp;lt;code&amp;gt;kube-system&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;openshift-kube-apiserver&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;openshift-cluster-version&amp;lt;/code&amp;gt; show up in the ready state, you can leave the shell and use the generated &amp;lt;code&amp;gt;kubeconfig&amp;lt;/code&amp;gt; file to track the progress of the installation from the provisioner machine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ export KUBECONFIG=$(pwd)/mycluster/auth/kubeconfig&lt;br /&gt;
&lt;br /&gt;
$ oc get clusterversion&lt;br /&gt;
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS&lt;br /&gt;
version             False       True          15m     Unable to apply 4.14.9: an unknown error has occurred: MultipleErrors&lt;br /&gt;
&lt;br /&gt;
$ oc get clusteroperators   # short form: oc get co&lt;br /&gt;
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE&lt;br /&gt;
authentication&lt;br /&gt;
baremetal&lt;br /&gt;
cloud-controller-manager&lt;br /&gt;
cloud-credential                                     True        False         False      15m&lt;br /&gt;
cluster-autoscaler&lt;br /&gt;
config-operator&lt;br /&gt;
console&lt;br /&gt;
control-plane-machine-set&lt;br /&gt;
csi-snapshot-controller&lt;br /&gt;
dns&lt;br /&gt;
etcd&lt;br /&gt;
image-registry&lt;br /&gt;
ingress&lt;br /&gt;
insights&lt;br /&gt;
kube-apiserver&lt;br /&gt;
kube-controller-manager&lt;br /&gt;
kube-scheduler&lt;br /&gt;
kube-storage-version-migrator&lt;br /&gt;
machine-api&lt;br /&gt;
machine-approver&lt;br /&gt;
machine-config&lt;br /&gt;
marketplace&lt;br /&gt;
monitoring&lt;br /&gt;
network&lt;br /&gt;
node-tuning&lt;br /&gt;
openshift-apiserver&lt;br /&gt;
openshift-controller-manager&lt;br /&gt;
openshift-samples&lt;br /&gt;
operator-lifecycle-manager&lt;br /&gt;
operator-lifecycle-manager-catalog&lt;br /&gt;
operator-lifecycle-manager-packageserver&lt;br /&gt;
service-ca&lt;br /&gt;
storage&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': It is normal for the cluster version and various cluster operators to report transient error states, as the progress of one impacts the progress of others. Eventually, all these errors should go away.&lt;br /&gt;
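Rather than polling by hand, you can also let the installer itself block until the cluster is ready, using its wait-for subcommand. When it returns successfully, the installation is complete:

```
$ openshift-baremetal-install --dir=./mycluster/ wait-for install-complete
```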
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=59</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=59"/>
				<updated>2024-01-26T12:21:41Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installation */ done up to bootstrap&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the obvious requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and running, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
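If it is not enabled, you can turn it on persistently with a standard sysctl drop-in (the file name here is arbitrary):

```
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf
```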
&lt;br /&gt;
The Linux network settings need to provide two '''Linux''' bridges: a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
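For reference, a minimal nmcli sketch of creating such a pair of bridges might look like this (interface names and the provisioning address match the example host configuration below; the public bridge would normally take its address from your infrastructure):

```
$ nmcli con add type bridge ifname bridge0 con-name bridge0
$ nmcli con add type bridge-slave ifname enp86s0 master bridge0
$ nmcli con add type bridge ifname provbr0 con-name provbr0 \
      ipv4.method manual ipv4.addresses 10.1.1.2/24
```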
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so there's goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface that is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
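A sketch of creating such an interface with nmcli, using the same ID, endpoints, and destination port as the connection shown below, might be:

```
$ nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 \
      vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \
      vxlan.destination-port 4790 \
      master provbr0 slave-type bridge
```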
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave from the start, instead of returning empty output as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart to save yourself a headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
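&lt;br /&gt;
If the networks do not exist yet, they can be created from XML files with content like the above (minus the &amp;lt;code&amp;gt;uuid&amp;lt;/code&amp;gt; element, which &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; generates itself); the file names here are arbitrary:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Repeat the same three commands for the &amp;lt;code&amp;gt;provisioning&amp;lt;/code&amp;gt; network.&lt;br /&gt;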
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small piece of Python code that does, called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
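&lt;br /&gt;
As a sketch, an empty qcow2 disk image for a node can be pre-created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt;; the 120G size here is an arbitrary pick above the 64GiB minimum, and since qcow2 images are thin-provisioned the file starts out small:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 120G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;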
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
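&lt;br /&gt;
Since every node needs its own BMC, you may want to register and start them all in one go; here is a sketch using this article's node names and port numbering convention:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ for node in controlplane1:6211 controlplane2:6212 controlplane3:6213 \&lt;br /&gt;
              worker1:6221 worker2:6222; do&lt;br /&gt;
    vbmc add --port=${node#*:} ${node%:*}&lt;br /&gt;
    vbmc start ${node%:*}&lt;br /&gt;
  done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;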
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
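&lt;br /&gt;
Beyond querying chassis status, the same interface supports the power control commands that the installer will later rely on, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power on&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power off&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;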
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct; it can be routed. However, if you configured the provisioning network as an isolated virtual bridge, as in our example, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with the DNS domain; for example, &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
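&lt;br /&gt;
As an illustration, with the addresses used later in this article, the relevant records in a BIND-style &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt; zone could look like this (the wildcard covers all ingress hostnames):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
api.mycluster     IN A  172.25.3.10&lt;br /&gt;
*.apps.mycluster  IN A  172.25.3.11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;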
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (re [[#Virtual Machine Configuration]] above; since the PCI address of the interface is ''bus 0x0, slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already mentioned, DHCP address assignment to the external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
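&lt;br /&gt;
If, for instance, &amp;lt;code&amp;gt;dnsmasq&amp;lt;/code&amp;gt; serves the external network, static leases for the nodes could be a sketch like the following; the MAC addresses are extrapolated from this article's example VM definition, and the IPs are arbitrary picks from the machine network:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# static assignment only, no dynamic pool&lt;br /&gt;
dhcp-range=172.25.3.0,static&lt;br /&gt;
dhcp-host=52:54:00:00:fa:11,controlplane1,172.25.3.21&lt;br /&gt;
dhcp-host=52:54:00:00:fa:12,controlplane2,172.25.3.22&lt;br /&gt;
dhcp-host=52:54:00:00:fa:13,controlplane3,172.25.3.23&lt;br /&gt;
dhcp-host=52:54:00:00:fa:21,worker1,172.25.3.31&lt;br /&gt;
dhcp-host=52:54:00:00:fa:22,worker2,172.25.3.32&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;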
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of each type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see if &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is a part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections which are useful in special cases, such as disconnected installations or other non-default install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - cidr: 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetwork:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
With the above installation configuration file created, place a copy of it in a subdirectory, such as &amp;lt;code&amp;gt;./mycluster/&amp;lt;/code&amp;gt;, and run the installer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir mycluster&lt;br /&gt;
$ cp install-config.yaml ./mycluster/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install --dir=./mycluster/ --log-level=debug create cluster&lt;br /&gt;
DEBUG OpenShift Installer 4.14.9&lt;br /&gt;
DEBUG Built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
DEBUG Fetching Metadata...&lt;br /&gt;
DEBUG Loading Metadata...&lt;br /&gt;
...&lt;br /&gt;
DEBUG   Loading Install Config...&lt;br /&gt;
DEBUG   Loading Bootstrap Ignition Config...&lt;br /&gt;
...&lt;br /&gt;
INFO Consuming Install Config from target directory&lt;br /&gt;
...&lt;br /&gt;
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.14-9.2/builds/414.92.202310210434-0/x86_64/rhcos-414.92.202310210434-0-qemu.x86_64.qcow2.gz?sha256=aab55f3ee088b88562f8fdcde5be78ace023e06fa01263e7cb9de2edc7131d6f'&lt;br /&gt;
...&lt;br /&gt;
INFO Creating infrastructure resources...&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you see the above message, check the hypervisor for the presence of the temporary bootstrap VM.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ virsh list&lt;br /&gt;
 Id   Name                      State&lt;br /&gt;
---------------------------------------&lt;br /&gt;
 4    provisioner               running&lt;br /&gt;
 5    mycluster-tmkmv-bootstrap running&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once the VM is running, you can log into it using the SSH key configured in the install config and have a look at the containers running on it.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh -i ~/.ssh/id_rsa core@bootstrap.mycluster.example.com&lt;br /&gt;
The authenticity of host 'bootstrap.mycluster.example.com (172.25.3.9)' can't be established.&lt;br /&gt;
...&lt;br /&gt;
Red Hat Enterprise Linux CoreOS 414.92.202310210434-0&lt;br /&gt;
  Part of OpenShift 4.14, RHCOS is a Kubernetes native operating system&lt;br /&gt;
  managed by the Machine Config Operator (`clusteroperator/machine-config`).&lt;br /&gt;
&lt;br /&gt;
WARNING: Direct SSH access to machines is not recommended; instead,&lt;br /&gt;
make configuration changes via `machineconfig` objects:&lt;br /&gt;
...&lt;br /&gt;
[core@localhost ~]$ sudo -i&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active crio&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# systemctl is-active bootkube&lt;br /&gt;
active&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# podman ps&lt;br /&gt;
CONTAINER ID  IMAGE                                                      COMMAND          CREATED             STATUS             PORTS       NAMES&lt;br /&gt;
da3b3e74fc7f  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:...  /bin/rundnsmasq  About a minute ago  Up About a minute              dnsmasq&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl pods&lt;br /&gt;
POD ID              CREATED             STATE               NAME                                          NAMESPACE           ATTEMPT             RUNTIME&lt;br /&gt;
af9df933d3b91       49 seconds ago      Ready               etcd-bootstrap-member-localhost.localdomain   openshift-etcd      0                   (default)&lt;br /&gt;
&lt;br /&gt;
[root@localhost ~]# crictl ps&lt;br /&gt;
CONTAINER           IMAGE                                                     CREATED        STATE    NAME      ATTEMPT   POD ID         POD&lt;br /&gt;
9ad74f6433a33       753b0a16ba606f4c579690ed0035..                            6 seconds ago  Running  etcd      0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
5c9371b25e646       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:... 6 seconds ago  Running  etcdctl   0         af9df933d3b91  etcd-bootstrap-member-localhost.localdomain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=58</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=58"/>
				<updated>2024-01-26T12:09:18Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Creating Configuration Manually */ fix typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl -n net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
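&lt;br /&gt;
If it is not enabled, you can turn it on immediately and persist the setting across reboots; the drop-in file name below is arbitrary:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo sysctl -w net.ipv4.ip_forward=1&lt;br /&gt;
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;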
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a purely virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so that option is out.&lt;br /&gt;
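&lt;br /&gt;
If the bridges do not exist yet, they can be created with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; along these lines (a sketch; &amp;lt;code&amp;gt;enp86s0&amp;lt;/code&amp;gt; and the addresses match my host configuration below, so substitute your own):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge ifname bridge0 con-name bridge0&lt;br /&gt;
$ sudo nmcli con add type ethernet ifname enp86s0 master bridge0&lt;br /&gt;
$ sudo nmcli con add type bridge ifname provbr0 con-name provbr0 ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;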
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type connection that is a bridge slave with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is not otherwise in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
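&lt;br /&gt;
On hypervisor A, such a connection could be created roughly like this (a sketch; the endpoint addresses are those of hosts A and B from the example below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;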
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than coming back empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Mark them as autostarting for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
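&lt;br /&gt;
If these networks do not exist yet, you could define them from XML files along these lines (&amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; are hypothetical file names holding definitions like the above, minus the UUIDs):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;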
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
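&lt;br /&gt;
The backing disk image referenced in the XML above can be created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt;; for example (the 120G size is just an illustration, anything from 64 GiB up works):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 120G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;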
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
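&lt;br /&gt;
The remaining nodes can be registered the same way; as a sketch, using the ports from the install configuration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ for node in controlplane2:6212 controlplane3:6213 worker1:6221 worker2:6222; do&lt;br /&gt;
    vbmc add --port=${node#*:} ${node%:*}&lt;br /&gt;
    vbmc start ${node%:*}&lt;br /&gt;
  done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;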
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URL and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (defaults to all addresses, use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it).&lt;br /&gt;
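&lt;br /&gt;
For example, a hypothetical node could be registered with custom credentials and a restricted listen address like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6300 --username myadmin --password mysecret --address 10.1.1.2 somenode&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;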
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (which, concatenated with the DNS domain, forms the cluster FQDN; for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above; since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
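The &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt; name follows the systemd predictable-interface-name scheme (&amp;lt;code&amp;gt;en&amp;lt;/code&amp;gt;, then &amp;lt;code&amp;gt;p&amp;lt;/code&amp;gt; plus the PCI bus, then &amp;lt;code&amp;gt;s&amp;lt;/code&amp;gt; plus the slot). A minimal sketch of the simple case, ignoring PCI domains, multi-function devices and onboard/USB naming:&lt;br /&gt;
&lt;br /&gt;

```python
def pci_ether_name(bus, slot):
    # systemd predictable naming for a PCI ethernet device: en + p(bus) + s(slot).
    # Covers only the simple case: PCI domain 0, function 0.
    return 'enp{0}s{1}'.format(bus, slot)

# The VM definitions in this guide place the provisioning NIC at bus 0x0, slot 0x3:
print(pci_ether_name(0x0, 0x3))  # prints enp0s3
```

A multi-function device would additionally get an &amp;lt;code&amp;gt;fN&amp;lt;/code&amp;gt; suffix (e.g. &amp;lt;code&amp;gt;enp0s3f1&amp;lt;/code&amp;gt;); the sketch above deliberately ignores that case.&lt;br /&gt;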
&lt;br /&gt;
As noted earlier, DHCP address assignment on the external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of either type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see whether &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the resulting IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
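This validation is a plain subnet-membership test, so you can reproduce it before running the installer; here is a sketch using Python's standard &amp;lt;code&amp;gt;ipaddress&amp;lt;/code&amp;gt; module:&lt;br /&gt;
&lt;br /&gt;

```python
import ipaddress

def vips_outside_machine_network(machine_cidr, vips):
    # Return the VIPs that do not belong to the machine network;
    # the installer rejects the config unless this list is empty.
    net = ipaddress.ip_network(machine_cidr)
    return [v for v in vips if ipaddress.ip_address(v) not in net]

# Interactive mode assumes a 10.0.0.0/16 machine network:
print(vips_outside_machine_network('10.0.0.0/16', ['172.25.3.10', '172.25.3.11']))
# Both VIPs fail the check, matching the FATAL message above.
```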
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also optional sections that are useful in special cases, such as disconnected installations or special install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The installer binary has an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand that describes the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - cidr: 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetworks:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
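Before feeding the file to the installer, it is worth sanity-checking its address arithmetic. The following sketch (standard library only, using the example values above) verifies that the VIPs sit in the machine network, that the bootstrap IP and DHCP range sit in the provisioning CIDR without colliding, and shows how many nodes and pod addresses the &amp;lt;code&amp;gt;clusterNetworks&amp;lt;/code&amp;gt; settings allow for:&lt;br /&gt;
&lt;br /&gt;

```python
import ipaddress

machine_net = ipaddress.ip_network('172.25.3.0/24')
prov_net = ipaddress.ip_network('10.1.1.0/24')

# VIPs must be inside the machine network.
for vip in ['172.25.3.10', '172.25.3.11']:
    assert ipaddress.ip_address(vip) in machine_net

# Bootstrap IP and the DHCP range must be inside the provisioning CIDR...
dhcp = [int(ipaddress.ip_address(ip)) for ip in ['10.1.1.200', '10.1.1.210']]
boot = ipaddress.ip_address('10.1.1.9')
assert boot in prov_net
for ip in ['10.1.1.200', '10.1.1.210']:
    assert ipaddress.ip_address(ip) in prov_net
# ...and the bootstrap IP must not fall within the DHCP range itself.
assert int(boot) not in range(dhcp[0], dhcp[1] + 1)

# clusterNetwork 10.200.0.0/14 carved into per-node /23 subnets:
node_subnets = 2 ** (23 - 14)   # node subnets the pod network can host
pods_per_node = 2 ** (32 - 23)  # addresses in each node's /23
print(node_subnets, pods_per_node)  # prints 512 512
```

If any assertion fails, fix the YAML before launching the installer; that is far cheaper than waiting for the bootstrap phase to time out.&lt;br /&gt;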
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=57</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=57"/>
				<updated>2024-01-26T12:07:23Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Creating Configuration Manually */ switch from deprecated fields&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is probably best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
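Putting the figures above together (and treating the provisioner's 8 GiB / 4 CPUs as extra load on the same host), a quick back-of-the-envelope check of the minimum physical capacity under the stated overcommit limits:&lt;br /&gt;
&lt;br /&gt;

```python
import math

# Demand: 3 control planes, 2 workers, plus the provisioner VM.
ram_gib = 3 * 32 + 2 * 20 + 8    # 144 GiB total allocation
vcpus = 3 * 12 + 2 * 8 + 4       # 56 vCPUs total

# Max overcommit: 1.5x for RAM (with enough swap), 1.3x for CPU.
min_ram = math.ceil(ram_gib / 1.5)
min_cpus = math.ceil(vcpus / 1.3)
print(ram_gib, vcpus, min_ram, min_cpus)  # prints 144 56 96 44
```

So a single host running everything needs roughly 96 GiB of physical RAM and 44 hardware threads at the very least; more headroom makes the installation noticeably less painful.&lt;br /&gt;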
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are a few extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type connection that is a slave of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any VXLAN ID not otherwise in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than the empty output shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
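The two connection definitions above are mirror images of each other. Assuming they were created with &amp;lt;code&amp;gt;nmcli con add&amp;lt;/code&amp;gt; (the property names follow the output above, but treat the exact command line as a sketch rather than gospel), the pair can be generated symmetrically:&lt;br /&gt;
&lt;br /&gt;

```python
def vxlan_peer_commands(host_a, host_b, vxlan_id=10, port=4790, bridge='provbr0'):
    # Each hypervisor runs the same command with vxlan.local and
    # vxlan.remote swapped; id, port and master must match on both ends.
    def cmd(local, remote):
        name = '{0}-vxlan{1}'.format(bridge, vxlan_id)
        return ('nmcli con add type vxlan con-name {0} ifname {0} '
                'vxlan.id {1} vxlan.local {2} vxlan.remote {3} '
                'vxlan.destination-port {4} '
                'master {5} slave-type bridge').format(
                    name, vxlan_id, local, remote, port, bridge)
    return cmd(host_a, host_b), cmd(host_b, host_a)

cmd_a, cmd_b = vxlan_peer_commands('172.25.35.2', '172.25.35.3')
print(cmd_a)
print(cmd_b)
```

Generating both ends from one definition makes it hard to fat-finger a mismatched VXLAN ID or destination port, which would silently leave the provisioning bridges disconnected.&lt;br /&gt;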
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions that look roughly like the following XML. Make sure they are set to autostart to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
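&lt;br /&gt;
As a side note, the backing disk image referenced in the example XML can be created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt;. A minimal sketch (the path matches the example definition; the 120G size is just an illustration above the 64 GiB minimum):&lt;br /&gt;
&lt;br /&gt;
```shell
# Create a sparse qcow2 disk for the node; qcow2 only allocates space as it is used.
# 120G is an example size comfortably above the 64 GiB minimum.
qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 120G
```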
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
There are, of course, options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, which default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it).&lt;br /&gt;
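&lt;br /&gt;
For example, registering another VM with non-default credentials and a restricted listen address might look like this (a sketch; the port, address, and credentials here are illustrative):&lt;br /&gt;
&lt;br /&gt;
```shell
# Register a second VM with custom IPMI credentials, listening on one address only.
vbmc add --port 6212 --username ocpadmin --password 's3cret' \
    --address 192.168.1.1 controlplane2
vbmc start controlplane2
```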
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct, it can be routed, but if, as in our example, you configured the provisioning network to be an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (re [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'' it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already noted, DHCP address assignment to external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
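&lt;br /&gt;
If your infrastructure happens to use &amp;lt;code&amp;gt;dnsmasq&amp;lt;/code&amp;gt; for DHCP, static reservations for the external interfaces could look roughly like this (a sketch only; the &amp;lt;code&amp;gt;52:54:00:00:fa:xx&amp;lt;/code&amp;gt; MAC addresses follow the pattern of the VM examples in this guide, and the IPs and names are illustrative):&lt;br /&gt;
&lt;br /&gt;
```
# /etc/dnsmasq.d/mycluster.conf -- static leases for the bridge0 (external) NICs
dhcp-host=52:54:00:00:fa:11,controlplane1,172.25.3.21
dhcp-host=52:54:00:00:fa:12,controlplane2,172.25.3.22
dhcp-host=52:54:00:00:fa:13,controlplane3,172.25.3.23
dhcp-host=52:54:00:00:fa:21,worker1,172.25.3.24
dhcp-host=52:54:00:00:fa:22,worker2,172.25.3.25
```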
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you haven't done so already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively, using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of either type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see if &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and if the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
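&lt;br /&gt;
Before running the installer, you can reproduce these DNS checks by hand; a quick sketch using &amp;lt;code&amp;gt;getent&amp;lt;/code&amp;gt; (hostnames follow the example cluster name):&lt;br /&gt;
&lt;br /&gt;
```shell
# Both records should resolve to addresses within the machine network.
getent hosts api.mycluster.example.com
# Any host under apps.* should hit the wildcard ingress record.
getent hosts test.apps.mycluster.example.com
```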
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections which are useful in special cases such as disconnected installation or special install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  # These are the networks external IPs will be allocated from.&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
    - 172.25.3.0/24&lt;br /&gt;
  # This is the pod network.&lt;br /&gt;
  clusterNetworks:&lt;br /&gt;
    - cidr: 10.200.0.0/14&lt;br /&gt;
      hostPrefix: 23&lt;br /&gt;
  # Only one entry is supported.&lt;br /&gt;
  serviceNetwork:&lt;br /&gt;
    - 172.30.0.0/16&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=56</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=56"/>
				<updated>2024-01-26T11:01:20Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Creating Configuration Manually */ added install-config.yaml&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM, with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
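&lt;br /&gt;
If it is not, a sketch for enabling it persistently (the drop-in file name is arbitrary):&lt;br /&gt;
&lt;br /&gt;
```shell
# Persist the setting across reboots and apply it immediately.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ocp-forward.conf
sudo sysctl -p /etc/sysctl.d/90-ocp-forward.conf
```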
&lt;br /&gt;
Linux network settings need to be configured with two '''Linux''' bridges, a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so there's goodbye to that.&lt;br /&gt;
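&lt;br /&gt;
If you manage the host with NetworkManager, creating the two Linux bridges might look roughly like this (a sketch; &amp;lt;code&amp;gt;enp86s0&amp;lt;/code&amp;gt; is the public NIC from the example configuration below, and the addressing is illustrative):&lt;br /&gt;
&lt;br /&gt;
```shell
# Public bridge with the physical NIC enslaved to it.
nmcli con add type bridge ifname bridge0 con-name bridge0 ipv4.method auto
nmcli con add type bridge-slave ifname enp86s0 master bridge0
# Isolated provisioning bridge; no physical port, static address only.
nmcli con add type bridge ifname provbr0 con-name provbr0 \
    ipv4.method manual ipv4.addresses 10.1.1.2/24
nmcli con up bridge0 && nmcli con up provbr0
```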
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface with its slave type set to &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt; and its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any VXLAN ID not used for anything else, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
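&lt;br /&gt;
A sketch of creating such an interface with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; (the ID, port, and addresses match the example configuration shown below; adjust to your hosts):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;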
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
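&lt;br /&gt;
Assuming the above XML (minus the &amp;lt;code&amp;gt;uuid&amp;lt;/code&amp;gt; elements) is saved as &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (hypothetical file names), defining, autostarting, and starting the networks could look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;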
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
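&lt;br /&gt;
A sketch of creating the backing disk image referenced in the XML above with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt;, sized at the 64 GiB minimum:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;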
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system with access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with the DNS domain; for example, &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
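&lt;br /&gt;
As an illustration, assuming a BIND-style DNS server (the addresses here match the example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; further down), the two VIP records might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
api.mycluster.example.com.     IN A 172.25.3.10&lt;br /&gt;
*.apps.mycluster.example.com.  IN A 172.25.3.11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;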
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0, slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
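&lt;br /&gt;
For example, with a &amp;lt;code&amp;gt;dnsmasq&amp;lt;/code&amp;gt;-based DHCP server (just one possible setup; the reserved IP is illustrative), a static reservation for a node's external interface could be sketched as:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# MAC of the external (bridge0) interface, not the provisioning one&lt;br /&gt;
dhcp-host=52:54:00:00:fa:11,controlplane1,172.25.3.21&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;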
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively, using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of either type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see if &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is a part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
The installer configuration file consists of a number of mandatory sections:&lt;br /&gt;
&lt;br /&gt;
* cluster domain and name&lt;br /&gt;
* network settings (such as machine and cluster IP ranges)&lt;br /&gt;
* control plane and compute node settings&lt;br /&gt;
* infrastructure platform settings&lt;br /&gt;
* pull secret and ssh key&lt;br /&gt;
&lt;br /&gt;
There are also some optional sections which are useful in special cases such as disconnected installation or special install modes.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install explain installconfig&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  InstallConfig is the configuration for an OpenShift install.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    additionalTrustBundle &amp;lt;string&amp;gt;&lt;br /&gt;
      AdditionalTrustBundle is a PEM-encoded X.509 certificate bundle that will be added to the nodes' trusted certificate store.&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install explain installconfig.platform.baremetal&lt;br /&gt;
KIND:     InstallConfig&lt;br /&gt;
VERSION:  v1&lt;br /&gt;
&lt;br /&gt;
RESOURCE: &amp;lt;object&amp;gt;&lt;br /&gt;
  BareMetal is the configuration used when installing on bare metal.&lt;br /&gt;
&lt;br /&gt;
FIELDS:&lt;br /&gt;
    apiVIP &amp;lt;string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      DeprecatedAPIVIP is the VIP to use for internal API communication Deprecated: Use APIVIPs&lt;br /&gt;
&lt;br /&gt;
    apiVIPs &amp;lt;[]string&amp;gt;&lt;br /&gt;
      Format: ip&lt;br /&gt;
      APIVIPs contains the VIP(s) to use for internal API communication. In dual stack clusters it contains an IPv4 and IPv6 address, otherwise only one VIP&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An example &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; for baremetal IPI looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  machineCIDR: 172.25.3.0/24&lt;br /&gt;
  networkType: OVNKubernetes&lt;br /&gt;
  clusterNetwork:&lt;br /&gt;
  - cidr: 10.200.0.0/14&lt;br /&gt;
    hostPrefix: 23&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
  platform:&lt;br /&gt;
    baremetal: {}&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.3.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.3.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningDHCPRange: 10.1.1.200,10.1.1.210&lt;br /&gt;
    # These settings are to configure the temporary bootstrap node as a VM&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    externalMACAddress: '52:54:00:00:fa:0f'&lt;br /&gt;
    bootstrapProvisioningIP: 10.1.1.9&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    provisioningMACAddress: '52:54:00:00:fb:0f'&lt;br /&gt;
    # This needs to be done to avoid nested virtualisation if the provisioner is a VM.&lt;br /&gt;
    libvirtURI: qemu+ssh://root@hypervisor.example.com/system&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6211&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        # This is the provisioning network interface.&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        # We need this for proper targeting of the root device (vda).&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane2&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6212&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:12&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: controlplane3&lt;br /&gt;
        role: master&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6213&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:13&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker1&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6221&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:21&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
      - name: worker2&lt;br /&gt;
        role: worker&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://hypervisor.example.com:6222&lt;br /&gt;
          disableCertificateVerification: true&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:22&lt;br /&gt;
        bootMode: legacy&lt;br /&gt;
        hardwareProfile: libvirt&lt;br /&gt;
pullSecret: '{&amp;quot;auths&amp;quot;:{&amp;quot;cloud.openshift.com&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},&amp;quot;quay.io&amp;quot;:{&amp;quot;auth&amp;quot;:&amp;quot;...&amp;quot;,&amp;quot;email&amp;quot;:&amp;quot;...&amp;quot;},...}}'&lt;br /&gt;
sshKey: &amp;quot;ssh-rsa ...&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=55</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=55"/>
				<updated>2024-01-26T10:48:17Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installer Configuration */ fix hostnames, IPs, and macs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is easiest to define the provisioner as a VM, with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps involved, but nothing major. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
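&lt;br /&gt;
If it shows &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;, you can turn forwarding on right away and also persist it across reboots. A sketch (the file name under &amp;lt;code&amp;gt;/etc/sysctl.d&amp;lt;/code&amp;gt; is arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo sysctl -w net.ipv4.ip_forward=1&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&lt;br /&gt;
$ echo net.ipv4.ip_forward=1 | sudo tee /etc/sysctl.d/99-ipforward.conf&lt;br /&gt;
net.ipv4.ip_forward=1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;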
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
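&lt;br /&gt;
If you do not have the bridges yet, they can be created with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; along these lines. This is a sketch only - the uplink interface name &amp;lt;code&amp;gt;enp86s0&amp;lt;/code&amp;gt; and the addressing come from my setup below, so adjust them to yours:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge con-name bridge0 ifname bridge0 ipv4.method auto&lt;br /&gt;
$ sudo nmcli con add type bridge-slave con-name bridge0-port1 ifname enp86s0 master bridge0&lt;br /&gt;
$ sudo nmcli con add type bridge con-name provbr0 ifname provbr0 ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
$ sudo nmcli con up bridge0&lt;br /&gt;
$ sudo nmcli con up provbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;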
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that option is off the table.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt; type interface that is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt; with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is not in use elsewhere on your network, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
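&lt;br /&gt;
With NetworkManager, creating such an interface on hypervisor A could look roughly like this (a sketch, using the addresses from the example that follows):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;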
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
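&lt;br /&gt;
Assuming you save the above definitions as &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (names are arbitrary, and drop the &amp;lt;code&amp;gt;uuid&amp;lt;/code&amp;gt; elements - libvirt generates them), defining the networks goes something like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;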
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt; which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
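&lt;br /&gt;
As a sketch, creating the backing disk image and defining a node could look like this (assuming the XML above is saved as &amp;lt;code&amp;gt;controlplane1.xml&amp;lt;/code&amp;gt;, a hypothetical file name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
$ sudo virsh define controlplane1.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;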
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
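&lt;br /&gt;
To register all five example nodes, matching the IPMI port numbering used elsewhere in this guide (6211-6213 for control plane, 6221-6222 for workers), a small shell loop does the job:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ for n in 1 2 3; do vbmc add --port=621$n controlplane$n; vbmc start controlplane$n; done&lt;br /&gt;
$ for n in 1 2; do vbmc add --port=622$n worker$n; vbmc start worker$n; done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;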
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
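&lt;br /&gt;
You can also exercise power control the same way the installer will (be careful not to do this on a VM that is busy doing something):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power on&lt;br /&gt;
Chassis Power Control: Up/On&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power off&lt;br /&gt;
Chassis Power Control: Down/Off&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;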
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
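&lt;br /&gt;
As an illustration, with hypothetical VIPs of 172.25.3.10 (API) and 172.25.3.11 (ingress), the corresponding BIND-style records would be along these lines:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
api.mycluster.example.com.     IN  A  172.25.3.10&lt;br /&gt;
*.apps.mycluster.example.com.  IN  A  172.25.3.11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;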
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
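For illustration, a static DHCP reservation for the external interface of one node could look like this in &amp;lt;code&amp;gt;dnsmasq&amp;lt;/code&amp;gt; syntax (the MAC address matches the example VM definition in this guide; the hostname and IP are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dhcp-host=52:54:00:00:fa:11,controlplane1,172.25.35.21&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;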
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you haven't done that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of each type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see above, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name controlplane1&lt;br /&gt;
? BMC Address ipmi://192.168.1.1:6211&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:11&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain example.com&lt;br /&gt;
? Cluster Name mycluster&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see if &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.3.10&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.3.11&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
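Here is a minimal &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; sketch for the setup described in this guide. Treat the machine network, VIPs, and BMC credentials as placeholders to be replaced with your own values; only &amp;lt;code&amp;gt;controlplane1&amp;lt;/code&amp;gt; is shown, and the remaining hosts follow the same pattern:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
  - cidr: 172.25.35.0/24&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
    - 172.25.35.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
    - 172.25.35.11&lt;br /&gt;
    provisioningNetwork: Managed&lt;br /&gt;
    provisioningNetworkCIDR: 172.22.0.0/24&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    hosts:&lt;br /&gt;
    - name: controlplane1&lt;br /&gt;
      role: master&lt;br /&gt;
      bmc:&lt;br /&gt;
        address: ipmi://192.168.1.1:6211&lt;br /&gt;
        username: admin&lt;br /&gt;
        password: password&lt;br /&gt;
      bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
pullSecret: '...'&lt;br /&gt;
sshKey: '...'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the installer consumes (and removes) this file from its working directory, so keep a copy elsewhere.&lt;br /&gt;
&lt;br /&gt;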
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=54</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=54"/>
				<updated>2024-01-26T10:19:25Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Creating Installer Configuration */ split into two sections, add reqs for interactive&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
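If it is not, you can enable it persistently with a &amp;lt;code&amp;gt;sysctl.d&amp;lt;/code&amp;gt; drop-in (the file name is an arbitrary choice):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo &amp;quot;net.ipv4.ip_forward = 1&amp;quot; | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;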
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
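If the bridges do not exist yet, they can be created with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; roughly as follows; the enslaved device name and the IP addresses below are taken from the example host configuration further down, so substitute your own (you will probably also want to set &amp;lt;code&amp;gt;ipv4.gateway&amp;lt;/code&amp;gt; and DNS on &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge con-name bridge0 ifname bridge0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 172.25.35.2/24&lt;br /&gt;
$ sudo nmcli con add type ethernet con-name bridge0-port0 ifname enp86s0 master bridge0&lt;br /&gt;
$ sudo nmcli con add type bridge con-name provbr0 ifname provbr0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;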
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so there goes that idea.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt; type interface, which is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, and the master is set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
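On hypervisor A, such an interface can be created with a single &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; command along these lines, using the same ID, endpoint addresses, and destination port as in the example definitions below (swap &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;remote&amp;lt;/code&amp;gt; on host B):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    id 10 local 172.25.35.2 remote 172.25.35.3 destination-port 4790 \&lt;br /&gt;
    master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;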
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will initially also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of returning empty output as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart, to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
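Assuming you saved the above XML into files named, say, &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt;, defining, autostarting, and starting the networks goes like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;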
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
&lt;br /&gt;
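The disk image referenced in the XML has to exist before the VM is first started. Creating it with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; as a sparse &amp;lt;code&amp;gt;qcow2&amp;lt;/code&amp;gt; costs no space up front (the 120 GiB size is just an example comfortably above the 64 GiB minimum):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 120G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;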
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URL and credentials, if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt;; defaults to all addresses)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
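As a purely hypothetical example of these options (the node name, credentials, and listen address below are invented, not taken from this setup), registering a second VM could look like the following sketch; the vbmc calls are skipped where the daemon is absent, and the intended registration is recorded for reference.

```shell
# Hypothetical: register a second VM with custom credentials and a restricted
# listen address. Values are illustrative only.
echo "controlplane2 6212" > vbmc-extra.log
if command -v vbmc >/dev/null; then
  vbmc add --port=6212 --username operator --password s3cret --address 172.25.35.2 controlplane2 || true
  vbmc start controlplane2 || true
fi
```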
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or a similar tool.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (re [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'' it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
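If your infrastructure does not yet provide these records, a dnsmasq fragment along the following lines could. This is a sketch: the lease range, node IP, and the external-side MAC address are invented for illustration, while the domain and the VIP addresses follow the example cluster used in this article.

```shell
# Sketch of a dnsmasq fragment for the external network. MAC, lease range and
# node IP are hypothetical; domain and VIPs follow the example cluster.
printf '%s\n' \
  'interface=bridge0' \
  'dhcp-range=172.25.35.100,172.25.35.150,12h' \
  '# pin the external interface of a node to a fixed address (MAC is illustrative)' \
  'dhcp-host=52:54:00:00:fa:10,172.25.35.20,3node-node0' \
  '# API VIP record and wildcard ingress VIP record' \
  'host-record=api.3node.p0f.local,172.25.35.14' \
  'address=/apps.3node.p0f.local/172.25.35.15' \
  > ocp-dnsmasq.conf
```

On a real host, the fragment would go under /etc/dnsmasq.d/ followed by a dnsmasq restart.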
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you haven't done so already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
==== Using Interactive Mode ====&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': In interactive mode, machine network (the network segment the nodes have their IPs allocated from) is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to install is to create &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The interactive installer expects three control plane nodes and three workers. It will fail if you add fewer than three nodes of either type to the cluster.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name 3node-node0&lt;br /&gt;
? BMC Address ipmi://172.25.35.2:6220&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:10&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain p0f.local&lt;br /&gt;
? Cluster Name 3node&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see whether &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If your VIPs are not on the default machine network, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.35.14&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.35.15&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': If you added fewer than three control plane nodes, or fewer than three workers, the installer will fail at this point.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [...]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Creating Configuration Manually ====&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
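The configuration can just as well be written by hand. Below is a sketch assembled from the values used in the interactive example above; the machine network (172.25.35.0/24) is inferred from the example VIP addresses, only a single host entry is shown for brevity, and the pullSecret/sshKey placeholders must be replaced with real values before use. Note that the installer consumes install-config.yaml during installation, so keep a copy.

```shell
# Sketch: write install-config.yaml into the installer directory. Values are
# taken from the interactive example; the machine network is an inference,
# and pullSecret/sshKey are placeholders that must be filled in.
mkdir -p cluster
printf '%s\n' \
  'apiVersion: v1' \
  'baseDomain: p0f.local' \
  'metadata:' \
  '  name: 3node' \
  'networking:' \
  '  machineNetwork:' \
  '  - cidr: 172.25.35.0/24' \
  'controlPlane:' \
  '  name: master' \
  '  replicas: 3' \
  'compute:' \
  '- name: worker' \
  '  replicas: 2' \
  'platform:' \
  '  baremetal:' \
  '    provisioningNetwork: Managed' \
  '    provisioningNetworkCIDR: 172.22.0.0/24' \
  '    provisioningNetworkInterface: enp0s3' \
  '    provisioningBridge: provbr0' \
  '    externalBridge: bridge0' \
  '    apiVIPs:' \
  '    - 172.25.35.14' \
  '    ingressVIPs:' \
  '    - 172.25.35.15' \
  '    hosts:' \
  '    - name: 3node-node0' \
  '      role: master' \
  '      bootMACAddress: 52:54:00:00:fb:10' \
  '      bmc:' \
  '        address: ipmi://172.25.35.2:6220' \
  '        username: admin' \
  '        password: password' \
  "pullSecret: '...'" \
  "sshKey: '...'" \
  > cluster/install-config.yaml
```

Validate the field names with the explain subcommand mentioned above before running the installer.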
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=53</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=53"/>
				<updated>2024-01-26T10:12:48Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installer Configuration */ add interactive config&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
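If it reports 0, a sketch for enabling it persistently follows; the drop-in file is written to the current directory here for illustration, whereas on the hypervisor it belongs under /etc/sysctl.d and is applied with sysctl --system.

```shell
# Write the sysctl drop-in locally for review; on the hypervisor, install it
# under /etc/sysctl.d/ and run "sysctl --system" to apply it.
echo 'net.ipv4.ip_forward = 1' > 99-ocp-ipforward.conf
# Read back the current value where sysctl is available (read-only check).
if command -v sysctl >/dev/null; then
  sysctl net.ipv4.ip_forward || true
fi
```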
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
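With NetworkManager, the two bridges could be created along the lines of the sketch below. The commands are only written into a helper script here so they can be reviewed first; the NIC name (enp86s0) and the addresses match the example host configuration shown next, so adjust them to your environment.

```shell
# Sketch: write the nmcli bridge setup to a reviewable helper script.
# NIC name and addresses follow the example host configuration.
printf '%s\n' \
  '#!/bin/sh' \
  '# public bridge with the physical NIC enslaved' \
  'nmcli con add type bridge ifname bridge0 con-name bridge0 ipv4.method manual ipv4.addresses 172.25.35.2/24' \
  'nmcli con add type bridge-slave ifname enp86s0 master bridge0' \
  '# isolated provisioning bridge: no ports enslaved, VMs attach via libvirt' \
  'nmcli con add type bridge ifname provbr0 con-name provbr0 ipv4.method manual ipv4.addresses 10.1.1.2/24' \
  > make-bridges.sh
chmod +x make-bridges.sh
```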
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so it's goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface as a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with the master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any VXLAN ID not otherwise in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
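The corresponding nmcli invocation might look like this sketch (again written into a helper script for review); the VNI, port, and addresses mirror the hypervisor-A example that follows, and on hypervisor B the local and remote addresses are simply swapped.

```shell
# Sketch for hypervisor A; swap the local and remote addresses on hypervisor B.
printf '%s\n' \
  '#!/bin/sh' \
  'nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 id 10 local 172.25.35.2 remote 172.25.35.3 destination-port 4790 master provbr0 slave-type bridge' \
  > make-vxlan.sh
chmod +x make-vxlan.sh
```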
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course initially show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
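Assuming you saved the two XML documents above as external.xml and provisioning.xml, putting them in place could look like this sketch; the virsh calls are skipped where libvirt is not available, and each step is recorded to a log.

```shell
# Define, autostart, and start both networks. Assumes external.xml and
# provisioning.xml were saved from the dumps above; no-op without virsh.
: > net-define.log
for net in external provisioning; do
  echo "define $net" >> net-define.log
  if command -v virsh >/dev/null; then
    sudo virsh net-define "$net.xml" || true
    sudo virsh net-autostart "$net" || true
    sudo virsh net-start "$net" || true
  fi
done
```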
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
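Registering a whole cluster's worth of VMs can be scripted, as in the sketch below. Node names other than controlplane1, and ports other than 6211, are assumptions; the name-to-port mapping is recorded to a file, and the vbmc calls are skipped where the daemon is not installed.

```shell
# Register each VM with vbmcd on a sequential port and record the mapping.
# Node names beyond controlplane1, and ports beyond 6211, are invented.
port=6211
: > vbmc-ports.txt
for vm in controlplane1 controlplane2 controlplane3 worker1 worker2; do
  echo "$vm $port" >> vbmc-ports.txt
  if command -v vbmc >/dev/null; then
    vbmc add --port="$port" "$vm" || true
    vbmc start "$vm" || true
  fi
  port=$((port + 1))
done
```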
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
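&lt;br /&gt;
For example, the backing disk image for the node above could be pre-created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; (a sparse qcow2 file, so it only consumes space as it fills up):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;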
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;vbmc add&amp;lt;/code&amp;gt; command also accepts a number of options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, which default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (it defaults to all addresses; use the &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; option to restrict it).&lt;br /&gt;
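&lt;br /&gt;
For example, registering another node on a different port, with non-default credentials and a restricted listen address, might look like this (all values are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6212 --username=ocpbmc --password=changeme --address=172.25.35.2 controlplane2&lt;br /&gt;
$ vbmc start controlplane2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;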
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which simply opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
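&lt;br /&gt;
The same connection can also be used for power control, which is what the installer will do behind the scenes (powering the VM on this way is equivalent to &amp;lt;code&amp;gt;virsh start&amp;lt;/code&amp;gt;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power on&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power status&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power off&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;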
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and the provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (prepended to the DNS domain; for example, &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
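&lt;br /&gt;
In BIND zone file terms, the two VIP records for a cluster named &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; could look roughly like this (the addresses are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
api.mycluster.example.com.     IN A 172.25.35.14&lt;br /&gt;
*.apps.mycluster.example.com.  IN A 172.25.35.15&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;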
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above; since the PCI address of the interface is ''bus 0x00, slot 0x03'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already noted, DHCP address assignment for the external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
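&lt;br /&gt;
As an illustration only, a &amp;lt;code&amp;gt;dnsmasq&amp;lt;/code&amp;gt; instance serving the external network could pin the node addresses using static leases along these lines (MAC addresses, names, and IPs are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
interface=bridge0&lt;br /&gt;
dhcp-range=172.25.35.100,172.25.35.150,24h&lt;br /&gt;
dhcp-host=52:54:00:00:fa:11,controlplane1,172.25.35.21&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;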
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you didn't do that already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
=== Creating Installer Configuration ===&lt;br /&gt;
&lt;br /&gt;
The installer configuration file can be created interactively using &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt;, or you can simply write it out with an editor.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': There is an &amp;lt;code&amp;gt;explain&amp;lt;/code&amp;gt; subcommand in the installer binary that explains the structure of the &amp;lt;code&amp;gt;installconfig&amp;lt;/code&amp;gt; resource.&lt;br /&gt;
&lt;br /&gt;
The interactive mode will ask you a series of questions and generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; at the end, but there are very few customisation options using this method.&lt;br /&gt;
&lt;br /&gt;
Initially, you must select the SSH public key to be published on cluster nodes, and specify some general settings:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ openshift-baremetal-install --dir=cluster create install-config&lt;br /&gt;
? SSH Public Key /home/provisioner/.ssh/id_rsa.pub&lt;br /&gt;
? Platform baremetal&lt;br /&gt;
? Provisioning Network Managed&lt;br /&gt;
? Provisioning Network CIDR 172.22.0.0/24&lt;br /&gt;
? Provisioning bridge provbr0&lt;br /&gt;
? Provisioning Network Interface enp0s3&lt;br /&gt;
? External bridge bridge0&lt;br /&gt;
? Add a Host:  [Use arrows to move, type to filter]&lt;br /&gt;
&amp;gt; control plane&lt;br /&gt;
  worker&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, after those have been answered, you may add any number of nodes, either control plane or worker.&lt;br /&gt;
&lt;br /&gt;
For both types of node, the questions are the same:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add a Host: control plane&lt;br /&gt;
? Name 3node-node0&lt;br /&gt;
? BMC Address ipmi://172.25.35.2:6220&lt;br /&gt;
? BMC Username admin&lt;br /&gt;
? BMC Password ********&lt;br /&gt;
? Boot MAC Address 52:54:00:00:fb:10&lt;br /&gt;
? Add another host? (y/N) y&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After all the hosts have been added, there is a final series of cluster-related questions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
? Add another host? No&lt;br /&gt;
? Base Domain p0f.local&lt;br /&gt;
? Cluster Name 3node&lt;br /&gt;
? Pull Secret [? for help] ****************************...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point, some checks are performed against the DNS server to see whether &amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; resolves, and whether the IP is part of the machine network. The same check is performed for the ingress VIP.&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': By default, and when using the interactive mode, the machine network is expected to be &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;. If your VIPs are not on that network, the installer will fail at this point. If your machine network is not &amp;lt;code&amp;gt;10.0.0.0/16&amp;lt;/code&amp;gt;, the only way to proceed is to generate &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; manually.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
FATAL failed to fetch Install Config: failed to generate asset &amp;quot;Install Config&amp;quot;: invalid install config: [platform.baremetal.apiVIPs: Invalid value: &amp;quot;172.25.35.14&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16, platform.baremetal.ingressVIPs: Invalid value: &amp;quot;172.25.35.15&amp;quot;: IP expected to be in one of the machine networks: 10.0.0.0/16]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
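&lt;br /&gt;
A manually written &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; that sets the machine network explicitly could look roughly like the following sketch. All values are illustrative placeholders taken from the examples in this guide and must be adapted to your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
networking:&lt;br /&gt;
  machineNetwork:&lt;br /&gt;
  - cidr: 172.25.35.0/24&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
    - 172.25.35.14&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
    - 172.25.35.15&lt;br /&gt;
    provisioningNetworkCIDR: 172.22.0.0/24&lt;br /&gt;
    provisioningBridge: provbr0&lt;br /&gt;
    externalBridge: bridge0&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    hosts:&lt;br /&gt;
    - name: controlplane1&lt;br /&gt;
      role: master&lt;br /&gt;
      bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
      bmc:&lt;br /&gt;
        address: ipmi://172.25.35.2:6211&lt;br /&gt;
        username: admin&lt;br /&gt;
        password: password&lt;br /&gt;
    # ...remaining hosts omitted&lt;br /&gt;
pullSecret: '...'&lt;br /&gt;
sshKey: '...'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;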
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=52</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=52"/>
				<updated>2024-01-26T09:11:43Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Preparing the Software */ add pull-secret step&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM, with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
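&lt;br /&gt;
If it is not, you can enable it persistently with a small drop-in file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo &amp;quot;net.ipv4.ip_forward = 1&amp;quot; | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;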
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
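&lt;br /&gt;
With NetworkManager, the two bridges could be created roughly like this (the interface name and addresses match the example host configuration below and will differ on your system):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge con-name bridge0 ifname bridge0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 172.25.35.2/24&lt;br /&gt;
$ sudo nmcli con add type ethernet con-name bridge0-port0 ifname enp86s0 \&lt;br /&gt;
    master bridge0 slave-type bridge&lt;br /&gt;
$ sudo nmcli con add type bridge con-name provbr0 ifname provbr0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;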
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt; type interface, which is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, and the master is set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
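&lt;br /&gt;
On hypervisor A, such an interface could be created roughly like this (the values match the example output below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    id 10 local 172.25.35.2 remote 172.25.35.3 destination-port 4790 \&lt;br /&gt;
    master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;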
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of the empty output shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking somewhat like the following XML. Make sure they are set to autostart to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
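&lt;br /&gt;
If you have not defined them yet, save XML like the above as, say, &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (hypothetical file names), then define, autostart, and start the networks:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;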
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
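&lt;br /&gt;
As a sketch, assuming the XML above is saved as &amp;lt;code&amp;gt;controlplane1.xml&amp;lt;/code&amp;gt; (the file name is an arbitrary choice), creating the backing disk image and defining the domain would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
$ sudo virsh define controlplane1.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;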
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (it defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it)&lt;br /&gt;
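&lt;br /&gt;
For example, registering another node with custom credentials and a restricted listen address (the node name, port, and credentials below are made up) might look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6212 --username=bmcadmin --password=s3cret --address=10.1.1.2 controlplane2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;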
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
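&lt;br /&gt;
Power control works the same way; for example, these commands (safe to try at this point, since nothing is installed on the VM yet) power it on and back off:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis power on&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis power off&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;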
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with the DNS domain; for example, &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already noted, DHCP address assignment to external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
First, make sure your &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file is on the provisioner.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -l pull-secret&lt;br /&gt;
-rw-r-----. 1 provisioner provisioner 2734 Oct 27 12:21 pull-secret&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, after downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; (if you haven't done so already)...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''NOTE'': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=51</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=51"/>
				<updated>2024-01-26T09:01:19Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installer Configuration */ added extract tools&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly, and it may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
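&lt;br /&gt;
If it reports &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;, one way to enable it persistently is a drop-in file (the file name below is an arbitrary choice):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo &amp;quot;net.ipv4.ip_forward = 1&amp;quot; | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;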
&lt;br /&gt;
The Linux network settings need to include two '''Linux''' bridges: a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want your cluster to span multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface as a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any VXLAN ID not already in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
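&lt;br /&gt;
As a sketch, creating such a connection with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; on hypervisor A (using the same addresses, VXLAN ID, and destination port as in the example below) might look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;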
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will, of course, initially also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart, for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
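&lt;br /&gt;
Assuming the two definitions are saved as &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (with the &amp;lt;code&amp;gt;uuid&amp;lt;/code&amp;gt; elements removed so libvirt generates them), defining, autostarting, and starting them could look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;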
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64 GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
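&lt;br /&gt;
As a sketch, assuming the definition above is saved as &amp;lt;code&amp;gt;controlplane1.xml&amp;lt;/code&amp;gt; (a hypothetical file name), the backing disk image and the domain can be created like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
$ sudo virsh define controlplane1.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;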
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (it defaults to all addresses; use the &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; option to restrict it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
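&lt;br /&gt;
As an illustration, registering a hypothetical node with non-default credentials and a restricted listen address could look like this (the node name and values are made up):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6212 --username=ocpadmin --password=s3cret --address=10.1.1.2 worker1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;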
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
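&lt;br /&gt;
Power control works the same way; for example (assuming the default credentials as above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password power on&lt;br /&gt;
Chassis Power Control: Up/On&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;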
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x00, slot 0x03'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already mentioned, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
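&lt;br /&gt;
These values all end up in the installer's &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt;. As a heavily abbreviated sketch (the VIPs are illustrative, the other values match the examples used throughout this guide, and most fields are omitted):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
    - 172.25.35.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
    - 172.25.35.11&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    hosts:&lt;br /&gt;
    - name: controlplane1&lt;br /&gt;
      role: master&lt;br /&gt;
      bootMACAddress: '52:54:00:00:fb:11'&lt;br /&gt;
      bmc:&lt;br /&gt;
        address: ipmi://172.25.35.2:6211&lt;br /&gt;
        username: admin&lt;br /&gt;
        password: password&lt;br /&gt;
pullSecret: '...'&lt;br /&gt;
sshKey: '...'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;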
&lt;br /&gt;
=== Preparing the Software ===&lt;br /&gt;
&lt;br /&gt;
After downloading &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -OL https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
100 60.9M  100 60.9M    0     0  4287k      0  0:00:14  0:00:14 --:--:-- 5642k&lt;br /&gt;
&lt;br /&gt;
$ tar xf openshift-client-linux.tar.gz oc&lt;br /&gt;
&lt;br /&gt;
$ ./oc version&lt;br /&gt;
Client Version: 4.14.9&lt;br /&gt;
Kustomize Version: v5.0.1&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./oc /usr/local/bin/oc&lt;br /&gt;
$ rm -f ./oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...and optionally generating a bash completion file...&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ oc completion bash &amp;gt; oc.completion&lt;br /&gt;
$ sudo cp oc.completion /etc/bash_completion.d/oc&lt;br /&gt;
$ rm -f oc.completion&lt;br /&gt;
$ source /etc/bash_completion.d/oc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
...you can use it to extract &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; from the release image you intend to use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/release.txt | grep &amp;quot;Pull From:&amp;quot;&lt;br /&gt;
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ oc adm release extract --registry-config=pull-secret --command=openshift-baremetal-install --to=. \&lt;br /&gt;
    quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
&lt;br /&gt;
$ ./openshift-baremetal-install version&lt;br /&gt;
./openshift-baremetal-install 4.14.9&lt;br /&gt;
built from commit dfafb5ca972a6ed4677257aebfe4f284ac020830&lt;br /&gt;
release image quay.io/openshift-release-dev/ocp-release@sha256:f5eaf0248779a0478cfd83f055d56dc7d755937800a68ad55f6047c503977c44&lt;br /&gt;
release architecture amd64&lt;br /&gt;
&lt;br /&gt;
$ sudo cp ./openshift-baremetal-install /usr/local/bin/&lt;br /&gt;
&lt;br /&gt;
$ openshift-baremetal-install completion bash &amp;gt; oinst.completion&lt;br /&gt;
$ sudo cp oinst.completion /etc/bash_completion.d/oinst&lt;br /&gt;
$ rm -f oinst.completion&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''NOTE'': The extraction process can take up to 5 minutes, depending on your network speed and Quay.io responsiveness.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=50</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=50"/>
				<updated>2024-01-26T07:58:54Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installer Configuration */ add node name&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM, with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
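&lt;br /&gt;
If it shows &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;, you can enable it persistently with a &amp;lt;code&amp;gt;sysctl&amp;lt;/code&amp;gt; drop-in (the file name is just an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;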
&lt;br /&gt;
The Linux network settings need to include two '''Linux''' bridges: a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
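&lt;br /&gt;
With NetworkManager, the two bridges could be created roughly like this (a sketch; the interface name and addresses match the example configuration below, and gateway/DNS settings are omitted):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge ifname bridge0 con-name bridge0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 172.25.35.2/24&lt;br /&gt;
$ sudo nmcli con add type bridge-slave ifname enp86s0 master bridge0&lt;br /&gt;
$ sudo nmcli con add type bridge ifname provbr0 con-name provbr0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;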
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface as a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
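&lt;br /&gt;
With &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt;, such an interface could be created on hypervisor A roughly like this (a sketch matching the settings shown below):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;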
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course initially also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
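&lt;br /&gt;
Assuming the two definitions are saved as &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (hypothetical file names), they can be defined, set to autostart, and started like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;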
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, remember to change the node name, disk image name, and MAC addresses so that they are unique across the cluster. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
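These per-node changes are mechanical enough to script. The following sketch derives unique values for each node; the helper name, the MAC suffix scheme, and the image path convention are illustrative assumptions mirroring the example definition above:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: derive per-node values for the libvirt domain XML shown above.
# The naming and MAC suffix scheme are assumptions -- adapt to your own.

def node_config(name: str, index: int) -> dict:
    """Return a unique disk path and MAC addresses for one node."""
    return {
        "name": name,
        "disk": f"/var/lib/libvirt/images/{name}-vda.qcow2",
        # keep the locally administered 52:54:00 prefix used by libvirt
        "prov_mac": f"52:54:00:00:fb:{index:02x}",  # provbr0 side
        "ext_mac": f"52:54:00:00:fa:{index:02x}",   # bridge0 side
    }

nodes = [node_config(f"controlplane{i}", 0x10 + i) for i in (1, 2, 3)]
nodes += [node_config(f"worker{i}", 0x20 + i) for i in (1, 2)]
```
With these values in hand, each node's domain XML is just the template above with the name, disk source file, and MAC address elements substituted.&lt;br /&gt;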
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a unique port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
There are, of course, options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, defaulting to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (the default is all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or a similar tool.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, you configured the provisioning network as an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with the DNS domain; for example, &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; becomes &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
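These two records can be derived mechanically from the cluster name and parent domain; a trivial sketch (&amp;lt;code&amp;gt;cluster_fqdns&amp;lt;/code&amp;gt; is a hypothetical helper):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: derive the DNS names the installer expects from the cluster
# name and parent domain. Both records must exist in your DNS (and point
# at the respective VIPs) before the installation starts.

def cluster_fqdns(cluster: str, domain: str) -> dict:
    base = f"{cluster}.{domain}"
    return {
        "api": f"api.{base}",        # A record -> API server VIP
        "apps": f"*.apps.{base}",    # wildcard record -> ingress VIP
    }
```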
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* node name&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above; since the PCI address of the interface is ''bus 0x00, slot 0x03'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
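The mapping from the PCI address to the predictable interface name can be sketched as follows (simplified: the real systemd naming scheme has additional policies and prefixes, so treat this as an approximation valid for this particular flat PCI layout):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of systemd's predictable-naming pattern for PCI network devices:
# enp<bus>s<slot>. Simplified; firmware/ACPI-based variants also exist.

def pci_to_ifname(bus: int, slot: int) -> str:
    return f"enp{bus}s{slot}"
```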
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=49</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=49"/>
				<updated>2024-01-26T07:25:56Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installer Configuration */ add dns details&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
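The totals above follow directly from the per-node sizing; a quick arithmetic check:&lt;br /&gt;
&lt;br /&gt;
```python
# Quick check of the cluster resource totals quoted above.
CONTROL_PLANES, COMPUTES = 3, 2
ram_gib = CONTROL_PLANES * 32 + COMPUTES * 20  # GiB of RAM
vcpus = CONTROL_PLANES * 12 + COMPUTES * 8     # virtual CPUs
```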
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing major. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
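A minimal preflight sketch that verifies these artifacts are in place before you start; the default paths and the helper name are illustrative assumptions:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: verify the provisioner host has the required artifacts.
# Default paths are illustrative -- adjust to where you keep the files.
import os
import shutil

def preflight(pull_secret="pull-secret.json",
              ssh_key="~/.ssh/id_ed25519.pub"):
    """Return a list of missing tools/files (empty list means ready)."""
    missing = []
    for tool in ("oc", "ipmitool"):
        if shutil.which(tool) is None:
            missing.append(tool)
    for path in (pull_secret, os.path.expanduser(ssh_key)):
        if not os.path.isfile(path):
            missing.append(path)
    return missing
```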
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Linux network settings need to be configured with two '''Linux''' bridges: a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so it's goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface as a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than the empty output shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
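The pairwise definitions above could also be generated rather than typed in. This sketch emits &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; command strings for each host (a hypothetical helper; verify the option names against your NetworkManager version before running anything, and note that with more than two hosts a naive full mesh of learning VXLANs can loop flooded frames, so plan the topology carefully):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch: generate nmcli command strings to create VXLAN links between
# the provisioning bridges of several hypervisors. Connection names and
# the destination port mirror the example above; this only builds the
# strings, it does not run anything.

def vxlan_mesh(host_ips, vxlan_id=10, bridge="provbr0", port=4790):
    cmds = {}
    for local in host_ips:
        cmds[local] = []
        for n, remote in enumerate(ip for ip in host_ips if ip != local):
            # suffix extra links so connection names stay unique per host
            name = f"{bridge}-vxlan{vxlan_id}" + (f"-{n}" if n else "")
            cmds[local].append(
                f"nmcli con add type vxlan con-name {name} ifname {name} "
                f"slave-type bridge master {bridge} "
                f"id {vxlan_id} local {local} remote {remote} "
                f"vxlan.destination-port {port}"
            )
    return cmds
```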
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart to save yourself a headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is the ability to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt; which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, remember to change the node name, disk image name, and MAC addresses so that they are unique across the cluster. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a unique port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
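&lt;br /&gt;
Registering every node by hand gets tedious. The following small sketch prints the registration commands for all five nodes; the node names and the base port number are assumptions taken from this guide's examples, so adjust them to your own naming scheme before piping the output to &amp;lt;code&amp;gt;sh&amp;lt;/code&amp;gt;.&lt;br /&gt;

```shell
# Print the vbmc registration commands for all nodes; review the output,
# then pipe it to sh on the hypervisor host.
# Node names and the base port (6211) are hypothetical - adjust to your setup.
port=6211
for node in controlplane1 controlplane2 controlplane3 worker1 worker2; do
    echo "vbmc add --port=${port} ${node}"
    echo "vbmc start ${node}"
    port=$((port + 1))
done
```

Note down which port ends up assigned to which node - you will need the mapping later for the installer configuration.&lt;br /&gt;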
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials: options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt; (they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary: options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;&lt;br /&gt;
* a specific IP address to listen on: option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; (defaults to all addresses)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, the provisioning network is an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP (&amp;lt;code&amp;gt;api.mycluster.example.com&amp;lt;/code&amp;gt; should point to it)&lt;br /&gt;
** ingress load balancer VIP (any host within &amp;lt;code&amp;gt;apps.mycluster.example.com&amp;lt;/code&amp;gt; should resolve to it, usually via a wildcard record)&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node you will need its:&lt;br /&gt;
&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (re [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'' it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
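&lt;br /&gt;
With all of the above gathered, the relevant part of &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt; will look roughly like the sketch below. The VIP addresses and credentials are placeholders built from the examples in this guide, and only a single host entry is shown; double-check the exact field names against your installer version (newer releases, for instance, prefer the &amp;lt;code&amp;gt;apiVIPs&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;ingressVIPs&amp;lt;/code&amp;gt; list forms).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIP: 172.25.35.100&lt;br /&gt;
    ingressVIP: 172.25.35.101&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://172.25.35.2:6211&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;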
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=48</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=48"/>
				<updated>2024-01-26T07:18:39Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Prerequisites */ add DHCP requirement&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate to hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
'''IMPORTANT''': The external IP addresses of cluster nodes must be assigned by your infrastructure DHCP server.&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
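&lt;br /&gt;
If it comes back as &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;, a persistent way to enable it on distributions that read &amp;lt;code&amp;gt;/etc/sysctl.d/&amp;lt;/code&amp;gt; is a drop-in file (the file name here is arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl --system&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;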
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
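&lt;br /&gt;
If you manage the host with NetworkManager, the two bridges can be created along these lines; the interface names and the address are taken from the example configuration below, so adjust them to your environment.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con add type bridge con-name bridge0 ifname bridge0&lt;br /&gt;
$ nmcli con add type ethernet con-name bridge0-port1 ifname enp86s0 master bridge0&lt;br /&gt;
$ nmcli con add type bridge con-name provbr0 ifname provbr0 ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;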
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so there goes that idea.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface that is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with the master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is not used for anything else, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
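&lt;br /&gt;
With NetworkManager, such an interface can be created along these lines; the addresses match the example configurations below, and the connection name is arbitrary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;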
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart to save yourself a headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
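&lt;br /&gt;
Assuming the above XML is saved into &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (the file names are arbitrary), defining, autostarting, and starting the networks looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;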
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A few things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
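&lt;br /&gt;
The backing disk image referenced in the XML does not exist yet, so remember to create it before defining and starting the domain. For example, assuming the XML above is saved as &amp;lt;code&amp;gt;controlplane1.xml&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
$ sudo virsh define controlplane1.xml&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;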
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom set of IPMI admin credentials: options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt; (they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom libvirt URI and credentials if necessary: options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;&lt;br /&gt;
* a specific IP address to listen on: option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; (defaults to all addresses)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct (it can be routed), but if, as in our example, the provisioning network is an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP&lt;br /&gt;
** ingress load balancer VIP&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
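&lt;br /&gt;
For orientation, here is a rough sketch of how these bits map onto &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt;. All values are placeholders taken from the examples in this guide, the field set is incomplete, and exact field names (the VIP fields in particular) vary between OCP releases, so check the installer documentation for your version:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
    - 172.25.35.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
    - 172.25.35.11&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    hosts:&lt;br /&gt;
    - name: controlplane1&lt;br /&gt;
      role: master&lt;br /&gt;
      bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
      bmc:&lt;br /&gt;
        address: ipmi://172.25.35.2:6211&lt;br /&gt;
        username: admin&lt;br /&gt;
        password: password&lt;br /&gt;
pullSecret: '...'&lt;br /&gt;
sshKey: '...'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;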
&lt;br /&gt;
As noted earlier, DHCP address assignment to external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=47</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=47"/>
				<updated>2024-01-26T07:16:16Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Virtual Machine Configuration */ reorg&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl -n net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
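&lt;br /&gt;
If it shows &amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;, one way to enable it persistently is via a &amp;lt;code&amp;gt;sysctl.d&amp;lt;/code&amp;gt; drop-in (the file name here is just an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;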
&lt;br /&gt;
The Linux network settings need to provide two '''Linux''' bridges: a public one, and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* the public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* the private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge, since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; does not currently include an Open vSwitch provider, so that option is out.&lt;br /&gt;
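&lt;br /&gt;
If the bridges do not exist yet, they can be created with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; roughly as follows - a sketch, where the connection names, the enslaved interface, and the addressing are examples you should adapt:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge con-name bridge0 ifname bridge0 ipv4.method manual ipv4.addresses 172.25.35.2/24&lt;br /&gt;
$ sudo nmcli con add type bridge-slave con-name bridge0-port1 ifname enp86s0 master bridge0&lt;br /&gt;
$ sudo nmcli con add type bridge con-name provbr0 ifname provbr0 ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;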
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface configured as a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with the master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is not used for anything else, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
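&lt;br /&gt;
With &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt;, creating such an interface on hypervisor A could look roughly like this (addresses and names are examples; adapt them to your hosts):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    id 10 local 172.25.35.2 remote 172.25.35.3 vxlan.destination-port 4790 \&lt;br /&gt;
    master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;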
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than coming up empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions looking a bit like the following XML. Make sure they are set to autostart to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
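&lt;br /&gt;
Assuming you saved the two definitions into files, defining, autostarting, and starting the networks goes along these lines (the file names are examples):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;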
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== VM Definitions ===&lt;br /&gt;
&lt;br /&gt;
The virtual machines need to be configured with a sufficient amount of compute resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for external connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
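&lt;br /&gt;
The disk image referenced in the definition can be pre-created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; (the size here is just an example above the 64 GiB minimum):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 120G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;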
&lt;br /&gt;
=== IPMI BMC ===&lt;br /&gt;
&lt;br /&gt;
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, defaulting to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt;; the default is to listen on all addresses).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or a similar client.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and the provisioning networks of the OpenShift cluster-to-be. The access need not be direct - it can be routed - but if you, as in our example, configured the provisioning network as an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP&lt;br /&gt;
** ingress load balancer VIP&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (see [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'', it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
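&lt;br /&gt;
For orientation, here is a rough sketch of how these bits map onto &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt;. All values are placeholders taken from the examples in this guide, the field set is incomplete, and exact field names (the VIP fields in particular) vary between OCP releases, so check the installer documentation for your version:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
controlPlane:&lt;br /&gt;
  name: master&lt;br /&gt;
  replicas: 3&lt;br /&gt;
compute:&lt;br /&gt;
- name: worker&lt;br /&gt;
  replicas: 2&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
    - 172.25.35.10&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
    - 172.25.35.11&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    hosts:&lt;br /&gt;
    - name: controlplane1&lt;br /&gt;
      role: master&lt;br /&gt;
      bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
      bmc:&lt;br /&gt;
        address: ipmi://172.25.35.2:6211&lt;br /&gt;
        username: admin&lt;br /&gt;
        password: password&lt;br /&gt;
pullSecret: '...'&lt;br /&gt;
sshKey: '...'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;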
&lt;br /&gt;
As noted earlier, DHCP address assignment to external interfaces is not managed by the installer; it must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=46</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=46"/>
				<updated>2024-01-26T07:13:38Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installer Configuration */ enumerated bits needed for install&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is probably best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, a few extra steps are needed, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
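&lt;br /&gt;
If it is not, a minimal way to enable it persistently is a &amp;lt;code&amp;gt;sysctl.d&amp;lt;/code&amp;gt; drop-in (a sketch; the file name here is arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;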
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
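As a sketch, both bridges could be created with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; along these lines, assuming the interface name and IP addresses from the example configuration shown below (add gateway and DNS settings as appropriate for your environment):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge ifname bridge0 con-name bridge0 \&lt;br /&gt;
      ipv4.method manual ipv4.addresses 172.25.35.2/24&lt;br /&gt;
$ sudo nmcli con add type bridge-slave ifname enp86s0 master bridge0&lt;br /&gt;
$ sudo nmcli con add type bridge ifname provbr0 con-name provbr0 \&lt;br /&gt;
      ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;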
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so it's goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface configured as a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with the master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is not otherwise in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
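A sketch of creating such an interface on hypervisor A with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt;, matching the settings shown below (swap &amp;lt;code&amp;gt;vxlan.local&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;vxlan.remote&amp;lt;/code&amp;gt; on host B):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 \&lt;br /&gt;
      vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
      vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;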
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of returning empty output as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
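If the networks do not exist yet, you can define them from XML files with content like the above (omit the &amp;lt;code&amp;gt;uuid&amp;lt;/code&amp;gt; element and &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will generate one; the file names here are arbitrary):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;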
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller (BMC) for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;, just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
This section will tie into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The virtual machines also need to be configured with sufficient resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for public connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
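For example, a blank 64 GiB image for the node above can be created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;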
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
&lt;br /&gt;
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course, there are options. For any VM you add, you can specify a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;); a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, defaulting to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;); and a custom IP address to listen on (option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt;, defaulting to all addresses).&lt;br /&gt;
&lt;br /&gt;
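As a purely hypothetical example combining those options (the node name, credentials, and listen address here are made up):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port 6212 --address 10.1.1.2 --username ocpadmin --password s3cret worker1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;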
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which merely opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct; it can be routed. However, if, as in our example, you configured the provisioning network as an isolated virtual bridge, you are best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
=== Gathering the Bits Together ===&lt;br /&gt;
&lt;br /&gt;
The first step is to make sure the following artifacts are available:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt;&lt;br /&gt;
* an SSH public key for compute node access&lt;br /&gt;
&lt;br /&gt;
The global cluster network settings that we will need to configure are:&lt;br /&gt;
&lt;br /&gt;
* the parent DNS domain of the cluster (such as &amp;lt;code&amp;gt;example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the name of the cluster (concatenated with DNS domain, for example &amp;lt;code&amp;gt;mycluster&amp;lt;/code&amp;gt; will become &amp;lt;code&amp;gt;mycluster.example.com&amp;lt;/code&amp;gt;)&lt;br /&gt;
* the provisioning network CIDR (in our case, it can be any IP address block not overlapping with the external network as the provisioning network is isolated)&lt;br /&gt;
* from the external network address space, a designated:&lt;br /&gt;
** API server VIP&lt;br /&gt;
** ingress load balancer VIP&lt;br /&gt;
&lt;br /&gt;
Additionally, for each node, you will need its:&lt;br /&gt;
&lt;br /&gt;
* provisioning interface MAC address&lt;br /&gt;
* IPMI BMC address and port&lt;br /&gt;
* IPMI BMC credentials&lt;br /&gt;
* the name of the provisioning interface as seen from within the VM (re [[#Virtual Machine Configuration]] above - since the PCI address of the interface is ''bus 0x0 slot 0x3'' it will be named &amp;lt;code&amp;gt;enp0s3&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
As already said, DHCP address assignment to external interfaces is not managed by the installer. It must be handled by your infrastructure.&lt;br /&gt;
&lt;br /&gt;
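All of these settings ultimately land in the installer's &amp;lt;code&amp;gt;install-config.yaml&amp;lt;/code&amp;gt;. As a rough, incomplete sketch of its shape for the bare metal platform (the VIPs and other values are placeholders derived from the examples above; consult the official documentation for the full schema):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
baseDomain: example.com&lt;br /&gt;
metadata:&lt;br /&gt;
  name: mycluster&lt;br /&gt;
platform:&lt;br /&gt;
  baremetal:&lt;br /&gt;
    apiVIPs:&lt;br /&gt;
      - 172.25.35.100&lt;br /&gt;
    ingressVIPs:&lt;br /&gt;
      - 172.25.35.101&lt;br /&gt;
    provisioningNetworkCIDR: 10.1.1.0/24&lt;br /&gt;
    provisioningNetworkInterface: enp0s3&lt;br /&gt;
    hosts:&lt;br /&gt;
      - name: controlplane1&lt;br /&gt;
        role: master&lt;br /&gt;
        bootMACAddress: 52:54:00:00:fb:11&lt;br /&gt;
        bmc:&lt;br /&gt;
          address: ipmi://172.25.35.2:6211&lt;br /&gt;
          username: admin&lt;br /&gt;
          password: password&lt;br /&gt;
pullSecret: '...'&lt;br /&gt;
sshKey: '...'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;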
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=45</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=45"/>
				<updated>2024-01-25T18:47:07Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Installer Configuration */ intro&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* Firewall restrictions are not covered here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it is probably best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, a few extra steps are needed, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so it's goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface configured as a &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt; slave with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
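&lt;br /&gt;
With NetworkManager, the connection on hypervisor A could be created along these lines (a sketch only; the connection name, addresses, and port match the example below and should be adjusted to your environment):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;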
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will initially also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of coming back empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions that look a bit like the following XML. Make sure they are set to autostart for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
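If the networks do not exist yet, you can define and enable them from XML files; a minimal sketch, assuming the definitions above are saved as &amp;lt;code&amp;gt;external.xml&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provisioning.xml&amp;lt;/code&amp;gt; (hypothetical filenames):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;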
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
This section will tie into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The virtual machines also need to be configured with sufficient resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for public connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64 GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
&lt;br /&gt;
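The backing disk image for each node can be pre-created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt;; a sketch, using the path from the example domain XML and the 64 GiB minimum size:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;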
For OCP IPI to be able to perform the installation properly, each virtual machine must be registered with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assigned a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root: it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
Of course there are options. For any VM you add, you can specify a custom libvirt URI and credentials if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;), a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;, defaulting to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;), and a custom IP address to listen on (defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
This is where it all comes together. You need to execute the steps in this section (and the next) on the ''provisioner'' host, that is, a system that has access to both the external and provisioning networks of the OpenShift cluster-to-be. The access need not be direct, it can be routed; but if, as in our example, you configured the provisioning network on a virtual bridge, you will be best off creating an additional VM that is directly connected to both bridges.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=44</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=44"/>
				<updated>2024-01-25T13:28:55Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Virtual Machine Configuration */ added vbmc notes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM, with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package is required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
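If it is not, you can enable it on the spot and persist the setting across reboots (the drop-in filename below is just an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo sysctl -w net.ipv4.ip_forward=1&lt;br /&gt;
$ echo &amp;quot;net.ipv4.ip_forward = 1&amp;quot; | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;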
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface configured as a &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt; slave with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
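&lt;br /&gt;
With NetworkManager, the connection on hypervisor A could be created along these lines (a sketch only; the connection name, addresses, and port match the example below and should be adjusted to your environment):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan con-name provbr0-vxlan10 ifname provbr0-vxlan10 \&lt;br /&gt;
    vxlan.id 10 vxlan.local 172.25.35.2 vxlan.remote 172.25.35.3 \&lt;br /&gt;
    vxlan.destination-port 4790 master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;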
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will initially also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, instead of coming back empty as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions that look a bit like the following XML. Make sure they are set to autostart for the least headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
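If these networks do not exist yet, they can be created from definitions like the above. A hedged sketch only: the file names are illustrative, the `uuid` element is left out so libvirt generates one, and the `virsh` calls are skipped entirely when libvirt is absent.

```shell
# Write the two bridged network definitions shown above to files.
cat > external.xml <<'EOF'
<network>
  <name>external</name>
  <forward mode='bridge'/>
  <bridge name='bridge0'/>
</network>
EOF

cat > provisioning.xml <<'EOF'
<network>
  <name>provisioning</name>
  <forward mode='bridge'/>
  <bridge name='provbr0'/>
</network>
EOF

# Define, autostart, and start each network (only if virsh is available).
if command -v virsh >/dev/null 2>&1; then
  for net in external provisioning; do
    sudo virsh net-define "${net}.xml"
    sudo virsh net-autostart "${net}"
    sudo virsh net-start "${net}"
  done
fi
```

Defining them persistently (rather than with `net-create`) is what makes the autostart flag stick across host reboots.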
Additionally, make sure the default storage pool is big enough, although that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now associate a UDP port with a virtual machine defined on the hypervisor host, and have &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
This section will tie into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The virtual machines also need to be configured with sufficient resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for public connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be at least 64 GiB in size, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
&lt;br /&gt;
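Those per-node substitutions are easy to script. A minimal sketch that templates only the fields that must differ per node; the placeholder `N` and the MAC scheme are mine, and the full domain XML above would be processed the same way.

```shell
# Template containing only the per-node fields: name, disk path, MACs.
template="<name>controlplaneN</name>
<source file='/var/lib/libvirt/images/controlplaneN-vda.qcow2'/>
<mac address='52:54:00:00:fb:1N'/>
<mac address='52:54:00:00:fa:1N'/>"

# Stamp out one fragment per control plane node by substituting N.
for i in 1 2 3; do
  printf '%s\n' "$template" | sed "s/N/${i}/g" > "controlplane${i}.xml"
done
```

Each generated definition would then be registered with `virsh define`, and its backing disk created with `qemu-img create -f qcow2`.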
What is important for OCP IPI to be able to perform the installation properly is to register each virtual machine with &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; and assign it a port.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add --port=6211 controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | down    | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ vbmc list&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| Domain name       | Status  | Address | Port |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
| controlplane1     | running | ::      | 6211 |&lt;br /&gt;
+-------------------+---------+---------+------+&lt;br /&gt;
$ sudo ss -aunp | grep 6211&lt;br /&gt;
UNCONN 0      0                        *:6211             *:*    users:((&amp;quot;vbmcd&amp;quot;,pid=766290,fd=21))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is no need to run the &amp;lt;code&amp;gt;vbmc&amp;lt;/code&amp;gt; client as root; it is the daemon that runs as root and can see all the VMs accessible through the &amp;lt;code&amp;gt;qemu:///system&amp;lt;/code&amp;gt; URI.&lt;br /&gt;
&lt;br /&gt;
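Registering and starting a BMC for every node can be scripted in one go. A sketch: port 6211 is from the example above, but the remaining ports and node names are an illustrative scheme of mine, and the `vbmc` calls only run when the client is actually installed.

```shell
# Domain-to-port plan; echo it either way, register when vbmc exists.
nodes="controlplane1:6211 controlplane2:6212 controlplane3:6213 worker1:6221 worker2:6222"

for entry in $nodes; do
  domain=${entry%:*}
  port=${entry#*:}
  echo "${domain} -> BMC port ${port}"
  if command -v vbmc >/dev/null 2>&1; then
    vbmc add --port="${port}" "${domain}"
    vbmc start "${domain}"
  fi
done
```

Keeping the mapping in one place makes it easy to feed the same ports into the installer configuration later.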
Of course there are options. For any VM you add, you can specify:&lt;br /&gt;
&lt;br /&gt;
* a custom libvirt URI and credentials, if necessary (options &amp;lt;code&amp;gt;--libvirt-uri&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;--libvirt-sasl-username&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;--libvirt-sasl-password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom set of IPMI admin credentials (options &amp;lt;code&amp;gt;--username&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;--password&amp;lt;/code&amp;gt;; they default to &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;password&amp;lt;/code&amp;gt;)&lt;br /&gt;
* a custom IP address to listen on (defaults to all addresses; use option &amp;lt;code&amp;gt;--address&amp;lt;/code&amp;gt; to restrict it)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc show controlplane1&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| Property              | Value             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
| active                | True              |&lt;br /&gt;
| address               | ::                |&lt;br /&gt;
| domain_name           | controlplane1     |&lt;br /&gt;
| libvirt_sasl_password | ***               |&lt;br /&gt;
| libvirt_sasl_username | None              |&lt;br /&gt;
| libvirt_uri           | qemu:///system    |&lt;br /&gt;
| password              | ***               |&lt;br /&gt;
| port                  | 6211              |&lt;br /&gt;
| status                | running           |&lt;br /&gt;
| username              | admin             |&lt;br /&gt;
+-----------------------+-------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once started (which just opens the port), you can test the BMC connection using &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or similar.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ipmitool -I lanplus -H localhost -p 6211 -U admin -P password chassis status&lt;br /&gt;
System Power         : off&lt;br /&gt;
Power Overload       : false&lt;br /&gt;
Power Interlock      : inactive&lt;br /&gt;
Main Power Fault     : false&lt;br /&gt;
Power Control Fault  : false&lt;br /&gt;
Power Restore Policy : always-off&lt;br /&gt;
Last Power Event     :&lt;br /&gt;
Chassis Intrusion    : inactive&lt;br /&gt;
Front-Panel Lockout  : inactive&lt;br /&gt;
Drive Fault          : false&lt;br /&gt;
Cooling/Fan Fault    : false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That's it! We're ready to install OCP!&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=43</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=43"/>
				<updated>2024-01-25T13:12:34Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Virtual Machine Configuration */ small bits&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
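The RAM and vCPU totals above follow directly from the per-node figures. A quick sanity-check sketch; the host-minimum numbers are mine, derived from the stated overcommit ceilings.

```shell
# Cluster totals: 3 control plane nodes + 2 compute nodes.
ram=$(( 3 * 32 + 2 * 20 ))     # GiB
vcpus=$(( 3 * 12 + 2 * 8 ))
echo "cluster total: ${ram} GiB RAM, ${vcpus} vCPUs"

# Minimum host resources at the stated overcommit ceilings
# (1.5x for RAM, 1.3x for vCPU), via integer ceiling division.
min_host_ram=$(( (ram * 10 + 14) / 15 ))
min_host_cpus=$(( (vcpus * 10 + 12) / 13 ))
echo "host minimum: ${min_host_ram} GiB RAM, ${min_host_cpus} CPUs"
```

Anything less than those host minimums means exceeding the overcommit ratios, which the list above warns against.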
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to run the workloads spread across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
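If it is not, it can be enabled persistently with a sysctl drop-in; the file name here is illustrative, and the setting is applied with `sudo sysctl --system` (or a reboot).

```ini
# /etc/sysctl.d/90-ip-forward.conf
net.ipv4.ip_forward = 1
```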
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that is not an option.&lt;br /&gt;
&lt;br /&gt;
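With NetworkManager, such a pair of bridges can be created along these lines. This is a sketch only: the connection names and the availability guard are mine, the addresses match the example configuration shown below, and gateway/DNS settings for the public bridge are omitted.

```shell
# Public bridge with the physical NIC enslaved, plus a standalone
# provisioning bridge carrying only a static address.
create_bridges() {
  nmcli con add type bridge ifname bridge0 con-name bridge0 \
    ipv4.method manual ipv4.addresses 172.25.35.2/24
  nmcli con add type bridge-slave ifname enp86s0 master bridge0
  nmcli con add type bridge ifname provbr0 con-name provbr0 \
    ipv4.method manual ipv4.addresses 10.1.1.2/24
}

# Only attempt this when NetworkManager is available and running.
if command -v nmcli >/dev/null 2>&1 && nmcli -t -f RUNNING general 2>/dev/null | grep -q running; then
  create_bridges
fi
```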
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface configured as a &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt; slave with &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt; as its master. Choose any VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
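Such an interface can be created with `nmcli` along these lines. A sketch under the assumptions of the two-host example below: the helper function and availability guard are mine, and on host B the local and remote addresses are simply swapped.

```shell
# Create a VXLAN slave interface on the provisioning bridge.
add_vxlan() {
  # $1 = connection/interface name, $2 = local IP, $3 = remote IP
  nmcli con add type vxlan ifname "$1" con-name "$1" \
    vxlan.id 10 vxlan.local "$2" vxlan.remote "$3" \
    vxlan.destination-port 4790 \
    master provbr0 slave-type bridge
}

# Only attempt this when NetworkManager is available and running.
if command -v nmcli >/dev/null 2>&1 && nmcli -t -f RUNNING general 2>/dev/null | grep -q running; then
  add_vxlan provbr0-vxlan10 172.25.35.2 172.25.35.3   # on hypervisor A
fi
```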
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave of &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, rather than the empty output shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; needs to know about these network bridges in order to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions similar to the following XML. Mark them as autostarted to save yourself headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, make sure the default storage pool is big enough, although that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can now associate a UDP port with a virtual machine defined on the hypervisor host, and have &amp;lt;code&amp;gt;vbmcd&amp;lt;/code&amp;gt; simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
This section will tie into the [[#Network Settings]] section above. You need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The virtual machines also need to be configured with sufficient resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt XML&amp;lt;/code&amp;gt; would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (for public connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
* the disk image needs to be 64GiB in size at the minimum, but you can make it larger and/or add more disk images if you intend to use the local storage operator&lt;br /&gt;
&lt;br /&gt;
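For reference, the blank qcow2 disk image referenced in the domain XML above can be created with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; along these lines (adjust the path and size to your setup):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo qemu-img create -f qcow2 /var/lib/libvirt/images/controlplane1-vda.qcow2 64G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;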
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=42</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=42"/>
				<updated>2024-01-25T10:59:58Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Virtual Machine Configuration */ libvirt xml and notes added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, a few extra steps are required, but nothing major. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
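&lt;br /&gt;
If it is not, a quick sketch of enabling it persistently (the drop-in file name is an arbitrary choice):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo &amp;quot;net.ipv4.ip_forward = 1&amp;quot; | sudo tee /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/90-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;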
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
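If the bridges do not exist yet, something along these lines with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; should get you there (interface names match my example configuration below; IP addressing is up to you):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge ifname bridge0 con-name bridge0&lt;br /&gt;
$ sudo nmcli con add type ethernet ifname enp86s0 con-name bridge0-port master bridge0&lt;br /&gt;
$ sudo nmcli con add type bridge ifname provbr0 con-name provbr0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;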
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt; type interface, which is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, and the master is set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
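Creating such a connection with &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; could look roughly like this on hypervisor A (swap the &amp;lt;code&amp;gt;local&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;remote&amp;lt;/code&amp;gt; addresses on hypervisor B):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type vxlan ifname provbr0-vxlan10 con-name provbr0-vxlan10 \&lt;br /&gt;
    id 10 local 172.25.35.2 remote 172.25.35.3 destination-port 4790 \&lt;br /&gt;
    master provbr0 slave-type bridge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;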
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than returning empty output as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, similar to the following XML. Set them to autostart to save yourself some headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
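If these networks do not exist yet, save each definition (without the &amp;lt;code&amp;gt;uuid&amp;lt;/code&amp;gt; element) into a file and create it along these lines, repeating for &amp;lt;code&amp;gt;provisioning&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define external.xml&lt;br /&gt;
$ sudo virsh net-autostart external&lt;br /&gt;
$ sudo virsh net-start external&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;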
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of making IPI work on libvirt is being able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up-to-date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt;, which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
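&lt;br /&gt;
As a sketch, registering the &amp;lt;code&amp;gt;controlplane1&amp;lt;/code&amp;gt; domain defined below and verifying it with &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; could look like this (the port and credentials are arbitrary choices):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vbmc add controlplane1 --port 6201 --username admin --password secret&lt;br /&gt;
$ vbmc start controlplane1&lt;br /&gt;
$ ipmitool -I lanplus -H 127.0.0.1 -p 6201 -U admin -P secret power status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;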
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
This section ties into the [[#Network Settings]] section above: you need two bridges on your hypervisor(s), &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The virtual machines also need to be configured with sufficient resources, as per [[#Prerequisites]] above.&lt;br /&gt;
&lt;br /&gt;
An example control plane node definition in &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; XML would look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;domain type='kvm'&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;controlplane1&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;memory unit='GiB'&amp;gt;32&amp;lt;/memory&amp;gt;&lt;br /&gt;
  &amp;lt;currentMemory unit='GiB'&amp;gt;32&amp;lt;/currentMemory&amp;gt;&lt;br /&gt;
  &amp;lt;vcpu placement='static'&amp;gt;12&amp;lt;/vcpu&amp;gt;&lt;br /&gt;
  &amp;lt;os&amp;gt;&lt;br /&gt;
    &amp;lt;type arch='x86_64' machine='q35'&amp;gt;hvm&amp;lt;/type&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='hd'/&amp;gt;&lt;br /&gt;
    &amp;lt;boot dev='network'/&amp;gt;&lt;br /&gt;
    &amp;lt;bootmenu enable='yes'/&amp;gt;&lt;br /&gt;
  &amp;lt;/os&amp;gt;&lt;br /&gt;
  &amp;lt;features&amp;gt;&lt;br /&gt;
    &amp;lt;acpi/&amp;gt;&lt;br /&gt;
    &amp;lt;apic/&amp;gt;&lt;br /&gt;
  &amp;lt;/features&amp;gt;&lt;br /&gt;
  &amp;lt;cpu mode='host-model' check='partial'&amp;gt;&lt;br /&gt;
    &amp;lt;model fallback='allow'/&amp;gt;&lt;br /&gt;
  &amp;lt;/cpu&amp;gt;&lt;br /&gt;
  &amp;lt;clock offset='utc'&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='rtc' tickpolicy='catchup'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='pit' tickpolicy='delay'/&amp;gt;&lt;br /&gt;
    &amp;lt;timer name='hpet' present='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/clock&amp;gt;&lt;br /&gt;
  &amp;lt;on_poweroff&amp;gt;destroy&amp;lt;/on_poweroff&amp;gt;&lt;br /&gt;
  &amp;lt;on_reboot&amp;gt;restart&amp;lt;/on_reboot&amp;gt;&lt;br /&gt;
  &amp;lt;on_crash&amp;gt;destroy&amp;lt;/on_crash&amp;gt;&lt;br /&gt;
  &amp;lt;pm&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-mem enabled='no'/&amp;gt;&lt;br /&gt;
    &amp;lt;suspend-to-disk enabled='no'/&amp;gt;&lt;br /&gt;
  &amp;lt;/pm&amp;gt;&lt;br /&gt;
  &amp;lt;devices&amp;gt;&lt;br /&gt;
    &amp;lt;emulator&amp;gt;/usr/libexec/qemu-kvm&amp;lt;/emulator&amp;gt;&lt;br /&gt;
    &amp;lt;disk type='file' device='disk'&amp;gt;&lt;br /&gt;
      &amp;lt;driver name='qemu' type='qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;source file='/var/lib/libvirt/images/controlplane1-vda.qcow2'/&amp;gt;&lt;br /&gt;
      &amp;lt;target dev='vda' bus='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/disk&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='usb' index='0' model='qemu-xhci' ports='15'/&amp;gt;&lt;br /&gt;
    &amp;lt;controller type='pci' index='0' model='pcie-root'/&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fb:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='provbr0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;interface type='bridge'&amp;gt;&lt;br /&gt;
      &amp;lt;mac address='52:54:00:00:fa:11'/&amp;gt;&lt;br /&gt;
      &amp;lt;source bridge='bridge0'/&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
      &amp;lt;address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/interface&amp;gt;&lt;br /&gt;
    &amp;lt;console type='pty'/&amp;gt;&lt;br /&gt;
    &amp;lt;channel type='unix'&amp;gt;&lt;br /&gt;
      &amp;lt;source mode='bind'/&amp;gt;&lt;br /&gt;
      &amp;lt;target type='virtio' name='org.qemu.guest_agent.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;/channel&amp;gt;&lt;br /&gt;
    &amp;lt;input type='tablet' bus='usb'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='mouse' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;input type='keyboard' bus='ps2'/&amp;gt;&lt;br /&gt;
    &amp;lt;graphics type='vnc' autoport='yes' listen='0.0.0.0'/&amp;gt;&lt;br /&gt;
    &amp;lt;video&amp;gt;&lt;br /&gt;
      &amp;lt;model type='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;/video&amp;gt;&lt;br /&gt;
    &amp;lt;memballoon model='virtio'/&amp;gt;&lt;br /&gt;
    &amp;lt;rng model='virtio'&amp;gt;&lt;br /&gt;
      &amp;lt;backend model='random'&amp;gt;/dev/urandom&amp;lt;/backend&amp;gt;&lt;br /&gt;
    &amp;lt;/rng&amp;gt;&lt;br /&gt;
  &amp;lt;/devices&amp;gt;&lt;br /&gt;
&amp;lt;/domain&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A couple of things to note:&lt;br /&gt;
&lt;br /&gt;
* the first network interface is attached to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, and its PCI address is ''lower'' (0x03), causing it to be the PXE default device&lt;br /&gt;
* the second network interface is attached to &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, and its PCI address is ''higher'' (0x09), making it the second interface (public connections)&lt;br /&gt;
* boot order is set to hard disk first, network second, which means the host will only PXE boot if the disk image is unbootable&lt;br /&gt;
&lt;br /&gt;
When configuring other nodes, simply remember to change the node name, disk image name, and MAC addresses to be unique. Adjust hardware resources accordingly for compute nodes.&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=41</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=41"/>
				<updated>2024-01-25T10:49:50Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Prerequisites */ fix link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, a few extra steps are required, but nothing major. More on that in [[#Installation Spanning Multiple Hypervisors]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so that option is out.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt; type interface, which is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, and the master is set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave even initially, instead of returning empty output as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions that look roughly like the following XML. Set them to autostart to save yourself some headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
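Defining the networks from XML files and marking them as autostart looks roughly like this (a sketch; the file names are arbitrary, and the XML content is as shown above, minus the &amp;lt;code&amp;gt;uuid&amp;lt;/code&amp;gt; element, which libvirt generates):

```shell
# Define, start, and autostart the two libvirt networks (run on the hypervisor).
sudo virsh net-define external.xml
sudo virsh net-start external
sudo virsh net-autostart external

sudo virsh net-define provisioning.xml
sudo virsh net-start provisioning
sudo virsh net-autostart provisioning
```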
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller (BMC) for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small piece of Python software that does: &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt; which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
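As a sketch of how that association works (the domain name, port, credentials, and BMC address here are hypothetical):

```shell
# Register a BMC endpoint for the libvirt domain "ocp-master-0" on TCP port 6230.
vbmc add ocp-master-0 --port 6230 --username admin --password secret
vbmc start ocp-master-0
vbmc list    # the domain should now be listed as running on port 6230

# Power-control the VM over IPMI, e.g. from the provisioner:
ipmitool -I lanplus -U admin -P secret -H 10.1.1.2 -p 6230 power status
```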
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=40</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=40"/>
				<updated>2024-01-23T18:56:50Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: finish vbmcd section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are some extra steps, but nothing major. More on that in [[#Network Settings]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
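Fetching the client tools might look like this; the exact mirror path for the 4.14 client tarball is an assumption, so verify it against the mirror listing first:

```shell
# Download and install the oc client on the provisioner (path assumed for 4.14;
# check https://mirror.openshift.com/pub/openshift-v4/clients/ocp/ for the real one).
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.14/openshift-client-linux.tar.gz
sudo tar -xzf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl
oc version --client
```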
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
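If it is not, you can enable it immediately and persist the setting across reboots, for example:

```shell
# Enable IPv4 forwarding now and on every boot (the drop-in file name is arbitrary).
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-ip-forward.conf
```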
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that option is off the table.&lt;br /&gt;
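Creating the two Linux bridges with NetworkManager might look like the following sketch; the interface names and addresses match the example host configuration shown next, so adjust them for your host.

```shell
# Public bridge with the physical NIC enslaved to it
# (add ipv4.gateway and ipv4.dns as appropriate for your network).
nmcli connection add type bridge con-name bridge0 ifname bridge0 \
    ipv4.method manual ipv4.addresses 172.25.35.2/24
nmcli connection add type bridge-slave con-name bridge0-enp86s0 \
    ifname enp86s0 master bridge0

# Private provisioning bridge; no slaves needed, VMs attach via libvirt.
nmcli connection add type bridge con-name provbr0 ifname provbr0 \
    ipv4.method manual ipv4.addresses 10.1.1.2/24
```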
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want your cluster to span multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface that is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is not otherwise in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave even initially, instead of returning empty output as above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions that look roughly like the following XML. Set them to autostart to save yourself some headaches.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of facilitating IPI is being able to simulate a baseboard management controller (BMC) for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small piece of Python software that does: &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In most Python environments you can install it using &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt;; just make sure &amp;lt;code&amp;gt;pip3&amp;lt;/code&amp;gt; itself is up to date first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip3 install --upgrade pip&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
$ pip3 install virtualbmc&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives you &amp;lt;code&amp;gt;/usr/local/bin/vbmcd&amp;lt;/code&amp;gt; which you can control using the following systemd unit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=vbmcd&lt;br /&gt;
[Service]&lt;br /&gt;
Type=forking&lt;br /&gt;
ExecStart=/usr/local/bin/vbmcd&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=multi-user.target&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Put the above content into &amp;lt;code&amp;gt;/etc/systemd/system/vbmcd.service&amp;lt;/code&amp;gt;, reload systemd, and enable/start the service.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo systemctl daemon-reload&lt;br /&gt;
$ sudo systemctl enable --now vbmcd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You now have the ability to associate a TCP port with a virtual machine defined on the hypervisor host, and have it simulate an IPMI BMC for that VM!&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=39</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=39"/>
				<updated>2024-01-19T22:09:30Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: down to vbmc&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with the &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* We are not talking about any firewall restrictions here - it is your responsibility to ensure traffic is not blocked.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are some extra steps, but nothing major. More on that in [[#Network Settings]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Linux network settings need to be configured to have two '''Linux''' bridges, a public and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
&lt;br /&gt;
It would be wonderful if the bridges could be Open vSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an Open vSwitch provider, so that option is off the table.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want your cluster to span multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt;-type interface that is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, with its master set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose a VXLAN ID that is not otherwise in use, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than the empty output shown earlier.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; Settings ===&lt;br /&gt;
&lt;br /&gt;
Your &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; will of course need to know about those network bridges in order to be able to attach VMs to them.&lt;br /&gt;
&lt;br /&gt;
For that, you will need two network definitions, looking a bit like the following XML. Make sure they are set to autostart to save yourself a headache.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-dumpxml external&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;external&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='bridge0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$ sudo virsh net-dumpxml provisioning&lt;br /&gt;
&amp;lt;network&amp;gt;&lt;br /&gt;
  &amp;lt;name&amp;gt;provisioning&amp;lt;/name&amp;gt;&lt;br /&gt;
  &amp;lt;uuid&amp;gt;whatever&amp;lt;/uuid&amp;gt;&lt;br /&gt;
  &amp;lt;forward mode='bridge'/&amp;gt;&lt;br /&gt;
  &amp;lt;bridge name='provbr0'/&amp;gt;&lt;br /&gt;
&amp;lt;/network&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
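&lt;br /&gt;
If you still need to create these definitions, the usual &amp;lt;code&amp;gt;virsh&amp;lt;/code&amp;gt; sequence looks roughly like the following; the file name is arbitrary, it just needs to contain XML like the above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh net-define provisioning.xml&lt;br /&gt;
$ sudo virsh net-start provisioning&lt;br /&gt;
$ sudo virsh net-autostart provisioning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;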
&lt;br /&gt;
Additionally, you want to ensure that the storage pool is big enough, but that is not directly related to the subject at hand.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo virsh pool-info default&lt;br /&gt;
Name:           default&lt;br /&gt;
UUID:           whatever&lt;br /&gt;
State:          running&lt;br /&gt;
Persistent:     yes&lt;br /&gt;
Autostart:      yes&lt;br /&gt;
Capacity:       250.92 GiB&lt;br /&gt;
Allocation:     0 GiB&lt;br /&gt;
Available:      250.92 GiB&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== VirtualBMC ===&lt;br /&gt;
&lt;br /&gt;
The most important part of IPI facilitation is to be able to simulate a baseboard management controller for your VMs. &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; obviously doesn't do this, but luckily there's a small bit of Python code that does, and it's called &amp;lt;code&amp;gt;virtualbmc&amp;lt;/code&amp;gt;.&lt;br /&gt;
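&lt;br /&gt;
As a sketch of how it is typically used (the domain name, port, and credentials here are made up for illustration): you register each VM domain as a fake BMC, start it, and then talk to it over IPMI.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ pip install virtualbmc&lt;br /&gt;
$ vbmc add ocp-master-0 --port 6230 --username admin --password secret&lt;br /&gt;
$ vbmc start ocp-master-0&lt;br /&gt;
$ ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret power status&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;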
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=38</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=38"/>
				<updated>2024-01-19T12:03:26Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: added linux network settings&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You are familiar and comfortable with NetworkManager and the &amp;lt;code&amp;gt;nmcli&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (higher might work, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Network Settings]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisor(s)&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
Beyond the logical requirement of having &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; installed and started, here are the other configuration details for the hypervisor.&lt;br /&gt;
&lt;br /&gt;
=== Network Settings ===&lt;br /&gt;
&lt;br /&gt;
The first thing you definitely need to make sure of is that IP forwarding is enabled.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sysctl net.ipv4.ip_forward&lt;br /&gt;
net.ipv4.ip_forward = 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
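&lt;br /&gt;
If it is not, a &amp;lt;code&amp;gt;sysctl&amp;lt;/code&amp;gt; drop-in is one way to enable it persistently; the file name is arbitrary.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf&lt;br /&gt;
$ sudo sysctl -p /etc/sysctl.d/99-ip-forward.conf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;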
&lt;br /&gt;
The Linux network settings need to include two '''Linux''' bridges: a public one and a private provisioning one.&lt;br /&gt;
&lt;br /&gt;
* public bridge, call it &amp;lt;code&amp;gt;bridge0&amp;lt;/code&amp;gt;, needs to have the public network interface enslaved to it&lt;br /&gt;
* private bridge, call it &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;, can be a virtual bridge since it is only needed for the provisioning network, which is supposed to be isolated and without any infrastructure services (such as DHCP, DNS, etc.)&lt;br /&gt;
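&lt;br /&gt;
With NetworkManager, creating the two bridges could look something like the following; the interface names and addresses follow my example configuration further down, so treat them as placeholders, and note that gateway and DNS settings are omitted here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ sudo nmcli con add type bridge con-name bridge0 ifname bridge0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 172.25.35.2/24&lt;br /&gt;
$ sudo nmcli con add type ethernet con-name bridge0-port0 ifname enp86s0 \&lt;br /&gt;
    master bridge0 slave-type bridge&lt;br /&gt;
$ sudo nmcli con add type bridge con-name provbr0 ifname provbr0 \&lt;br /&gt;
    ipv4.method manual ipv4.addresses 10.1.1.2/24&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;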
&lt;br /&gt;
It would be wonderful if the bridges could be OpenVSwitch ones, but unfortunately the Terraform bundled with &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; currently does not include an OpenVSwitch provider, so we can say goodbye to that.&lt;br /&gt;
&lt;br /&gt;
As an example, here is my host configuration.&lt;br /&gt;
&lt;br /&gt;
Public bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show bridge0&lt;br /&gt;
6: bridge0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 172.25.35.2/24 brd 172.25.35.255 scope global noprefixroute bridge0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::4a21:bff:fe57:e06/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ ip addr show enp86s0&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc mq master bridge0 state UP group default qlen 1000&lt;br /&gt;
    link/ether 48:21:0b:57:0e:06 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master bridge0&amp;quot;&lt;br /&gt;
2: enp86s0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master bridge0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Provisioning bridge:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ip addr show provbr0&lt;br /&gt;
5: provbr0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000&lt;br /&gt;
    link/ether ce:70:26:9c:88:a4 brd ff:ff:ff:ff:ff:ff&lt;br /&gt;
    inet 10.1.1.2/24 brd 10.1.1.255 scope global noprefixroute provbr0&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
    inet6 fe80::cc70:26ff:fe9c:88a4/64 scope link&lt;br /&gt;
       valid_lft forever preferred_lft forever&lt;br /&gt;
&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation Spanning Multiple Hypervisors ===&lt;br /&gt;
&lt;br /&gt;
If you want to have your cluster spanning multiple hypervisors, make sure there is also a VXLAN connection between all the provisioning bridges.&lt;br /&gt;
&lt;br /&gt;
You can do that by creating a &amp;lt;code&amp;gt;vxlan&amp;lt;/code&amp;gt; type interface, which is a slave connection of type &amp;lt;code&amp;gt;bridge&amp;lt;/code&amp;gt;, and the master is set to &amp;lt;code&amp;gt;provbr0&amp;lt;/code&amp;gt;. Choose any unique VXLAN ID, and make sure it is the same on all interconnected hosts.&lt;br /&gt;
&lt;br /&gt;
As an example, here is one VXLAN interface connecting hypervisor A to B.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1703164860&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.2&lt;br /&gt;
vxlan.remote:                           172.25.35.3&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And this is the corresponding VXLAN interface definition connecting host B to A.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nmcli con show  provbr0-vxlan10 | grep -E '^(connection|vxlan)' | grep -vE '(default|uuid|--|-1|unknown)'&lt;br /&gt;
connection.id:                          provbr0-vxlan10&lt;br /&gt;
connection.type:                        vxlan&lt;br /&gt;
connection.interface-name:              provbr0-vxlan10&lt;br /&gt;
connection.autoconnect:                 yes&lt;br /&gt;
connection.autoconnect-priority:        0&lt;br /&gt;
connection.timestamp:                   1697549049&lt;br /&gt;
connection.read-only:                   no&lt;br /&gt;
connection.master:                      provbr0&lt;br /&gt;
connection.slave-type:                  bridge&lt;br /&gt;
connection.gateway-ping-timeout:        0&lt;br /&gt;
vxlan.id:                               10&lt;br /&gt;
vxlan.local:                            172.25.35.3&lt;br /&gt;
vxlan.remote:                           172.25.35.2&lt;br /&gt;
vxlan.source-port-min:                  0&lt;br /&gt;
vxlan.source-port-max:                  0&lt;br /&gt;
vxlan.destination-port:                 4790&lt;br /&gt;
vxlan.tos:                              0&lt;br /&gt;
vxlan.ttl:                              0&lt;br /&gt;
vxlan.ageing:                           300&lt;br /&gt;
vxlan.limit:                            0&lt;br /&gt;
vxlan.learning:                         yes&lt;br /&gt;
vxlan.proxy:                            no&lt;br /&gt;
vxlan.rsc:                              no&lt;br /&gt;
vxlan.l2-miss:                          no&lt;br /&gt;
vxlan.l3-miss:                          no&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, &amp;lt;code&amp;gt;bridge link&amp;lt;/code&amp;gt; will of course also show the &amp;lt;code&amp;gt;provbr0-vxlan10&amp;lt;/code&amp;gt; interface as a slave, rather than the empty output shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ bridge link | grep &amp;quot;master provbr0&amp;quot;&lt;br /&gt;
7: provbr0-vxlan10: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 master provbr0 state forwarding priority 32 cost 100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=37</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=37"/>
				<updated>2024-01-19T11:36:45Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: update expectations, add outcomes and prereqs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;qemu-img&amp;lt;/code&amp;gt; tool.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
&lt;br /&gt;
== Outcomes ==&lt;br /&gt;
&lt;br /&gt;
The installation described is for a fully managed IPI running OpenShift Container Platform v4.14, initially with three master and two worker nodes.&lt;br /&gt;
&lt;br /&gt;
''At a later point, I will add a couple of steps needed to grow the cluster by one extra worker node.''&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the cluster:&lt;br /&gt;
&lt;br /&gt;
* 136 GiB RAM (32 GiB per control plane, 20 GiB per compute node), max overcommit ratio of 1.5 (make sure enough swap is available)&lt;br /&gt;
* 52 vCPUs (12 per control plane, 8 per compute node), max overcommit ratio of 1.3 (may work with more, but will slow down the installation horribly and may ultimately fail)&lt;br /&gt;
* one physical network interface that will be used for the public bridged network&lt;br /&gt;
* a physical or virtual network interface that will be used for the provisioning network bridge&lt;br /&gt;
&lt;br /&gt;
Hardware requirements for the installation client (provisioner) machine:&lt;br /&gt;
&lt;br /&gt;
* a minimum of 8 GiB RAM and 4 CPUs&lt;br /&gt;
* a network connection to both the public bridged network and the provisioning network&lt;br /&gt;
&lt;br /&gt;
Because the provisioner needs access to both networks, and the provisioning network in this guide is a virtual one, it might be best to define the provisioner as a VM with the same network interface settings as the control/compute nodes.&lt;br /&gt;
&lt;br /&gt;
If you want to spread the workloads across several hypervisor hosts, there are some extra steps, but nothing big. More on that in [[#Host Configuration]] below.&lt;br /&gt;
&lt;br /&gt;
Software artifacts needed on the provisioner host:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt;, the command line client, of the corresponding version - download from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/&lt;br /&gt;
* the &amp;lt;code&amp;gt;libvirt-client&amp;lt;/code&amp;gt; package, required for &amp;lt;code&amp;gt;openshift-baremetal-install&amp;lt;/code&amp;gt; to be able to communicate with the hypervisors&lt;br /&gt;
* &amp;lt;code&amp;gt;ipmitool&amp;lt;/code&amp;gt; or some other IPMI client&lt;br /&gt;
* a &amp;lt;code&amp;gt;pull-secret&amp;lt;/code&amp;gt; file containing authentication credentials for OpenShift Container Platform registries - download from https://console.redhat.com/openshift/&lt;br /&gt;
* an SSH keypair that can be used for accessing OpenShift nodes&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=36</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=36"/>
				<updated>2024-01-19T11:02:51Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: add expectations&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
* You are familiar and comfortable with &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; CLI and XML.&lt;br /&gt;
* You understand the different types of network interfaces on Linux and different &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt; networks.&lt;br /&gt;
* You know how OpenShift installation works and what the difference between IPI and UPI is.&lt;br /&gt;
* You know about the OpenShift Machine API and various underlying mechanisms.&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=35</id>
		<title>OCP4-IPI-libvirt</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OCP4-IPI-libvirt&amp;diff=35"/>
				<updated>2024-01-12T11:14:59Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: add structure&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
== What I Assume ==&lt;br /&gt;
&lt;br /&gt;
= OpenShift Container Platform IPI Installation Using Libvirt =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
== Host Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Virtual Machine Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installer Configuration ==&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
== Post-Install Smoke Tests ==&lt;br /&gt;
&lt;br /&gt;
= Conclusion =&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OpenShift_Admin&amp;diff=34</id>
		<title>OpenShift Admin</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OpenShift_Admin&amp;diff=34"/>
				<updated>2024-01-12T11:12:07Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: add link to ocp4-ipi-libvirt&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Installation =&lt;br /&gt;
&lt;br /&gt;
* OCP4 IPI vs UPI&lt;br /&gt;
&lt;br /&gt;
* [[OCP4-IPI-libvirt|OCP4 IPI installation using &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt;]]&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OpenShift_Admin&amp;diff=33</id>
		<title>OpenShift Admin</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OpenShift_Admin&amp;diff=33"/>
				<updated>2024-01-12T10:51:36Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: add some installation content&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Installation =&lt;br /&gt;
&lt;br /&gt;
* OCP4 IPI vs UPI&lt;br /&gt;
&lt;br /&gt;
* OCP4 IPI installation using &amp;lt;code&amp;gt;libvirt&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Main_Page&amp;diff=32</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Main_Page&amp;diff=32"/>
				<updated>2024-01-12T10:46:59Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: change java to include quarkus, link to ocp admin&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;List of topics by category:&lt;br /&gt;
&lt;br /&gt;
* Container Technology&lt;br /&gt;
* [[OpenShift Admin|OpenShift Administration]]&lt;br /&gt;
&lt;br /&gt;
* Ansible Automation&lt;br /&gt;
&lt;br /&gt;
* [[Java Devel|Java SE/EE/Quarkus]]&lt;br /&gt;
* [[OpenShift Devel|OpenShift Application Development]]&lt;br /&gt;
&lt;br /&gt;
* macOS Tips &amp;amp; Tricks&lt;br /&gt;
&lt;br /&gt;
* Linux System Administration&lt;br /&gt;
* JBoss EAP Administration&lt;br /&gt;
&lt;br /&gt;
Links will be added as time permits.&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Main_Page&amp;diff=31</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Main_Page&amp;diff=31"/>
				<updated>2024-01-12T10:44:26Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: reorg, add ansible&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;List of topics by category:&lt;br /&gt;
&lt;br /&gt;
* Container Technology&lt;br /&gt;
* OpenShift Administration&lt;br /&gt;
&lt;br /&gt;
* Ansible Automation&lt;br /&gt;
&lt;br /&gt;
* [[Java Devel|Java SE/EE Development]]&lt;br /&gt;
* [[OpenShift Devel|OpenShift Application Development &amp;amp; SDLC]]&lt;br /&gt;
&lt;br /&gt;
* macOS Tips &amp;amp; Tricks&lt;br /&gt;
&lt;br /&gt;
* Linux System Administration&lt;br /&gt;
* JBoss EAP Administration&lt;br /&gt;
&lt;br /&gt;
Links will be added as time permits.&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=New_Features_in_Java_12_through_15&amp;diff=30</id>
		<title>New Features in Java 12 through 15</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=New_Features_in_Java_12_through_15&amp;diff=30"/>
				<updated>2021-01-18T12:04:35Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* What I assume */ change reqs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK versions just keep coming, and since my last article about [[Changes in Java 10 and 11|new features in Java v10 and v11]], there have been four new major Java releases with a number of interesting (and useful) new features.&lt;br /&gt;
&lt;br /&gt;
= What I assume =&lt;br /&gt;
&lt;br /&gt;
I assume that you:&lt;br /&gt;
&lt;br /&gt;
* are fluent in the Java language (according to the v11 language specification)&lt;br /&gt;
* are aware of how Java garbage collectors work and why we need them&lt;br /&gt;
* have a clue about what Linux software packages are and what they do&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Switch Expressions (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/325&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v12: preview&lt;br /&gt;
* v13: preview&lt;br /&gt;
* v14: standard&lt;br /&gt;
&lt;br /&gt;
== Text Blocks (v13) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/355&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v13: preview&lt;br /&gt;
* v14: second preview&lt;br /&gt;
* v15: standard&lt;br /&gt;
&lt;br /&gt;
== Pattern Matching for instanceof (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/305&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: preview&lt;br /&gt;
* v15: second preview&lt;br /&gt;
&lt;br /&gt;
== Records (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/359&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: preview&lt;br /&gt;
* v15: second preview&lt;br /&gt;
&lt;br /&gt;
== Sealed Classes (v15) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/360&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v15: preview&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Shenandoah Garbage Collector (v12) ==&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v12: experimental&lt;br /&gt;
* v15: GA&lt;br /&gt;
&lt;br /&gt;
== Abortable Mixed Collections for G1 (v12) ==&lt;br /&gt;
&lt;br /&gt;
https://openjdk.java.net/jeps/344&lt;br /&gt;
&lt;br /&gt;
Make G1 mixed collections abortable if they might exceed the pause target.&lt;br /&gt;
&lt;br /&gt;
== Promptly Return Unused Committed Memory from G1 (v12) ==&lt;br /&gt;
&lt;br /&gt;
https://openjdk.java.net/jeps/346&lt;br /&gt;
&lt;br /&gt;
G1 only returns memory from the Java heap at either a full GC or during a concurrent cycle. Since G1 tries hard to completely avoid full GCs, and only triggers a concurrent cycle based on Java heap occupancy and allocation activity, it will not return Java heap memory in many cases unless forced to do so externally.&lt;br /&gt;
&lt;br /&gt;
This feature enhances the G1 garbage collector to automatically return Java heap memory to the operating system when idle.&lt;br /&gt;
&lt;br /&gt;
== Return Unused Memory to OS from ZGC (v13) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/351&lt;br /&gt;
&lt;br /&gt;
== Bye-bye, CMS (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/363&lt;br /&gt;
&lt;br /&gt;
== Hidden Classes (v15) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/371&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== New OS-Native Packaging Tool for Self-Contained Apps (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/343&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: incubator&lt;br /&gt;
&lt;br /&gt;
== Better NullPointerException Messages (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/358&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=New_Features_in_Java_12_through_15&amp;diff=29</id>
		<title>New Features in Java 12 through 15</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=New_Features_in_Java_12_through_15&amp;diff=29"/>
				<updated>2021-01-18T11:25:34Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Typo.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK versions just keep coming, and since my last article about [[Changes in Java 10 and 11|new features in Java v10 and v11]], there have been four new major Java releases with a number of interesting (and useful) new features.&lt;br /&gt;
&lt;br /&gt;
= What I assume =&lt;br /&gt;
&lt;br /&gt;
I assume that you:&lt;br /&gt;
&lt;br /&gt;
* are fluent in the Java language (according to the v11 language specification)&lt;br /&gt;
* are aware of how Java garbage collectors work and why we need them&lt;br /&gt;
* are familiar with what software profiling and auditing are and how they work&lt;br /&gt;
* have a clue about Linux scripting&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Switch Expressions (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/325&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v12: preview&lt;br /&gt;
* v13: preview&lt;br /&gt;
* v14: standard&lt;br /&gt;
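&lt;br /&gt;
The JEP is only linked above, so here is a minimal sketch of my own (the class and method names are made up) showing the arrow labels, multi-label cases, and the switch yielding a value:&lt;br /&gt;
&lt;br /&gt;
```java
public class SwitchDemo {
    // Switch as an expression: each arrow label yields a value and never falls through
    static int daysIn(String month) {
        return switch (month) {
            case "Apr", "Jun", "Sep", "Nov" -> 30;
            case "Feb" -> 28; // ignoring leap years for brevity
            default -> 31;
        };
    }

    public static void main(String[] args) {
        System.out.println(daysIn("Feb")); // prints 28
    }
}
```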
&lt;br /&gt;
== Text Blocks (v13) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/355&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v13: preview&lt;br /&gt;
* v14: second preview&lt;br /&gt;
* v15: standard&lt;br /&gt;
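&lt;br /&gt;
A small sketch of my own (the content is made up) of the triple-quote syntax; note that the compiler strips the incidental indentation up to the closing delimiter:&lt;br /&gt;
&lt;br /&gt;
```java
public class TextBlockDemo {
    // A text block: a multi-line string literal delimited by """,
    // with incidental leading indentation stripped by the compiler
    static final String JSON = """
            {
                "name": "demo"
            }
            """;

    public static void main(String[] args) {
        System.out.println(JSON.contains("\"name\": \"demo\"")); // prints true
    }
}
```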
&lt;br /&gt;
== Pattern Matching for instanceof (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/305&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: preview&lt;br /&gt;
* v15: second preview&lt;br /&gt;
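&lt;br /&gt;
A quick illustration of my own (names made up) of the type-test pattern; the binding removes the usual test-then-cast boilerplate:&lt;br /&gt;
&lt;br /&gt;
```java
public class InstanceofDemo {
    // The pattern binds 's' directly; 's' is in scope
    // only where the instanceof test has succeeded
    static int lengthOf(Object o) {
        if (o instanceof String s) {
            return s.length();
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(lengthOf("hello")); // prints 5
    }
}
```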
&lt;br /&gt;
== Records (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/359&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: preview&lt;br /&gt;
* v15: second preview&lt;br /&gt;
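&lt;br /&gt;
A minimal record of my own invention, just to show the shape of the feature:&lt;br /&gt;
&lt;br /&gt;
```java
public class RecordDemo {
    // A record declares an immutable data carrier; the compiler derives the
    // canonical constructor, accessors, equals, hashCode, and toString
    record Point(int x, int y) { }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x() + p.y());             // prints 7
        System.out.println(p.equals(new Point(3, 4))); // prints true
    }
}
```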
&lt;br /&gt;
== Sealed Classes (v15) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/360&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v15: preview&lt;br /&gt;
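&lt;br /&gt;
A sketch of my own (names made up) in the form that previewed in v15, combining sealed types with records:&lt;br /&gt;
&lt;br /&gt;
```java
public class SealedDemo {
    // 'sealed' restricts implementations to the listed types,
    // so the hierarchy below is closed and exhaustively checkable
    sealed interface Shape permits Circle, Square { }
    record Circle(double radius) implements Shape { }
    record Square(double side) implements Shape { }

    static double area(Shape s) {
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Square q) return q.side() * q.side();
        throw new IllegalStateException("unreachable: the hierarchy is sealed");
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3))); // prints 9.0
    }
}
```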
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Shenandoah Garbage Collector (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/189&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v12: experimental&lt;br /&gt;
* v15: GA&lt;br /&gt;
&lt;br /&gt;
== Abortable Mixed Collections for G1 (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/344&lt;br /&gt;
&lt;br /&gt;
Make G1 mixed collections abortable if they might exceed the pause target.&lt;br /&gt;
&lt;br /&gt;
== Promptly Return Unused Committed Memory from G1 (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/346&lt;br /&gt;
&lt;br /&gt;
G1 only returns memory from the Java heap at either a full GC or during a concurrent cycle. Since G1 tries hard to completely avoid full GCs, and only triggers a concurrent cycle based on Java heap occupancy and allocation activity, it will not return Java heap memory in many cases unless forced to do so externally.&lt;br /&gt;
&lt;br /&gt;
This feature enhances the G1 garbage collector to automatically return Java heap memory to the operating system when idle.&lt;br /&gt;
&lt;br /&gt;
== Return Unused Memory to OS from ZGC (v13) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/351&lt;br /&gt;
&lt;br /&gt;
== Bye-bye, CMS (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/363&lt;br /&gt;
&lt;br /&gt;
== Hidden Classes (v15) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/371&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== New OS-Native Packaging Tool for Self-Contained Apps (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/343&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: incubator&lt;br /&gt;
&lt;br /&gt;
== Better NullPointerException Messages (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/358&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=New_Features_in_Java_12_through_15&amp;diff=28</id>
		<title>New Features in Java 12 through 15</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=New_Features_in_Java_12_through_15&amp;diff=28"/>
				<updated>2021-01-18T11:25:00Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Placeholders with links and metadata.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK versions just keep coming, and since my last article about [[Changes in Java 10 and 11|new features in Java v10 and v11]], there have been four new major Java releases with a number of interesting (and useful) new features.&lt;br /&gt;
&lt;br /&gt;
= What I assume =&lt;br /&gt;
&lt;br /&gt;
I assume that you:&lt;br /&gt;
&lt;br /&gt;
* are fluent in the Java language (according to the v11 language specification)&lt;br /&gt;
* are aware of how Java garbage collectors work and why we need them&lt;br /&gt;
* are familiar with what software profiling and auditing are and how they work&lt;br /&gt;
* have a clue about Linux scripting&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Switch Expressions (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/325&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v12: preview&lt;br /&gt;
* v13: preview&lt;br /&gt;
* v14: standard&lt;br /&gt;
&lt;br /&gt;
== Text Blocks (v13) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/355&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v13: preview&lt;br /&gt;
* v14: second preview&lt;br /&gt;
* v15: standard&lt;br /&gt;
&lt;br /&gt;
== Pattern Matching for instanceof (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/305&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: preview&lt;br /&gt;
* v15: second preview&lt;br /&gt;
&lt;br /&gt;
== Records (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/359&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: preview&lt;br /&gt;
* v15: second preview&lt;br /&gt;
&lt;br /&gt;
== Sealed Classes (v15) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/360&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v15: preview&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Shenandoah Garbage Collector (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/189&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v12: experimental&lt;br /&gt;
* v15: GA&lt;br /&gt;
&lt;br /&gt;
== Abortable Mixed Collections for G1 (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/344&lt;br /&gt;
&lt;br /&gt;
Make G1 mixed collections abortable if they might exceed the pause target.&lt;br /&gt;
&lt;br /&gt;
== Promptly Return Unused Committed Memory from G1 (v12) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/346&lt;br /&gt;
&lt;br /&gt;
G1 only returns memory from the Java heap at either a full GC or during a concurrent cycle. Since G1 tries hard to completely avoid full GCs, and only triggers a concurrent cycle based on Java heap occupancy and allocation activity, it will not return Java heap memory in many cases unless forced to do so externally.&lt;br /&gt;
&lt;br /&gt;
This feature enhances the G1 garbage collector to automatically return Java heap memory to the operating system when idle.&lt;br /&gt;
&lt;br /&gt;
== Return Unused Memory to OS from ZGC (v13) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/351&lt;br /&gt;
&lt;br /&gt;
== Bye-bye, CMS (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/363&lt;br /&gt;
&lt;br /&gt;
== Hidden Classes (v15) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/371&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== New OS-Native Packaging Tool for Self-Contained Apps (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/343&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v14: incubator&lt;br /&gt;
&lt;br /&gt;
== Better NullPointerException Messages (v14) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/358&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=27</id>
		<title>Changes in Java 10 and 11</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=27"/>
				<updated>2021-01-18T11:15:53Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* ZGC (v11) */ added v14 news&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK 11 was released almost two months ago (as of this writing), yet it seems as if hardly anybody noticed.&lt;br /&gt;
&lt;br /&gt;
After the roar and thunder of Java 9's JPMS (or Jigsaw, as some may know it) it's gone awfully quiet on the Java front.&lt;br /&gt;
&lt;br /&gt;
But that's far from saying nothing is happening - after the [http://openjdk.java.net/jeps/261 Java Module System] (arguably the single biggest feature in [https://docs.oracle.com/javase/9/whatsnew/toc.htm Java 9]), the world moves on.&lt;br /&gt;
&lt;br /&gt;
While version 9 was mostly a clean-up release, deprecating the much-hated Java Plugin and the Applet API as well as the CMS GC and every garbage collector combination involving it, several rather big improvements and additions were made to both the language and the virtual machine in the two subsequent versions, [https://openjdk.java.net/projects/jdk/10/ Java 10] and [https://openjdk.java.net/projects/jdk/11/ Java 11].&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
I assume that you:&lt;br /&gt;
&lt;br /&gt;
* are fluent in the Java language (according to the v8 language specification)&lt;br /&gt;
* are aware of how Java garbage collectors work and why we need them&lt;br /&gt;
* are familiar with what software profiling and auditing are and how they work&lt;br /&gt;
* have a clue about Linux scripting&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Type Inference for Local Variables (v10) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/286&lt;br /&gt;
&lt;br /&gt;
Among the biggest additions to Java 10 is its [https://openjdk.java.net/jeps/286 ability to figure out the type of a variable] that has an initialiser, that is to say, perform LHS (left-hand-side) type inference.&lt;br /&gt;
&lt;br /&gt;
RHS type inference should be quite familiar by now, as it was by far the biggest change in Java 8 - [https://openjdk.java.net/projects/lambda/ Lambda Expressions for the Java Language] let a programmer save space and time and improve readability, making heavy use of type inference all the time:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent('''event -&amp;gt; new EventProcessor().process(event)''');&lt;br /&gt;
&lt;br /&gt;
Actually, the above expression can be simplified even further:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent('''EventProcessor::process''');&lt;br /&gt;
&lt;br /&gt;
Notice how parameters in lambda expressions lack type declarations. In fact, they completely lack ''any'' declarations! You can [[Java 8 Lambda Expressions|read more about this in a small write-up of mine]].&lt;br /&gt;
&lt;br /&gt;
In Java 10, this inference is extended even further. For cases when the compiler can infer the type of a variable, its declaration can be simplified to just the newly introduced ''reserved type name'' &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 '''var''' x = new Integer(5);&lt;br /&gt;
 '''var''' y = MyClass.Factory.getInstance();&lt;br /&gt;
 &lt;br /&gt;
 if (x instanceof Integer) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
 if (y instanceof MyClass) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
There are, of course, some limitations to this new syntax, such as:&lt;br /&gt;
&lt;br /&gt;
* declarations without initialisers (which would require some distant action to infer the type, and that could violate the strongly-typed nature of the language because of multiple possible ''different'' initialisers at different execution points),&lt;br /&gt;
* &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; assignments (note that &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; ''is'' a type, but it is fairly useless in and of itself),&lt;br /&gt;
* [http://www.devcodenote.com/2015/04/variable-capture-in-java.html capture variables] (which need to be final, so they cannot be inferred),&lt;br /&gt;
* [http://iteratrlearning.com/java/generics/2016/05/12/intersection-types-java-generics.html intersection types] (which cannot be reliably inferred),&lt;br /&gt;
&lt;br /&gt;
or generally whenever an assignment type is not denotable (such as with inline arrays, method references, and even &amp;lt;code&amp;gt;c.getClass()&amp;lt;/code&amp;gt;).&lt;br /&gt;
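&lt;br /&gt;
To make the list above concrete, here is a small sketch of my own (names made up) contrasting a legal &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt; declaration with a few of the rejected forms:&lt;br /&gt;
&lt;br /&gt;
```java
import java.util.ArrayList;

public class VarLimits {
    static String demo() {
        var list = new ArrayList<String>(); // OK: type inferred from the initialiser
        // var a;             // error: cannot infer a type without an initialiser
        // var b = null;      // error: null is not a denotable type
        // var c = {1, 2, 3}; // error: an array initialiser needs an explicit type
        list.add("inferred");
        return list.get(0);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints inferred
    }
}
```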
&lt;br /&gt;
I may write more about those limitations once I've used LHS type inference more; until then, feel free to have a look at [https://openjdk.java.net/jeps/286 the JEP defining Java type inference] and its many references.&lt;br /&gt;
&lt;br /&gt;
== Local Variable Syntax for Lambda Parameters (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/323&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
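&lt;br /&gt;
Until the section is filled in, a one-line sketch of my own: JEP 323 lets &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt; stand in for implicitly typed lambda parameters, which mainly gives annotations somewhere to go while the types stay inferred (either all parameters use &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt; or none do):&lt;br /&gt;
&lt;br /&gt;
```java
import java.util.function.BinaryOperator;

public class LambdaVarDemo {
    // 'var' on lambda parameters keeps the types inferred while
    // providing a declaration that annotations can be attached to
    static final BinaryOperator<Integer> SUM = (var a, var b) -> a + b;

    public static void main(String[] args) {
        System.out.println(SUM.apply(2, 3)); // prints 5
    }
}
```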
&lt;br /&gt;
== New Nest-Based Field/Method Access Control (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/181&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
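&lt;br /&gt;
A placeholder sketch of my own (names made up): nested classes are now ''nestmates'' of their enclosing class, so private access between them no longer needs compiler-generated bridge methods, and the nest is visible through reflection:&lt;br /&gt;
&lt;br /&gt;
```java
public class NestDemo {
    private static final String SECRET = "nestmate";

    static class Inner {
        // Since v11 this private access compiles to a direct nestmate
        // access instead of going through a synthetic accessor method
        String readSecret() {
            return SECRET;
        }
    }

    public static void main(String[] args) {
        System.out.println(new Inner().readSecret());                  // prints nestmate
        System.out.println(Inner.class.getNestHost().getSimpleName()); // prints NestDemo
    }
}
```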
&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Application Class Data Sharing (v10) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/310&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== No-Op Garbage Collector (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/318&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== ZGC (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/333&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v11: experimental (Linux-only)&lt;br /&gt;
* v14: experimental ports for macOS and Windows&lt;br /&gt;
* v15: GA&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== Flight Recorder (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/328&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
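&lt;br /&gt;
A placeholder sketch of my own (event and field names made up): besides the command-line tooling, Flight Recorder exposes a Java API in the &amp;lt;code&amp;gt;jdk.jfr&amp;lt;/code&amp;gt; module for defining custom events. Committing is cheap and effectively a no-op unless a recording is active:&lt;br /&gt;
&lt;br /&gt;
```java
import jdk.jfr.Event;
import jdk.jfr.Label;

public class JfrDemo {
    // A custom JFR event; it is only persisted when a recording is
    // running (e.g. one started with -XX:StartFlightRecording)
    @Label("Hello Event")
    static class HelloEvent extends Event {
        @Label("Message")
        String message;
    }

    static boolean emit(String msg) {
        HelloEvent e = new HelloEvent();
        e.message = msg;
        e.begin();
        e.commit();
        return true;
    }

    public static void main(String[] args) {
        System.out.println(emit("hello, JFR")); // prints true
    }
}
```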
&lt;br /&gt;
== Low-Impact Heap Profiling (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/331&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Launching Single-File Programs from Source (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/330&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=26</id>
		<title>Changes in Java 10 and 11</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=26"/>
				<updated>2021-01-18T11:00:48Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Added JEPs.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK 11 was released almost two months ago (as of this writing), yet it seems as if hardly anybody noticed.&lt;br /&gt;
&lt;br /&gt;
After the roar and thunder of Java 9's JPMS (or Jigsaw, as some may know it) it's gone awfully quiet on the Java front.&lt;br /&gt;
&lt;br /&gt;
But that's far from saying nothing is happening - after the [http://openjdk.java.net/jeps/261 Java Module System] (arguably the single biggest feature in [https://docs.oracle.com/javase/9/whatsnew/toc.htm Java 9]), the world moves on.&lt;br /&gt;
&lt;br /&gt;
While version 9 was mostly a clean-up release, deprecating the much-hated Java Plugin and the Applet API as well as the CMS GC and every garbage collector combination involving it, several rather big improvements and additions were made to both the language and the virtual machine in the two subsequent versions, [https://openjdk.java.net/projects/jdk/10/ Java 10] and [https://openjdk.java.net/projects/jdk/11/ Java 11].&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
I assume that you:&lt;br /&gt;
&lt;br /&gt;
* are fluent in the Java language (according to the v8 language specification)&lt;br /&gt;
* are aware of how Java garbage collectors work and why we need them&lt;br /&gt;
* are familiar with what software profiling and auditing are and how they work&lt;br /&gt;
* have a clue about Linux scripting&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Type Inference for Local Variables (v10) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/286&lt;br /&gt;
&lt;br /&gt;
Among the biggest additions to Java 10 is its [https://openjdk.java.net/jeps/286 ability to figure out the type of a variable] that has an initialiser, that is to say, perform LHS (left-hand-side) type inference.&lt;br /&gt;
&lt;br /&gt;
RHS type inference should be quite familiar by now, as it was by far the biggest change in Java 8 - [https://openjdk.java.net/projects/lambda/ Lambda Expressions for the Java Language] let a programmer save space and time and improve readability, making heavy use of type inference all the time:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent('''event -&amp;gt; new EventProcessor().process(event)''');&lt;br /&gt;
&lt;br /&gt;
Actually, the above expression can be simplified even further:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent('''EventProcessor::process''');&lt;br /&gt;
&lt;br /&gt;
Notice how parameters in lambda expressions lack type declarations. In fact, they completely lack ''any'' declarations! You can [[Java 8 Lambda Expressions|read more about this in a small write-up of mine]].&lt;br /&gt;
&lt;br /&gt;
In Java 10, this inference is extended even further. For cases when the compiler can infer the type of a variable, its declaration can be simplified to just the newly introduced ''reserved type name'' &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 '''var''' x = new Integer(5);&lt;br /&gt;
 '''var''' y = MyClass.Factory.getInstance();&lt;br /&gt;
 &lt;br /&gt;
 if (x instanceof Integer) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
 if (y instanceof MyClass) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
There are, of course, some limitations to this new syntax, such as:&lt;br /&gt;
&lt;br /&gt;
* declarations without initialisers (which would require some distant action to infer the type, and that could violate the strongly-typed nature of the language because of multiple possible ''different'' initialisers at different execution points),&lt;br /&gt;
* &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; assignments (note that &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; ''is'' a type, but it is fairly useless in and of itself),&lt;br /&gt;
* [http://www.devcodenote.com/2015/04/variable-capture-in-java.html capture variables] (which need to be final, so they cannot be inferred),&lt;br /&gt;
* [http://iteratrlearning.com/java/generics/2016/05/12/intersection-types-java-generics.html intersection types] (which cannot be reliably inferred),&lt;br /&gt;
&lt;br /&gt;
or generally whenever an assignment type is not denotable (such as with inline arrays, method references, and even &amp;lt;code&amp;gt;c.getClass()&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
I may write more about those limitations once I've used LHS type inference more; until then, feel free to have a look at [https://openjdk.java.net/jeps/286 the JEP defining Java type inference] and its many references.&lt;br /&gt;
&lt;br /&gt;
== Local Variable Syntax for Lambda Parameters (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/323&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== New Nest-Based Field/Method Access Control (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/181&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Application Class Data Sharing (v10) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/310&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== No-Op Garbage Collector (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/318&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== ZGC (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/333&lt;br /&gt;
&lt;br /&gt;
Lifecycle phases:&lt;br /&gt;
&lt;br /&gt;
* v11: experimental&lt;br /&gt;
* v15: GA&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== Flight Recorder (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/328&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Low-Impact Heap Profiling (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/331&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Launching Single-File Programs from Source (v11) ==&lt;br /&gt;
&lt;br /&gt;
* JEP: https://openjdk.java.net/jeps/330&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Java_Devel&amp;diff=25</id>
		<title>Java Devel</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Java_Devel&amp;diff=25"/>
				<updated>2021-01-18T10:45:46Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Java SE */ Added v12-v15&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Java SE =&lt;br /&gt;
&lt;br /&gt;
* [[Java 8 Lambda Expressions]]&lt;br /&gt;
* [[Changes in Java 10 and 11]]&lt;br /&gt;
* [[New Features in Java 12 through 15]]&lt;br /&gt;
&lt;br /&gt;
= Java EE =&lt;br /&gt;
&lt;br /&gt;
Something coming here.&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Java_8_Lambda_Expressions&amp;diff=24</id>
		<title>Java 8 Lambda Expressions</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Java_8_Lambda_Expressions&amp;diff=24"/>
				<updated>2018-12-07T17:54:39Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Added advanced.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Lambda Expressions for the Java language are meant to reduce bracket noise, thus saving time and improving readability of Java code.&lt;br /&gt;
&lt;br /&gt;
However, they can only be used under certain conditions, and in this article I want to expand on what these conditions are.&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
To best focus on the content of this article, I assume the reader is:&lt;br /&gt;
&lt;br /&gt;
* fluent in the Java language (according to the v7 language specification)&lt;br /&gt;
&lt;br /&gt;
= The Anatomy of a Lambda Expression =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Java 8 introduced ''functional interfaces'', a special kind of interface that declares exactly one abstract method.&lt;br /&gt;
&lt;br /&gt;
The key excerpts from the ''[https://www.jcp.org/en/jsr/detail?id=337 Java 8 Language Specification]'' that Lambda Expressions build upon are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;In addition to the usual process of creating an interface instance by declaring and instantiating a class (§15.9), instances of functional interfaces can be created with method reference expressions and lambda expressions (§15.13, §15.27).&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;The function type of a functional interface I is a method type (§8.2) that can be used to override (§8.4.8) the abstract method(s) of I.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above two excerpts define when lambda expressions can be used, and they give the rules for deriving the signature of such an expression from the function type that the interface declares.&lt;br /&gt;
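&lt;br /&gt;
A small sketch of my own (the interface and its names are made up) showing a functional interface and the two creation routes the spec describes, a lambda expression and a method reference:&lt;br /&gt;
&lt;br /&gt;
```java
public class FunctionalDemo {
    // Exactly one abstract method makes this a functional interface;
    // the annotation merely asks the compiler to verify that property
    @FunctionalInterface
    interface StringTransform {
        String apply(String input);
    }

    static final StringTransform UPPER = s -> s.toUpperCase(); // lambda expression
    static final StringTransform TRIM = String::trim;          // method reference

    public static void main(String[] args) {
        System.out.println(UPPER.apply("abc")); // prints ABC
        System.out.println(TRIM.apply(" x "));  // prints x
    }
}
```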
&lt;br /&gt;
== Building on Foundation ==&lt;br /&gt;
&lt;br /&gt;
TROLOLO.&lt;br /&gt;
&lt;br /&gt;
== Advanced Shit ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;When the parameter types of a lambda expression are inferred, the same lambda body can be interpreted in different ways, depending on the context in which it appears.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
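&lt;br /&gt;
To illustrate the quote above with a sketch of my own: the identical body &amp;lt;code&amp;gt;() -&amp;gt; &amp;quot;done&amp;quot;&amp;lt;/code&amp;gt; is typed differently depending on its target context:&lt;br /&gt;
&lt;br /&gt;
```java
import java.util.concurrent.Callable;
import java.util.function.Supplier;

public class ContextDemo {
    // The same lambda body satisfies two different functional interfaces;
    // the target type of each assignment decides the interpretation
    static final Supplier<String> AS_SUPPLIER = () -> "done";
    static final Callable<String> AS_CALLABLE = () -> "done";

    public static void main(String[] args) throws Exception {
        System.out.println(AS_SUPPLIER.get());  // prints done
        System.out.println(AS_CALLABLE.call()); // prints done
    }
}
```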
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
== Parameterless Void Method ==&lt;br /&gt;
&lt;br /&gt;
Consider interface &amp;lt;code&amp;gt;Runnable&amp;lt;/code&amp;gt;, which requires you to supply a &amp;lt;code&amp;gt;void run()&amp;lt;/code&amp;gt; method in an implementation:&lt;br /&gt;
&lt;br /&gt;
 public interface Runnable {&lt;br /&gt;
     void run();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
This is a common way of starting &amp;lt;code&amp;gt;Thread&amp;lt;/code&amp;gt;s:&lt;br /&gt;
&lt;br /&gt;
 public void startThread() {&lt;br /&gt;
     Runnable r = new Runnable() {&lt;br /&gt;
         public void run() {&lt;br /&gt;
             new JobRunner().doJob();&lt;br /&gt;
         }&lt;br /&gt;
     };&lt;br /&gt;
     Thread t = new Thread(r);&lt;br /&gt;
     t.start();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Instead of going the long way, an anonymous inner class can be used directly as the constructor parameter:&lt;br /&gt;
&lt;br /&gt;
 public void startThread() {&lt;br /&gt;
     Thread t = new Thread(new Runnable() {&lt;br /&gt;
         public void run() {&lt;br /&gt;
             new JobRunner().doJob();&lt;br /&gt;
         }&lt;br /&gt;
     });&lt;br /&gt;
     t.start();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
One can now use a lambda expression to reduce the amount of boilerplate:&lt;br /&gt;
&lt;br /&gt;
 public class MyCode {&lt;br /&gt;
     // ...&lt;br /&gt;
     public void startThread() {&lt;br /&gt;
         Runnable r = () -&amp;gt; { new JobRunner().doJob(); };&lt;br /&gt;
         Thread t = new Thread(r);&lt;br /&gt;
         t.start();&lt;br /&gt;
     }&lt;br /&gt;
     // ...&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Or, even shorter:&lt;br /&gt;
&lt;br /&gt;
 public class MyCode {&lt;br /&gt;
     // ...&lt;br /&gt;
     public void startThread() {&lt;br /&gt;
         Thread t = new Thread(() -&amp;gt; { new JobRunner().doJob(); });&lt;br /&gt;
         t.start();&lt;br /&gt;
     }&lt;br /&gt;
     // ...&lt;br /&gt;
 }&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Java_8_Lambda_Expressions&amp;diff=23</id>
		<title>Java 8 Lambda Expressions</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Java_8_Lambda_Expressions&amp;diff=23"/>
				<updated>2018-12-07T13:51:13Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Shuffled bits around.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Lambda Expressions for the Java language are meant to reduce bracket noise, thus saving time and improving readability of Java code.&lt;br /&gt;
&lt;br /&gt;
However, they can only be used under certain conditions, and in this article I want to expand on what these conditions are.&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
To best focus on the content of this article, I assume the reader is:&lt;br /&gt;
&lt;br /&gt;
* fluent in the Java language (according to the v7 language specification)&lt;br /&gt;
&lt;br /&gt;
= The Anatomy of a Lambda Expression =&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
Java 8 introduced ''functional interfaces'', a special kind of interface that declares exactly one abstract method.&lt;br /&gt;
&lt;br /&gt;
The key excerpts from the ''[https://www.jcp.org/en/jsr/detail?id=337 Java 8 Language Specification]'' that Lambda Expressions build upon are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;In addition to the usual process of creating an interface instance by declaring and instantiating a class (§15.9), instances of functional interfaces can be created with method reference expressions and lambda expressions (§15.13, §15.27).&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;The function type of a functional interface I is a method type (§8.2) that can be used to override (§8.4.8) the abstract method(s) of I.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above two excerpts define when lambda expressions can be used, and they give the rules for deriving the signature of such an expression from the function type that the interface declares.&lt;br /&gt;
&lt;br /&gt;
== Building on Foundations ==&lt;br /&gt;
&lt;br /&gt;
TROLOLO.&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
== Parameterless Void Method ==&lt;br /&gt;
&lt;br /&gt;
Consider interface &amp;lt;code&amp;gt;Runnable&amp;lt;/code&amp;gt;, which requires you to supply a &amp;lt;code&amp;gt;void run()&amp;lt;/code&amp;gt; method in an implementation:&lt;br /&gt;
&lt;br /&gt;
 public interface Runnable {&lt;br /&gt;
     void run();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
This is a common way of starting &amp;lt;code&amp;gt;Thread&amp;lt;/code&amp;gt;s:&lt;br /&gt;
&lt;br /&gt;
 public void startThread() {&lt;br /&gt;
     Runnable r = new Runnable() {&lt;br /&gt;
         public void run() {&lt;br /&gt;
             new JobRunner().doJob();&lt;br /&gt;
         }&lt;br /&gt;
     };&lt;br /&gt;
     Thread t = new Thread(r);&lt;br /&gt;
     t.start();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Instead of going the long way, an anonymous inner class can be used directly as the constructor parameter:&lt;br /&gt;
&lt;br /&gt;
 public void startThread() {&lt;br /&gt;
     Thread t = new Thread(new Runnable() {&lt;br /&gt;
         public void run() {&lt;br /&gt;
             new JobRunner().doJob();&lt;br /&gt;
         }&lt;br /&gt;
     });&lt;br /&gt;
     t.start();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
One can now use a lambda expression to reduce the amount of boilerplate:&lt;br /&gt;
&lt;br /&gt;
 public class MyCode {&lt;br /&gt;
     // ...&lt;br /&gt;
     public void startThread() {&lt;br /&gt;
         Runnable r = () -&amp;gt; { new JobRunner().doJob(); };&lt;br /&gt;
         Thread t = new Thread(r);&lt;br /&gt;
         t.start();&lt;br /&gt;
     }&lt;br /&gt;
     // ...&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Or, even shorter:&lt;br /&gt;
&lt;br /&gt;
 public class MyCode {&lt;br /&gt;
     // ...&lt;br /&gt;
     public void startThread() {&lt;br /&gt;
         Thread t = new Thread(() -&amp;gt; { new JobRunner().doJob(); });&lt;br /&gt;
         t.start();&lt;br /&gt;
     }&lt;br /&gt;
     // ...&lt;br /&gt;
 }&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Java_8_Lambda_Expressions&amp;diff=22</id>
		<title>Java 8 Lambda Expressions</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Java_8_Lambda_Expressions&amp;diff=22"/>
				<updated>2018-12-07T13:13:55Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Added assumptions and intro.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
Java 8 introduced ''functional interfaces'', a special kind of interface that declares exactly one abstract method.&lt;br /&gt;
&lt;br /&gt;
The key excerpt from the ''[https://www.jcp.org/en/jsr/detail?id=337 Java 8 Language Specification]'' related to the subject of this article seems to be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;In addition to the usual process of creating an interface instance by declaring and instantiating a class (§15.9), instances of functional interfaces can be created with method reference expressions and lambda expressions (§15.13, §15.27).&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
To best focus on the content of this article, I assume the reader is:&lt;br /&gt;
&lt;br /&gt;
* fluent in the Java language (according to the v7 language specification)&lt;br /&gt;
&lt;br /&gt;
= Examples =&lt;br /&gt;
&lt;br /&gt;
== Parameterless Void Method ==&lt;br /&gt;
&lt;br /&gt;
Consider interface &amp;lt;code&amp;gt;Runnable&amp;lt;/code&amp;gt;, which requires you to supply a &amp;lt;code&amp;gt;void run()&amp;lt;/code&amp;gt; method in an implementation:&lt;br /&gt;
&lt;br /&gt;
 public interface Runnable {&lt;br /&gt;
     void run();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
This is a common way of starting &amp;lt;code&amp;gt;Thread&amp;lt;/code&amp;gt;s:&lt;br /&gt;
&lt;br /&gt;
 public void startThread() {&lt;br /&gt;
     Runnable r = new Runnable() {&lt;br /&gt;
         public void run() {&lt;br /&gt;
             new JobRunner().doJob();&lt;br /&gt;
         }&lt;br /&gt;
     };&lt;br /&gt;
     Thread t = new Thread(r);&lt;br /&gt;
     t.start();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Instead of going the long way, an anonymous inner class can be used directly as the constructor parameter:&lt;br /&gt;
&lt;br /&gt;
 public void startThread() {&lt;br /&gt;
     Thread t = new Thread(new Runnable() {&lt;br /&gt;
         public void run() {&lt;br /&gt;
             new JobRunner().doJob();&lt;br /&gt;
         }&lt;br /&gt;
     });&lt;br /&gt;
     t.start();&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
One can now use a lambda expression to reduce the amount of boilerplate:&lt;br /&gt;
&lt;br /&gt;
 public class MyCode {&lt;br /&gt;
     // ...&lt;br /&gt;
     public void startThread() {&lt;br /&gt;
         Runnable r = () -&amp;gt; { new JobRunner().doJob(); };&lt;br /&gt;
         Thread t = new Thread(r);&lt;br /&gt;
         t.start();&lt;br /&gt;
     }&lt;br /&gt;
     // ...&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Or, even more succinctly:&lt;br /&gt;
&lt;br /&gt;
 public class MyCode {&lt;br /&gt;
     // ...&lt;br /&gt;
     public void startThread() {&lt;br /&gt;
         Thread t = new Thread(() -&amp;gt; { new JobRunner().doJob(); });&lt;br /&gt;
         t.start();&lt;br /&gt;
     }&lt;br /&gt;
     // ...&lt;br /&gt;
 }&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=21</id>
		<title>Changes in Java 10 and 11</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=21"/>
				<updated>2018-11-26T08:05:27Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: /* Type Inference for Local Variables (v10) */ Added a method reference example. Fixed some formatting.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK 11 was released almost two months ago (as of this writing), yet it almost seems as if nobody noticed it.&lt;br /&gt;
&lt;br /&gt;
After the roar and thunder of Java 9's JPMS (or Jigsaw, as some may know it) it's gone awfully quiet on the Java front.&lt;br /&gt;
&lt;br /&gt;
But that's far from saying nothing is happening - after the [http://openjdk.java.net/jeps/261 Java Module System] (which was arguably the single biggest feature in [https://docs.oracle.com/javase/9/whatsnew/toc.htm Java 9]), the world moves on.&lt;br /&gt;
&lt;br /&gt;
While version 9 was mostly a clean-up release - deprecating the much-hated Java Plugin and the Applet API, as well as the CMS GC and all garbage collector combinations involving it - several rather big improvements and additions have been made to both the language and the virtual machine in the two subsequent versions, [https://openjdk.java.net/projects/jdk/10/ Java 10] and [https://openjdk.java.net/projects/jdk/11/ Java 11].&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
I assume that you are:&lt;br /&gt;
&lt;br /&gt;
* fluent in the Java language (according to v8 language specification)&lt;br /&gt;
* aware of how Java garbage collectors work and why we need them&lt;br /&gt;
* familiar with what software profiling and auditing are and how they work&lt;br /&gt;
* comfortable with Linux scripting&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Type Inference for Local Variables (v10) ==&lt;br /&gt;
&lt;br /&gt;
Among the biggest additions to Java 10 is its [https://openjdk.java.net/jeps/286 ability to figure out the type of a variable] that has an initialiser, that is to say, perform LHS (left-hand-side) type inference.&lt;br /&gt;
&lt;br /&gt;
RHS type inference should be quite familiar by now, as it was by far the biggest change in Java 8 - [https://openjdk.java.net/projects/lambda/ Lambda Expressions for the Java Language] allow a programmer to save space and time, and improve readability, making heavy use of type inference throughout:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent('''event -&amp;gt; new EventProcessor().process(event)''');&lt;br /&gt;
&lt;br /&gt;
Actually, the above expression can be simplified even further:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent('''EventProcessor::process''');&lt;br /&gt;
&lt;br /&gt;
Notice how parameters in lambda expressions lack type declarations. In fact, they completely lack ''any'' declarations! You can [[Java 8 Lambda Expressions|read more about this in a small write-up of mine]].&lt;br /&gt;
&lt;br /&gt;
In Java 10, this inference is extended even further. In cases where the compiler can infer the type of a variable, its declaration can be simplified to just the newly introduced ''reserved type name'' &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 '''var''' x = new Integer(5);&lt;br /&gt;
 '''var''' y = MyClass.Factory.getInstance();&lt;br /&gt;
 &lt;br /&gt;
 if (x instanceof Integer) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
 if (y instanceof MyClass) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
There are obviously some limitations for this new expression, such as:&lt;br /&gt;
&lt;br /&gt;
* declarations without initialisers (which would require some distant action to infer the type, and that could violate the strongly-typed nature of the language because of multiple possible ''different'' initialisers at different execution points),&lt;br /&gt;
* &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; assignments (note that &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; ''is'' a type, but it is fairly useless in and of itself),&lt;br /&gt;
* [http://www.devcodenote.com/2015/04/variable-capture-in-java.html capture variables] (which need to be final, so they can not be inferred),&lt;br /&gt;
* [http://iteratrlearning.com/java/generics/2016/05/12/intersection-types-java-generics.html intersection types] (which can not be reliably inferred),&lt;br /&gt;
&lt;br /&gt;
or generally whenever an assignment type is not denotable (such as with inline arrays, method references, and even &amp;lt;code&amp;gt;c.getClass()&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
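To make these limitations a bit more tangible, here is a quick sketch of mine (the class and variable names are made up, and the commented-out lines are the ones I'd expect to fail compilation under Java 10):&lt;br /&gt;
&lt;br /&gt;
```java
// Sketch: where 'var' works, and (commented out) where it does not.
public class VarLimits {
    public static void main(String[] args) {
        var x = Integer.valueOf(5);   // inferred as Integer
        var s = "inference";          // inferred as String

        // var a;                     // no initialiser: nothing to infer from
        // var n = null;              // the null type is not denotable
        // var m = VarLimits::main;   // a method reference needs a target type
        // var arr = { 1, 2, 3 };     // an array initialiser needs an explicit type

        if (x instanceof Integer) {
            System.out.println(x + s.length());  // prints 14
        }
    }
}
```
&lt;br /&gt;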
I may write more about those limitations when I've used LHS type inference more; until then, feel free to have a look at [https://openjdk.java.net/jeps/286 the JEP defining Java type inference] and its many references.&lt;br /&gt;
&lt;br /&gt;
== Local Variable Syntax for Lambda Parameters (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== New Nest-Based Field/Method Access Control (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Application Class Data Sharing (v10) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== No-Op Garbage Collector (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== ZGC (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== Flight Recorder (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Low-Impact Heap Profiling (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Launching Single-File Programs from Source (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=OpenShift_Devel&amp;diff=20</id>
		<title>OpenShift Devel</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=OpenShift_Devel&amp;diff=20"/>
				<updated>2018-11-26T07:15:55Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= General Topics =&lt;br /&gt;
&lt;br /&gt;
TBD.&lt;br /&gt;
&lt;br /&gt;
= Local OpenShift Development =&lt;br /&gt;
&lt;br /&gt;
* [[CDK Tips]]&lt;br /&gt;
&lt;br /&gt;
= Remote Cluster Development =&lt;br /&gt;
&lt;br /&gt;
Something coming here.&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=CDK_Tips&amp;diff=19</id>
		<title>CDK Tips</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=CDK_Tips&amp;diff=19"/>
				<updated>2018-11-26T07:12:34Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Added assumptions.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
CDK and Minishift are a nice, portable way of running a single-node OCP/OKD cluster on your development workstation. I collected some useful tips for an easier life with them.&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
I assume that you:&lt;br /&gt;
&lt;br /&gt;
* are proficient in using the &amp;lt;code&amp;gt;bash&amp;lt;/code&amp;gt; shell&lt;br /&gt;
* understand the basics of container mechanics, such as:&lt;br /&gt;
** what an image is&lt;br /&gt;
** why you need a registry&lt;br /&gt;
* know how to work with containers using &amp;lt;code&amp;gt;docker&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;podman&amp;lt;/code&amp;gt;&lt;br /&gt;
* have used CDK or Minishift before&lt;br /&gt;
&lt;br /&gt;
= Various CDK (Minishift) Tips =&lt;br /&gt;
&lt;br /&gt;
First off: I shall drop the Minishift term here (except where absolutely necessary) and regard CDK as an alias for Minishift.&lt;br /&gt;
&lt;br /&gt;
== Configuration / Start Up ==&lt;br /&gt;
&lt;br /&gt;
=== Useful Startup Options ===&lt;br /&gt;
&lt;br /&gt;
Sometimes you need to figure out what the hell went wrong and why CDK is acting up on you. Try these:&lt;br /&gt;
&lt;br /&gt;
 $ '''cdk start --alsologtostderr --show-libmachine-logs -v 3'''&lt;br /&gt;
&lt;br /&gt;
This will bump the log level up to ''debug'' and be very noisy on the console, but at least you'll get some info about what's cooking.&lt;br /&gt;
&lt;br /&gt;
I personally tend to use these a lot when I'm playing around with some low-level settings and I need to see which components are affected by my changes.&lt;br /&gt;
&lt;br /&gt;
=== Offline Use ===&lt;br /&gt;
&lt;br /&gt;
Because I spend way too much time in airplanes and other transport modalities where internet is more or less absent, I've had a jolly fun time trying to make CDK work offline.&lt;br /&gt;
&lt;br /&gt;
One of these days I'll write a howto on getting CDK to work '''''fully offline''''', with a local registry VM, a Gogs instance, and working DNS resolution.&lt;br /&gt;
&lt;br /&gt;
Until then, there are some simple tricks that should work, provided you have all the images you need in local cache and all the hostnames you need in &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Some of the interesting options:&lt;br /&gt;
&lt;br /&gt;
; &amp;lt;code&amp;gt;--skip-registration&amp;lt;/code&amp;gt;&lt;br /&gt;
: '''(CDK only)''' This will not attempt to register your VM in the Red Hat Customer Portal.&lt;br /&gt;
; &amp;lt;code&amp;gt;--skip-registry-check&amp;lt;/code&amp;gt;&lt;br /&gt;
: This will skip the test for online registry availability, but will fail horribly somewhere down the line, unless you're certain you really have all the platform images ''for your version of OCP/OKD''.&lt;br /&gt;
; &amp;lt;code&amp;gt;--skip-startup-checks&amp;lt;/code&amp;gt;&lt;br /&gt;
: This will skip all other start-up checks (such as image versions, oc client availability, etc.), not only the online ones. Make sure your CDK is in good shape by running the startup checks at least once before you start skipping them.&lt;br /&gt;
&lt;br /&gt;
Have a look at &amp;lt;code&amp;gt;cdk config&amp;lt;/code&amp;gt; to see the list of individual checks you can skip (look for the &amp;lt;code&amp;gt;skip-check&amp;lt;/code&amp;gt; pattern):&lt;br /&gt;
&lt;br /&gt;
 $ '''cdk config | grep skip-check'''&lt;br /&gt;
  * skip-check-deprecation&lt;br /&gt;
  * skip-check-kvm-driver&lt;br /&gt;
  * skip-check-xhyve-driver&lt;br /&gt;
  * skip-check-hyperv-driver&lt;br /&gt;
  * skip-check-iso-url&lt;br /&gt;
  * skip-check-vm-driver&lt;br /&gt;
  * skip-check-vbox-installed&lt;br /&gt;
  * skip-check-openshift-version&lt;br /&gt;
  * skip-check-openshift-release&lt;br /&gt;
  * skip-check-clusterup-flags&lt;br /&gt;
  * skip-check-instance-ip&lt;br /&gt;
  * skip-check-network-host&lt;br /&gt;
  * skip-check-network-ping&lt;br /&gt;
  * skip-check-network-http&lt;br /&gt;
  * skip-check-storage-mount&lt;br /&gt;
  * skip-check-storage-usage&lt;br /&gt;
  * skip-check-nameservers&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' Unfortunately, although they are listed as configuration options in &amp;lt;code&amp;gt;cdk config&amp;lt;/code&amp;gt; output, these are ignored if you set them using &amp;lt;code&amp;gt;cdk config set&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Administration ==&lt;br /&gt;
&lt;br /&gt;
=== CDK Administration ===&lt;br /&gt;
&lt;br /&gt;
TBD.&lt;br /&gt;
&lt;br /&gt;
=== Cluster Administration ===&lt;br /&gt;
&lt;br /&gt;
Sometimes you need to do something as &amp;lt;code&amp;gt;system:admin&amp;lt;/code&amp;gt;, not just any cluster admin.&lt;br /&gt;
&lt;br /&gt;
For example: in the ''CDK 3.6 / OCP 3.11'' combo, the &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; addon fails for some reason, which means you end up with no remote cluster admin capability. What now?&lt;br /&gt;
&lt;br /&gt;
* First, get a shell in the &amp;lt;code&amp;gt;origin&amp;lt;/code&amp;gt; container inside the ''boot2docker'' VM:&lt;br /&gt;
 $ '''cdk ssh'''&lt;br /&gt;
 Last login: Fri Nov 16 06:25:09 2018 from gateway&lt;br /&gt;
 [docker@minishift ~]$ '''docker exec -it origin sh'''&lt;br /&gt;
 sh-4.2#&lt;br /&gt;
&lt;br /&gt;
* Notice that the &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; command is not configured to load auth data by default:&lt;br /&gt;
 sh-4.2# '''oc whoami'''&lt;br /&gt;
 error: Missing or incomplete configuration info.  Please login or point to an existing, complete config file:&lt;br /&gt;
 &lt;br /&gt;
   1. Via the command-line flag --config&lt;br /&gt;
   2. Via the KUBECONFIG environment variable&lt;br /&gt;
   3. In your home directory as ~/.kube/config&lt;br /&gt;
 &lt;br /&gt;
 To view or setup config directly use the 'config' command.&lt;br /&gt;
&lt;br /&gt;
* Next, run the &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; command with the &amp;lt;code&amp;gt;--config&amp;lt;/code&amp;gt; option, telling it where to find the &amp;lt;code&amp;gt;kubeconfig&amp;lt;/code&amp;gt; file with &amp;lt;code&amp;gt;system:admin&amp;lt;/code&amp;gt; credentials:&lt;br /&gt;
 sh-4.2# '''oc --config=./openshift.local.config/master/admin.kubeconfig whoami'''&lt;br /&gt;
 system:admin&lt;br /&gt;
&lt;br /&gt;
* Then, depending on how long you intend to spend in the VM, alias the &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sh-4.2# '''alias oc=&amp;quot;oc --config=./openshift.local.config/master/admin.kubeconfig&amp;quot;'''&lt;br /&gt;
: Alternatively, you could set the &amp;lt;code&amp;gt;KUBECONFIG&amp;lt;/code&amp;gt; env variable, of course.&lt;br /&gt;
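&lt;br /&gt;
: For example (same path as above; unlike the alias, this sticks for the rest of the shell session and is inherited by child processes):&lt;br /&gt;
&lt;br /&gt;
```shell
# Point oc at the admin kubeconfig via the environment instead of an alias.
export KUBECONFIG=./openshift.local.config/master/admin.kubeconfig
echo "$KUBECONFIG"
```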
&lt;br /&gt;
* Then simply do your stuff!&lt;br /&gt;
 sh-4.2# '''oc describe clusterrolebinding cluster-admin'''&lt;br /&gt;
 Name:         cluster-admin&lt;br /&gt;
 Labels:       kubernetes.io/bootstrapping=rbac-defaults&lt;br /&gt;
 Annotations:  rbac.authorization.kubernetes.io/autoupdate=true&lt;br /&gt;
 Role:&lt;br /&gt;
   Kind:  ClusterRole&lt;br /&gt;
   Name:  cluster-admin&lt;br /&gt;
 Subjects:&lt;br /&gt;
   Kind   Name            Namespace&lt;br /&gt;
   ----   ----            ---------&lt;br /&gt;
   Group  system:masters  &lt;br /&gt;
 sh-4.2# '''oc get users'''&lt;br /&gt;
 NAME        UID                                    FULL NAME   IDENTITIES&lt;br /&gt;
 admin       6f98e840-e990-11e8-8676-16ea592bfedd               anypassword:admin&lt;br /&gt;
 developer   d482a716-e98f-11e8-8676-16ea592bfedd               anypassword:developer&lt;br /&gt;
 sh-4.2# '''oc adm policy add-cluster-role-to-user cluster-admin admin'''&lt;br /&gt;
 cluster role &amp;quot;cluster-admin&amp;quot; added: &amp;quot;admin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Local Shell Environment ==&lt;br /&gt;
&lt;br /&gt;
=== Bash Completion for CDK ===&lt;br /&gt;
&lt;br /&gt;
==== Enabling ====&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;cdk completion bash&amp;lt;/code&amp;gt; will output a '''bash completion''' recipe.&lt;br /&gt;
&lt;br /&gt;
You can then place it into &amp;lt;code&amp;gt;/etc/bash_completion.d&amp;lt;/code&amp;gt; (or wherever your shell expects it).&lt;br /&gt;
: (hint: on macOS, if using MacPorts, this is &amp;lt;code&amp;gt;/opt/local/etc/bash_completion.d&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
==== CDK vs Minishift ====&lt;br /&gt;
&lt;br /&gt;
One caveat is that if you're using &amp;lt;code&amp;gt;cdk&amp;lt;/code&amp;gt; rather than &amp;lt;code&amp;gt;minishift&amp;lt;/code&amp;gt;, you'll have to replace (almost) everything in that file that says &amp;lt;code&amp;gt;minishift&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;cdk&amp;lt;/code&amp;gt; (&amp;quot;almost&amp;quot;, because &amp;lt;code&amp;gt;--minishift-home&amp;lt;/code&amp;gt; is a valid option for both).&lt;br /&gt;
&lt;br /&gt;
So, a shortcut to getting it to work with &amp;lt;code&amp;gt;cdk&amp;lt;/code&amp;gt; in a single pipeline would be:&lt;br /&gt;
&lt;br /&gt;
 $ cdk completion bash | sed 's/minishift/cdk/g; s/-cdk-/-minishift-/g' &amp;gt; cdk&lt;br /&gt;
 $ sudo mv cdk /etc/bash_completion.d/&lt;br /&gt;
 $ . /etc/bash_completion.d/cdk&lt;br /&gt;
 $ cdk '''&amp;lt;TAB&amp;gt;&amp;lt;TAB&amp;gt;'''&lt;br /&gt;
 addons      config      delete      docker-env  image       logs        openshift   setup-cdk   start       stop        &lt;br /&gt;
 completion  console     dns         hostfolder  ip          oc-env      profile     ssh         status      version     &lt;br /&gt;
&lt;br /&gt;
Voila!&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=CDK_Tips&amp;diff=18</id>
		<title>CDK Tips</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=CDK_Tips&amp;diff=18"/>
				<updated>2018-11-26T07:02:49Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Fixed a typo.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Various CDK (Minishift) Tips =&lt;br /&gt;
&lt;br /&gt;
First off: I shall drop the Minishift term here (except where absolutely necessary) and regard CDK as an alias for Minishift.&lt;br /&gt;
&lt;br /&gt;
== Configuration / Start Up ==&lt;br /&gt;
&lt;br /&gt;
=== Useful Startup Options ===&lt;br /&gt;
&lt;br /&gt;
Sometimes you need to figure out what the hell went wrong and why CDK is acting up on you. Try these:&lt;br /&gt;
&lt;br /&gt;
 $ '''cdk start --alsologtostderr --show-libmachine-logs -v 3'''&lt;br /&gt;
&lt;br /&gt;
This will bump the log level up to ''debug'' and be very noisy on the console, but at least you'll get some info about what's cooking.&lt;br /&gt;
&lt;br /&gt;
I personally tend to use these a lot when I'm playing around with some low-level settings and I need to see which components are affected by my changes.&lt;br /&gt;
&lt;br /&gt;
=== Offline Use ===&lt;br /&gt;
&lt;br /&gt;
Because I spend way too much time in airplanes and other transport modalities where internet is more or less absent, I've had a jolly fun time trying to make CDK work offline.&lt;br /&gt;
&lt;br /&gt;
One of these days I'll write a howto on getting CDK to work '''''fully offline''''', with a local registry VM, a Gogs instance, and working DNS resolution.&lt;br /&gt;
&lt;br /&gt;
Until then, there are some simple tricks that should work, provided you have all the images you need in local cache and all the hostnames you need in &amp;lt;code&amp;gt;/etc/hosts&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Some of the interesting options:&lt;br /&gt;
&lt;br /&gt;
; &amp;lt;code&amp;gt;--skip-registration&amp;lt;/code&amp;gt;&lt;br /&gt;
: '''(CDK only)''' This will not attempt to register your VM in the Red Hat Customer Portal.&lt;br /&gt;
; &amp;lt;code&amp;gt;--skip-registry-check&amp;lt;/code&amp;gt;&lt;br /&gt;
: This will skip the test for online registry availability, but will fail horribly somewhere down the line, unless you're certain you really have all the platform images ''for your version of OCP/OKD''.&lt;br /&gt;
; &amp;lt;code&amp;gt;--skip-startup-checks&amp;lt;/code&amp;gt;&lt;br /&gt;
: This will skip all other start-up checks (such as image versions, oc client availability, etc.), not only the online ones. Make sure your CDK is in good shape by running the startup checks at least once before you start skipping them.&lt;br /&gt;
&lt;br /&gt;
Have a look at &amp;lt;code&amp;gt;cdk config&amp;lt;/code&amp;gt; to see the list of individual checks you can skip (look for the &amp;lt;code&amp;gt;skip-check&amp;lt;/code&amp;gt; pattern):&lt;br /&gt;
&lt;br /&gt;
 $ '''cdk config | grep skip-check'''&lt;br /&gt;
  * skip-check-deprecation&lt;br /&gt;
  * skip-check-kvm-driver&lt;br /&gt;
  * skip-check-xhyve-driver&lt;br /&gt;
  * skip-check-hyperv-driver&lt;br /&gt;
  * skip-check-iso-url&lt;br /&gt;
  * skip-check-vm-driver&lt;br /&gt;
  * skip-check-vbox-installed&lt;br /&gt;
  * skip-check-openshift-version&lt;br /&gt;
  * skip-check-openshift-release&lt;br /&gt;
  * skip-check-clusterup-flags&lt;br /&gt;
  * skip-check-instance-ip&lt;br /&gt;
  * skip-check-network-host&lt;br /&gt;
  * skip-check-network-ping&lt;br /&gt;
  * skip-check-network-http&lt;br /&gt;
  * skip-check-storage-mount&lt;br /&gt;
  * skip-check-storage-usage&lt;br /&gt;
  * skip-check-nameservers&lt;br /&gt;
&lt;br /&gt;
'''NOTE:''' Unfortunately, although they are listed as configuration options in &amp;lt;code&amp;gt;cdk config&amp;lt;/code&amp;gt; output, these are ignored if you set them using &amp;lt;code&amp;gt;cdk config set&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Administration ==&lt;br /&gt;
&lt;br /&gt;
=== CDK Administration ===&lt;br /&gt;
&lt;br /&gt;
TBD.&lt;br /&gt;
&lt;br /&gt;
=== Cluster Administration ===&lt;br /&gt;
&lt;br /&gt;
Sometimes you need to do something as &amp;lt;code&amp;gt;system:admin&amp;lt;/code&amp;gt;, not just any cluster admin.&lt;br /&gt;
&lt;br /&gt;
For example: in the ''CDK 3.6 / OCP 3.11'' combo, the &amp;lt;code&amp;gt;admin&amp;lt;/code&amp;gt; addon fails for some reason, which means you end up with no remote cluster admin capability. What now?&lt;br /&gt;
&lt;br /&gt;
* First, get a shell in the &amp;lt;code&amp;gt;origin&amp;lt;/code&amp;gt; container inside the ''boot2docker'' VM:&lt;br /&gt;
 $ '''cdk ssh'''&lt;br /&gt;
 Last login: Fri Nov 16 06:25:09 2018 from gateway&lt;br /&gt;
 [docker@minishift ~]$ '''docker exec -it origin sh'''&lt;br /&gt;
 sh-4.2#&lt;br /&gt;
&lt;br /&gt;
* Notice that the &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; command is not configured to load auth data by default:&lt;br /&gt;
 sh-4.2# '''oc whoami'''&lt;br /&gt;
 error: Missing or incomplete configuration info.  Please login or point to an existing, complete config file:&lt;br /&gt;
 &lt;br /&gt;
   1. Via the command-line flag --config&lt;br /&gt;
   2. Via the KUBECONFIG environment variable&lt;br /&gt;
   3. In your home directory as ~/.kube/config&lt;br /&gt;
 &lt;br /&gt;
 To view or setup config directly use the 'config' command.&lt;br /&gt;
&lt;br /&gt;
* Next, run the &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; command with the &amp;lt;code&amp;gt;--config&amp;lt;/code&amp;gt; option, telling it where to find the &amp;lt;code&amp;gt;kubeconfig&amp;lt;/code&amp;gt; file with &amp;lt;code&amp;gt;system:admin&amp;lt;/code&amp;gt; credentials:&lt;br /&gt;
 sh-4.2# '''oc --config=./openshift.local.config/master/admin.kubeconfig whoami'''&lt;br /&gt;
 system:admin&lt;br /&gt;
&lt;br /&gt;
* Then, depending on how long you intend to spend in the VM, alias the &amp;lt;code&amp;gt;oc&amp;lt;/code&amp;gt; command:&lt;br /&gt;
 sh-4.2# '''alias oc=&amp;quot;oc --config=./openshift.local.config/master/admin.kubeconfig&amp;quot;'''&lt;br /&gt;
: Alternatively, you could set the &amp;lt;code&amp;gt;KUBECONFIG&amp;lt;/code&amp;gt; env variable, of course.&lt;br /&gt;
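&lt;br /&gt;
: For example (same path as above; unlike the alias, this sticks for the rest of the shell session and is inherited by child processes):&lt;br /&gt;
&lt;br /&gt;
```shell
# Point oc at the admin kubeconfig via the environment instead of an alias.
export KUBECONFIG=./openshift.local.config/master/admin.kubeconfig
echo "$KUBECONFIG"
```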
&lt;br /&gt;
* Then simply do your stuff!&lt;br /&gt;
 sh-4.2# '''oc describe clusterrolebinding cluster-admin'''&lt;br /&gt;
 Name:         cluster-admin&lt;br /&gt;
 Labels:       kubernetes.io/bootstrapping=rbac-defaults&lt;br /&gt;
 Annotations:  rbac.authorization.kubernetes.io/autoupdate=true&lt;br /&gt;
 Role:&lt;br /&gt;
   Kind:  ClusterRole&lt;br /&gt;
   Name:  cluster-admin&lt;br /&gt;
 Subjects:&lt;br /&gt;
   Kind   Name            Namespace&lt;br /&gt;
   ----   ----            ---------&lt;br /&gt;
   Group  system:masters  &lt;br /&gt;
 sh-4.2# '''oc get users'''&lt;br /&gt;
 NAME        UID                                    FULL NAME   IDENTITIES&lt;br /&gt;
 admin       6f98e840-e990-11e8-8676-16ea592bfedd               anypassword:admin&lt;br /&gt;
 developer   d482a716-e98f-11e8-8676-16ea592bfedd               anypassword:developer&lt;br /&gt;
 sh-4.2# '''oc adm policy add-cluster-role-to-user cluster-admin admin'''&lt;br /&gt;
 cluster role &amp;quot;cluster-admin&amp;quot; added: &amp;quot;admin&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Local Shell Environment ==&lt;br /&gt;
&lt;br /&gt;
=== Bash Completion for CDK ===&lt;br /&gt;
&lt;br /&gt;
==== Enabling ====&lt;br /&gt;
&lt;br /&gt;
Running &amp;lt;code&amp;gt;cdk completion bash&amp;lt;/code&amp;gt; will output a '''bash completion''' recipe.&lt;br /&gt;
&lt;br /&gt;
You can then place it into &amp;lt;code&amp;gt;/etc/bash_completion.d&amp;lt;/code&amp;gt; (or wherever your shell expects it).&lt;br /&gt;
: (hint: on macOS, if using MacPorts, this is &amp;lt;code&amp;gt;/opt/local/etc/bash_completion.d&amp;lt;/code&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
==== CDK vs Minishift ====&lt;br /&gt;
&lt;br /&gt;
One caveat is that if you're using &amp;lt;code&amp;gt;cdk&amp;lt;/code&amp;gt; rather than &amp;lt;code&amp;gt;minishift&amp;lt;/code&amp;gt;, you'll have to replace (almost) everything in that file that says &amp;lt;code&amp;gt;minishift&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;cdk&amp;lt;/code&amp;gt; (&amp;quot;almost&amp;quot;, because &amp;lt;code&amp;gt;--minishift-home&amp;lt;/code&amp;gt; is a valid option for both).&lt;br /&gt;
&lt;br /&gt;
So, a shortcut to getting it to work with &amp;lt;code&amp;gt;cdk&amp;lt;/code&amp;gt; in a single pipeline would be:&lt;br /&gt;
&lt;br /&gt;
 $ cdk completion bash | sed 's/minishift/cdk/g; s/-cdk-/-minishift-/g' &amp;gt; cdk&lt;br /&gt;
 $ sudo mv cdk /etc/bash_completion.d/&lt;br /&gt;
 $ . /etc/bash_completion.d/cdk&lt;br /&gt;
 $ cdk '''&amp;lt;TAB&amp;gt;&amp;lt;TAB&amp;gt;'''&lt;br /&gt;
 addons      config      delete      docker-env  image       logs        openshift   setup-cdk   start       stop        &lt;br /&gt;
 completion  console     dns         hostfolder  ip          oc-env      profile     ssh         status      version     &lt;br /&gt;
&lt;br /&gt;
Voila!&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=17</id>
		<title>Changes in Java 10 and 11</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=17"/>
				<updated>2018-11-25T19:17:27Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Some more assumptions.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK 11 was released almost two months ago (as of this writing), yet it almost seems as if nobody noticed it.&lt;br /&gt;
&lt;br /&gt;
After the roar and thunder of Java 9's JPMS (or Jigsaw, as some may know it) it's gone awfully quiet on the Java front.&lt;br /&gt;
&lt;br /&gt;
But that's far from saying nothing is happening - after the [http://openjdk.java.net/jeps/261 Java Module System] (which was arguably the single biggest feature in [https://docs.oracle.com/javase/9/whatsnew/toc.htm Java 9]), the world moves on.&lt;br /&gt;
&lt;br /&gt;
While version 9 was mostly a clean-up release - deprecating the much-hated Java Plugin and the Applet API, as well as the CMS GC and all garbage collector combinations involving it - several rather big improvements and additions have been made to both the language and the virtual machine in the two subsequent versions, [https://openjdk.java.net/projects/jdk/10/ Java 10] and [https://openjdk.java.net/projects/jdk/11/ Java 11].&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
I assume that you are:&lt;br /&gt;
&lt;br /&gt;
* fluent in the Java language (according to v8 language specification)&lt;br /&gt;
* aware of how Java garbage collectors work and why we need them&lt;br /&gt;
* familiar with what software profiling and auditing are and how they work&lt;br /&gt;
* comfortable with Linux scripting&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Type Inference for Local Variables (v10) ==&lt;br /&gt;
&lt;br /&gt;
Among the biggest additions to Java 10 is its [https://openjdk.java.net/jeps/286 ability to figure out the type of a variable] that has an initialiser, that is to say, perform LHS (left-hand-side) type inference.&lt;br /&gt;
&lt;br /&gt;
RHS type inference should be quite familiar by now, as it was by far the biggest change in Java 8 - [https://openjdk.java.net/projects/lambda/ Lambda Expressions for the Java Language] allow a programmer to save space and time, and improve readability, making heavy use of type inference throughout:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent(event -&amp;gt; new EventProcessor().process(event));&lt;br /&gt;
&lt;br /&gt;
Notice how parameters in lambda expressions lack type declarations. In fact, they completely lack ''any'' declarations! You can [[Java 8 Lambda Expressions|read more about this in a small write-up of mine]].&lt;br /&gt;
&lt;br /&gt;
In Java 10, this inference is extended even further. In cases where the compiler can infer the type of a variable, its declaration can be simplified to just the newly introduced ''reserved type name'' &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 var x = new Integer(5);&lt;br /&gt;
 var y = MyClass.Factory.getInstance();&lt;br /&gt;
 &lt;br /&gt;
 if (x instanceof Integer) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
 if (y instanceof MyClass) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
There are obviously some limitations for this new expression, such as:&lt;br /&gt;
&lt;br /&gt;
* declarations without initialisers (which would require some distant action to infer the type, and that could violate the strongly-typed nature of the language because of multiple possible ''different'' initialisers at different execution points),&lt;br /&gt;
* &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; assignments (note that &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; ''is'' a type, but it is fairly useless in and of itself),&lt;br /&gt;
* [http://www.devcodenote.com/2015/04/variable-capture-in-java.html capture variables] (which need to be final, so they can not be inferred),&lt;br /&gt;
* [http://iteratrlearning.com/java/generics/2016/05/12/intersection-types-java-generics.html intersection types] (which can not be reliably inferred),&lt;br /&gt;
&lt;br /&gt;
or generally whenever an assignment type is not denotable (such as with inline arrays, method references, and even &amp;lt;code&amp;gt;c.getClass()&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
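To make these limitations a bit more tangible, here is a quick sketch of mine (the class and variable names are made up, and the commented-out lines are the ones I'd expect to fail compilation under Java 10):&lt;br /&gt;
&lt;br /&gt;
```java
// Sketch: where 'var' works, and (commented out) where it does not.
public class VarLimits {
    public static void main(String[] args) {
        var x = Integer.valueOf(5);   // inferred as Integer
        var s = "inference";          // inferred as String

        // var a;                     // no initialiser: nothing to infer from
        // var n = null;              // the null type is not denotable
        // var m = VarLimits::main;   // a method reference needs a target type
        // var arr = { 1, 2, 3 };     // an array initialiser needs an explicit type

        if (x instanceof Integer) {
            System.out.println(x + s.length());  // prints 14
        }
    }
}
```
&lt;br /&gt;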
I may write more about those limitations when I've used LHS type inference more; until then, feel free to have a look at [https://openjdk.java.net/jeps/286 the JEP defining Java type inference] and its many references.&lt;br /&gt;
&lt;br /&gt;
== Local Variable Syntax for Lambda Parameters (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== New Nest-Based Field/Method Access Control (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Application Class Data Sharing (v10) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== No-Op Garbage Collector (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== ZGC (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== Flight Recorder (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Low-Impact Heap Profiling (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Launching Single-File Programs from Source (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=16</id>
		<title>Changes in Java 10 and 11</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=16"/>
				<updated>2018-11-25T19:16:04Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Added assumptions.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK 11 was released almost two months ago (as of this writing), yet it almost seems as if nobody noticed it.&lt;br /&gt;
&lt;br /&gt;
After the roar and thunder of Java 9's JPMS (or Jigsaw, as some may know it) it's gone awfully quiet on the Java front.&lt;br /&gt;
&lt;br /&gt;
But that's far from saying nothing is happening - after [http://openjdk.java.net/jeps/261 Java Module System] (which was about the single biggest feature in [https://docs.oracle.com/javase/9/whatsnew/toc.htm Java 9]), the world moves on.&lt;br /&gt;
&lt;br /&gt;
While version 9 was mostly a clean-up release - the much-hated Java Plugin and the Applet API were deprecated, as were CMS GC and all garbage collector combinations involving it - the two subsequent versions, [https://openjdk.java.net/projects/jdk/10/ Java 10] and [https://openjdk.java.net/projects/jdk/11/ Java 11], brought several rather big improvements and additions to both the language and the virtual machine.&lt;br /&gt;
&lt;br /&gt;
== What I assume ==&lt;br /&gt;
&lt;br /&gt;
I assume that you:&lt;br /&gt;
&lt;br /&gt;
* are fluent in the Java language (as per the v8 language specification),&lt;br /&gt;
* know how Java garbage collectors work and why we need them,&lt;br /&gt;
* have a clue about Linux scripting.&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Type Inference for Local Variables (v10) ==&lt;br /&gt;
&lt;br /&gt;
Among the biggest additions to Java 10 is its [https://openjdk.java.net/jeps/286 ability to figure out the type of a variable] that has an initialiser - that is to say, to perform LHS (left-hand-side) type inference.&lt;br /&gt;
&lt;br /&gt;
RHS type inference should be quite familiar by now, as it was by far the biggest change in Java 8 - [https://openjdk.java.net/projects/lambda/ Lambda Expressions for the Java Language] allow a programmer to save space and time and improve readability, making heavy use of type inference all the time:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent(event -&amp;gt; new EventProcessor().process(event));&lt;br /&gt;
&lt;br /&gt;
Notice how parameters in lambda expressions lack type declarations. In fact, they completely lack ''any'' declarations! You can [[Java 8 Lambda Expressions|read more about this in a small write-up of mine]].&lt;br /&gt;
&lt;br /&gt;
In Java 10, this inference is extended even further. For cases when the compiler can infer the type of a variable, its declaration can be simplified to just the newly introduced ''reserved type name'' &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 var x = new Integer(5);&lt;br /&gt;
 var y = MyClass.Factory.getInstance();&lt;br /&gt;
 &lt;br /&gt;
 if (x instanceof Integer) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
 if (y instanceof MyClass) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
There are, of course, some limitations to this new syntax - it does not work for:&lt;br /&gt;
&lt;br /&gt;
* declarations without initialisers (inferring the type would require looking at later assignments, and several possibly ''different'' assignments at different execution points would undermine the strongly-typed nature of the language),&lt;br /&gt;
* &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; assignments (note that &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; ''is'' a type, but a fairly useless one in and of itself),&lt;br /&gt;
* [http://www.devcodenote.com/2015/04/variable-capture-in-java.html capture variables] (which need to be final, so they cannot be inferred),&lt;br /&gt;
* [http://iteratrlearning.com/java/generics/2016/05/12/intersection-types-java-generics.html intersection types] (which cannot be reliably inferred),&lt;br /&gt;
&lt;br /&gt;
or generally whenever an assignment type is not denotable (such as with inline arrays, method references, and even &amp;lt;code&amp;gt;c.getClass()&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
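A quick, hypothetical sketch of what does and does not compile (the snippet and its variable names are mine, not taken from the JEP):&lt;br /&gt;
&lt;br /&gt;
 var list = new ArrayList&amp;lt;String&amp;gt;(); // fine: list is ArrayList&amp;lt;String&amp;gt;&lt;br /&gt;
 var s = "hello";            // fine: s is String&lt;br /&gt;
 // var a;                   // error: no initialiser to infer from&lt;br /&gt;
 // var b = null;            // error: null alone yields no usable type&lt;br /&gt;
 // var f = x -&amp;gt; x + 1;   // error: a lambda has no standalone type&lt;br /&gt;
 // var m = String::valueOf; // error: a method reference needs a target type&lt;br /&gt;
&lt;br /&gt;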
I may write more about those limitations once I've used LHS type inference more; until then, feel free to have a look at [https://openjdk.java.net/jeps/286 the JEP defining Java type inference] and its many references.&lt;br /&gt;
&lt;br /&gt;
== Local Variable Syntax for Lambda Parameters (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== New Nest-Based Field/Method Access Control (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Application Class Data Sharing (v10) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== No-Op Garbage Collector (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== ZGC (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== Flight Recorder (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Low-Impact Heap Profiling (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Launching Single-File Programs from Source (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	<entry>
		<id>https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=15</id>
		<title>Changes in Java 10 and 11</title>
		<link rel="alternate" type="text/html" href="https://p0f.net/index.php?title=Changes_in_Java_10_and_11&amp;diff=15"/>
				<updated>2018-11-21T19:10:51Z</updated>
		
		<summary type="html">&lt;p&gt;Gregab: Added bits.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
JDK 11 was released almost two months ago (as of this writing), yet it almost seems as if nobody noticed it.&lt;br /&gt;
&lt;br /&gt;
After the roar and thunder of Java 9's JPMS (or Jigsaw, as some may know it) it's gone awfully quiet on the Java front.&lt;br /&gt;
&lt;br /&gt;
But that's far from saying nothing is happening - after [http://openjdk.java.net/jeps/261 Java Module System] (which was about the single biggest feature in [https://docs.oracle.com/javase/9/whatsnew/toc.htm Java 9]), the world moves on.&lt;br /&gt;
&lt;br /&gt;
While version 9 was mostly a clean-up release - the much-hated Java Plugin and the Applet API were deprecated, as were CMS GC and all garbage collector combinations involving it - the two subsequent versions, [https://openjdk.java.net/projects/jdk/10/ Java 10] and [https://openjdk.java.net/projects/jdk/11/ Java 11], brought several rather big improvements and additions to both the language and the virtual machine.&lt;br /&gt;
&lt;br /&gt;
= Language =&lt;br /&gt;
&lt;br /&gt;
== Type Inference for Local Variables (v10) ==&lt;br /&gt;
&lt;br /&gt;
Among the biggest additions to Java 10 is its [https://openjdk.java.net/jeps/286 ability to figure out the type of a variable] that has an initialiser - that is to say, to perform LHS (left-hand-side) type inference.&lt;br /&gt;
&lt;br /&gt;
RHS type inference should be quite familiar by now, as it was by far the biggest change in Java 8 - [https://openjdk.java.net/projects/lambda/ Lambda Expressions for the Java Language] allow a programmer to save space and time and improve readability, making heavy use of type inference all the time:&lt;br /&gt;
&lt;br /&gt;
 MockEventHandler eh = new MockEventHandler();&lt;br /&gt;
 eh.setOnSomeEvent(event -&amp;gt; new EventProcessor().process(event));&lt;br /&gt;
&lt;br /&gt;
Notice how parameters in lambda expressions lack type declarations. In fact, they completely lack ''any'' declarations! You can [[Java 8 Lambda Expressions|read more about this in a small write-up of mine]].&lt;br /&gt;
&lt;br /&gt;
In Java 10, this inference is extended even further. For cases when the compiler can infer the type of a variable, its declaration can be simplified to just the newly introduced ''reserved type name'' &amp;lt;code&amp;gt;var&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 var x = new Integer(5);&lt;br /&gt;
 var y = MyClass.Factory.getInstance();&lt;br /&gt;
 &lt;br /&gt;
 if (x instanceof Integer) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
 if (y instanceof MyClass) {&lt;br /&gt;
     // matches&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
There are, of course, some limitations to this new syntax - it does not work for:&lt;br /&gt;
&lt;br /&gt;
* declarations without initialisers (inferring the type would require looking at later assignments, and several possibly ''different'' assignments at different execution points would undermine the strongly-typed nature of the language),&lt;br /&gt;
* &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; assignments (note that &amp;lt;code&amp;gt;null&amp;lt;/code&amp;gt; ''is'' a type, but a fairly useless one in and of itself),&lt;br /&gt;
* [http://www.devcodenote.com/2015/04/variable-capture-in-java.html capture variables] (which need to be final, so they cannot be inferred),&lt;br /&gt;
* [http://iteratrlearning.com/java/generics/2016/05/12/intersection-types-java-generics.html intersection types] (which cannot be reliably inferred),&lt;br /&gt;
&lt;br /&gt;
or generally whenever an assignment type is not denotable (such as with inline arrays, method references, and even &amp;lt;code&amp;gt;c.getClass()&amp;lt;/code&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
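A quick, hypothetical sketch of what does and does not compile (the snippet and its variable names are mine, not taken from the JEP):&lt;br /&gt;
&lt;br /&gt;
 var list = new ArrayList&amp;lt;String&amp;gt;(); // fine: list is ArrayList&amp;lt;String&amp;gt;&lt;br /&gt;
 var s = "hello";            // fine: s is String&lt;br /&gt;
 // var a;                   // error: no initialiser to infer from&lt;br /&gt;
 // var b = null;            // error: null alone yields no usable type&lt;br /&gt;
 // var f = x -&amp;gt; x + 1;   // error: a lambda has no standalone type&lt;br /&gt;
 // var m = String::valueOf; // error: a method reference needs a target type&lt;br /&gt;
&lt;br /&gt;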
I may write more about those limitations once I've used LHS type inference more; until then, feel free to have a look at [https://openjdk.java.net/jeps/286 the JEP defining Java type inference] and its many references.&lt;br /&gt;
&lt;br /&gt;
== Local Variable Syntax for Lambda Parameters (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== New Nest-Based Field/Method Access Control (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Virtual Machine =&lt;br /&gt;
&lt;br /&gt;
== Application Class Data Sharing (v10) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== No-Op Garbage Collector (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== ZGC (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
= Tools =&lt;br /&gt;
&lt;br /&gt;
== Flight Recorder (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Low-Impact Heap Profiling (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;br /&gt;
&lt;br /&gt;
== Launching Single-File Programs from Source (v11) ==&lt;br /&gt;
&lt;br /&gt;
BLABLA&lt;/div&gt;</summary>
		<author><name>Gregab</name></author>	</entry>

	</feed>