# Installation in a Restricted Network

Installation in a restricted (network) environment is different. In such a setting, the base cluster (bootstrap, masters, workers) won't have open access to the internet. The only access this core infrastructure is allowed is to a registry on a node/VM that mirrors the contents of the installation repos hosted on quay.io.

This documentation will guide you in using this repo to set up that registry for installation in such a restricted network.

## Prerequisites
0. Familiarity with this repo and a thorough reading of [README](README.md)
1. Prepare a RHEL 8/Fedora VM, or reuse the `helper` node, as the registry host
   * Run `yum install -y podman httpd httpd-tools` while the VM is connected to the internet
2. The `helper` is the `bastion` host, and as such the installation must be run on the `helper`
| 12 | + |
## (Optional) Network isolation for OpenShift VMs + registry in vCenter
> This section is meant for a lab environment, to practice a disconnected install. The subnets and IP addresses used below are shown only as an illustration.

### [Step 1] Create a Standard Network Port Group
1. Right-click on the vSphere host 🠪 Configure 🠪 Networking 🠪 Virtual Switches
2. Click the `ADD NETWORKING` button on the page (top right-hand corner)
3. Select `Virtual Machine Port Group for a Standard Switch` and click `NEXT`
4. Select `New standard switch` (with defaults) and click `NEXT`
5. Click `NEXT` for Step 3
6. Click `OK` on the warning that there are no active physical network adapters
7. Give the port group a name, choose a number between 0-4095 for the VLAN ID, and click `NEXT`
8. Click `FINISH` on the final screen

When done, your settings should resemble the image below, with the new `default` virtual switch.

![Virtual switch](.images/virtual-switch.png)

### [Step 2] Convert helper into a bastion host
1. Right-click on the `helper` VM and click `Edit Settings`
2. Click `ADD NEW DEVICE` (top right-hand corner) in the `Virtual Hardware` tab
3. Choose `Network Adapter` and, once it is added, click `Browse` under the network drop-down, choose the newly added port group, and click `OK`
4. SSH into the helper and use `ifconfig` to determine the name of the new NIC. In my homelab, it's `ens224`.
   * Assuming you assigned a static IP address to the first NIC `ens192`, copy `ifcfg-ens192` in `/etc/sysconfig/network-scripts` and save it as `ifcfg-ens224` in the same folder.
   * Edit the file `ifcfg-ens224` and ensure that the IP assigned is on a different subnet
   > In my homelab, `ens192` was in the `192.168.86.0/24` subnet with GATEWAY pointing to 192.168.86.1, and `ens224` was in the `192.168.87.0/24` subnet with GATEWAY pointing to 192.168.87.1
5. Restart the network with `systemctl restart NetworkManager`; a quick `ifconfig` or `nmcli device show ens224` should show the IP address picked up by the new NIC.

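Along those lines, a minimal static `ifcfg-ens224` might look like the sketch below. The IP address is an illustrative value on the lab's `192.168.87.0/24` subnet; adjust it to your environment:

```sh
TYPE="Ethernet"
BOOTPROTO="none"          # static addressing on the isolated subnet
NAME="ens224"
DEVICE="ens224"
ONBOOT="yes"
IPADDR="192.168.87.77"    # illustrative address on the new subnet
PREFIX="24"
GATEWAY="192.168.87.1"
```
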
### [Step 3] Create a new VM for registry or reuse helper

#### If creating a new VM for registry (not re-using helper):
1. Ensure that the VM is set up, *connected to the internet*, and that the `yum install` command from the prerequisites above has been run
2. Assign it a hostname similar to `registry.ocp4.example.com`
3. Create an `ifcfg-ens192` file under `/etc/sysconfig/network-scripts`; for reference, my file looks like this:
   ```sh
   TYPE="Ethernet"
   PROXY_METHOD="none"
   BROWSER_ONLY="no"
   BOOTPROTO="dhcp"
   DEFROUTE="yes"
   IPV4_FAILURE_FATAL="no"
   IPV6INIT="yes"
   IPV6_AUTOCONF="yes"
   IPV6_DEFROUTE="yes"
   IPV6_FAILURE_FATAL="no"
   IPV6_ADDR_GEN_MODE="stable-privacy"
   NAME="ens192"
   DEVICE="ens192"
   ONBOOT="yes"
   IPV6_PRIVACY="no"
   ```

### [Step 4] Re-run helper playbook

In the helper's `vars.yml` file, ensure that all IP addresses (helper + bootstrap + masters + workers) now belong to the new subnet `192.168.87.0/24`; that includes changing `helper.ipaddr` and `helper.networkifacename` to the new network adapter's settings.

#### If creating a new VM for registry (not re-using helper)
Make accommodations for the registry node `registry.ocp4.example.com` by changing the helper's DNS and DHCP config files as shown:
1. Add a section for the registry in the helper's `vars.yml` file, as shown below. The `macaddr` should reflect the MAC address assigned to the `ens192` adapter:
   ```
   registry:
     name: "registry"
     ipaddr: "192.168.87.188"
     macaddr: "00:50:56:a8:4b:4f"
   ```
2. Add the following line to `templates/dhcpd.conf.j2` under the static entries (for example, below the line for bootstrap)
   ```
   host {{ registry.name }} { hardware ethernet {{ registry.macaddr }}; fixed-address {{ registry.ipaddr }}; }
   ```
3. Add the following lines to `templates/zonefile.j2` (for example, below the line for bootstrap)
   ```
   ; Create entry for the registry host
   {{ registry.name }}	IN	A	{{ registry.ipaddr }}
   ;
   ```
4. Add the following lines to `templates/reverse.j2` (for example, below the line for bootstrap)
   ```
   {{ registry.ipaddr.split('.')[3] }}	IN	PTR	{{ registry.name }}.{{ dns.clusterid }}.{{ dns.domain }}.
   ;
   ```

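As a sanity check on the `reverse.j2` entry above: `ipaddr.split('.')[3]` simply extracts the last octet of the registry IP. A quick shell sketch, using the example values from this section, shows what the rendered PTR line would look like:

```shell
# Example values from this guide; substitute your own
ip="192.168.87.188"        # registry.ipaddr
name="registry"            # registry.name
domain="ocp4.example.com"  # dns.clusterid + dns.domain
last_octet="${ip##*.}"     # shell equivalent of ipaddr.split('.')[3]
echo "${last_octet} IN PTR ${name}.${domain}."
```

which prints `188 IN PTR registry.ocp4.example.com.`
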
Now that the helper is all set with its configuration, let's re-run the playbook, and once it succeeds, reboot `registry.ocp4.example.com` so that it can pick up its IP address via DHCP.

## Run Ansible Automation

### Configurations

Modify the `staging` file to look like below:
```
all:
  hosts:
    localhost:
      ansible_connection: local
  children:
    webservers:
      hosts:
        localhost:
    registries:
      hosts:
        registry.ocp4.example.com:
          ansible_ssh_user: root
          ansible_ssh_pass: <password for ease of installation>
```
> If reusing the helper, the hostname under `registries` would be `localhost` and the credentials underneath removed, as this repo is intended to be run on the helper node
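
For reference, a sketch of that collapsed inventory when reusing the helper (everything runs locally, so no SSH credentials are needed):

```
all:
  hosts:
    localhost:
      ansible_connection: local
  children:
    webservers:
      hosts:
        localhost:
    registries:
      hosts:
        localhost:
```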
In `ansible.cfg`, have the following content, as we will be running this as the `root` user on the helper node.
```
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp
host_key_checking = False
remote_user = root
```
In [group_vars/all.yml](group_vars/all.yml), the following must be changed (the rest is optional):
 * All IPs should now reflect the new subnet, including
   * `helper_vm_ip` (the new IP obtained on the new subnet)
   * All IPs for bootstrap, masters, workers
   * `static_ip.gateway`
 * `registry.host` should point to the IP or FQDN of the host mentioned in the previous step. If reusing the helper, use `helper.ocp4.example.com`; else use (for example) `registry.ocp4.example.com`
 * `registry.product_release_version` must be updated to the latest version of the container image. _(Use the [documentation links](#documentation-links))_
 * `vcenter.network` with the name of the new virtual switch port group, as we want all the new VMs to land on the newly created virtual switch

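A sketch of how the relevant `registry` keys in `group_vars/all.yml` might end up. The hostname and version below are illustrative only; check the [documentation links](#documentation-links) for the current release:

```
registry:
  host: registry.ocp4.example.com      # or helper.ocp4.example.com if reusing helper
  product_release_version: 4.4.3-x86_64  # illustrative; use the latest release
```
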
### Installation in a restricted network

Now that the helper, the registry, and the automation configs are all set, let's run the installation with the command:

```sh
# If vCenter folders exist
ansible-playbook --flush-cache -i staging restricted_ova.yml -e vcenter_preqs_met=true

# If vCenter folders DON'T exist
ansible-playbook --flush-cache -i staging restricted_ova.yml
```

The final network topology should look somewhat like the image below:
![Final network topology](.images/virtual-switch-final.png)

## Final Check

To check if the registry information has been picked up, run the command below on either kind of node, or check the decoded contents of the secret `pull-secret` in `openshift-config` when the cluster is operational:
```sh
# On Master or Bootstrap
cat /etc/containers/registries.conf
```

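On a successful restricted install, `registries.conf` should contain mirror entries pointing the release-image repositories at your local registry. A rough sketch of what to look for is below; the exact source locations come from your install-config's `imageContentSources`, and the port and repo path shown here are illustrative:

```
[[registry]]
  location = "quay.io/openshift-release-dev/ocp-release"
  mirror-by-digest-only = true

  [[registry.mirror]]
    location = "registry.ocp4.example.com:5000/ocp4/openshift4"
```
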
### Things to watch out for
1. The OLM is broken on a restricted install; see link #4 below
2. You have to figure out how to get traffic into the cluster; relying on the helper's DNS won't help, as it is on a different subnet with no internet access. I use `dnsmasq` to route any traffic for the `example.com` domain to the public/accessible IP of the helper node

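A minimal `dnsmasq` entry along those lines, assuming the helper's reachable IP is `192.168.86.10` (an illustrative value):

```
# /etc/dnsmasq.d/ocp4.conf (illustrative)
# Answer all queries for *.example.com with the helper's reachable IP
address=/example.com/192.168.86.10
```
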

## Documentation Links
1. [Create a mirror registry for installation in a restricted network](https://docs.openshift.com/container-platform/4.4/installing/install_config/installing-restricted-networks-preparations.html)
2. [Installing a cluster on vSphere in a restricted network](https://docs.openshift.com/container-platform/4.4/installing/installing_vsphere/installing-restricted-networks-vsphere.html)
3. [OpenShift 4.2 disconnected install (blog)](https://www.openshift.com/blog/openshift-4-2-disconnected-install)
4. [Using Operator Lifecycle Manager on restricted networks](https://docs.openshift.com/container-platform/4.4/operators/olm-restricted-networks.html)