EKS Anywhere, customizing ubuntu templates for specific requirements

Ambar Hassani
7 min read · Oct 27, 2022


This article is part of the series EKS Anywhere, extending the Hybrid cloud momentum | by Ambar Hassani | Apr, 2022 | Medium

MOTIVATION FOR THIS ARTICLE

In the previous article of this series, we created baseline ubuntu templates for specific Kubernetes versions using the image-builder process. There are umpteen reasons why one might want to further customize the ubuntu template. In my case, the customizations include:

  • Installing custom SSL certs
  • iSCSI client
  • various OS packages
  • SDC client for Dell PowerFlex storage

WHAT’S THE CHALLENGE?

The official procedure on the EKS Anywhere website did not work for me and appears to be flawed. That said, I have raised a defect with AWS on their GitHub project: Customize Ubuntu OVA workflow is broken after Self creation of Ubuntu images · Issue #3840 · aws/eks…

Instead, this article walks through the exact steps to execute when building customized ubuntu templates.

IMPORTANT NOTE: Although this procedure has been documented for customizing our ubuntu template created earlier to support Kubernetes version 1.21, the same can be followed to customize the other templates created for higher Kubernetes versions (1.22, 1.23)

HOW DO WE START

The below video narrates the exact steps to create the customizations in the ubuntu OS template that we created in the previous article.

You should already have a baseline ubuntu OS template created in the previous article, e.g., ubuntu-2004-kube-v1.21 or similar, based on the Kubernetes version.

Also, ensure that the KeyCloak server is already created as per this article so that we can retrieve its self-signed certificate.

STEP-1 SSH into EKS Anywhere administrative machine as image-builder user

Once logged in as the image-builder user, use the below export statements (change the data center name for your vSphere)

Please change the template name to the one that needs to be customized

export vsphere_datacenter=IAC-SSC
export OLD_TEMPLATE_NAME=ubuntu-2004-kube-v1.21
export NEW_TEMPLATE_NAME=ubuntu-2004-kube-v1.21-custom

Note that govc is already installed in the image-builder user profile along with the vSphere connection attributes. In a nutshell, you can simply execute the below command after changing the storage (-ds) and network (-net) attributes as per your environment.
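For reference, govc reads its connection settings from standard environment variables. If your profile does not already export them, they look roughly like this — every value below is a placeholder for your own environment, not something from this setup:

```shell
# Placeholder values -- substitute your own vCenter details
export GOVC_URL='vcenter.example.com'               # vCenter endpoint
export GOVC_USERNAME='administrator@vsphere.local'  # vCenter login
export GOVC_PASSWORD='changeme'
export GOVC_INSECURE=1                              # accept self-signed vCenter certs
```

With these exported, all of the govc commands in this article will authenticate against your vCenter.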

govc vm.clone -on=false -vm=/$vsphere_datacenter/vm/Templates/${OLD_TEMPLATE_NAME} -folder=/$vsphere_datacenter/vm/Templates -ds=CommonDS -net=iac_pg ${NEW_TEMPLATE_NAME}

Create an SSH key to support a temporary user in the customization virtual machine via cloud-init

cd $HOME
rm -rf .ssh/custom-image-builder*
ssh-keygen -t rsa -f ~/.ssh/custom-image-builder -C "custom-image-builder"

We will now create two files, metadata.yaml and userdata.yaml to support the cloud-init method for ubuntu customization

metadata.yaml

cd $HOME
cat > $HOME/metadata.yaml <<EOF
instance-id: cloud-vm
EOF

userdata.yaml

cd $HOME
cat > $HOME/userdata.yaml <<EOF
#cloud-config
users:
  - default
  - name: custom-image-builder
    primary_group: custom-image-builder
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, wheel
    ssh_import_id: None
    lock_passwd: true
    ssh_authorized_keys:
      - sshpublickey_value
EOF

Run the below commands to replace the sshpublickey_value placeholder in userdata.yaml with the SSH public key created in the above step

sshpublickey=$(cat $HOME/.ssh/custom-image-builder.pub)
echo $sshpublickey
sed -i "s|sshpublickey_value|$sshpublickey|g" $HOME/userdata.yaml

Once the userdata.yaml has the correct public key value, execute the below command to set the metadata and userdata for the cloned virtual machine

cd $HOME
export METADATA=$(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) \
USERDATA=$(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })

govc vm.change -vm "${NEW_TEMPLATE_NAME}" \
-e guestinfo.metadata="${METADATA}" \
-e guestinfo.metadata.encoding="gzip+base64" \
-e guestinfo.userdata="${USERDATA}" \
-e guestinfo.userdata.encoding="gzip+base64"
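If you want to confirm the payloads are well-formed before powering on the VM, the gzip+base64 encoding round-trips cleanly. A quick local check (assumes GNU coreutils base64 with -w0, as used above):

```shell
# Encode a sample the same way as above, then decode it back to verify
payload='instance-id: cloud-vm'
encoded=$(printf '%s\n' "$payload" | gzip -c9 | base64 -w0)
decoded=$(printf '%s' "$encoded" | base64 -d | gunzip)
echo "$decoded"
```

The same decode pipeline run against your real $METADATA or $USERDATA value should print back the exact YAML you wrote.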

Power on the cloned virtual machine and get its IP address

govc vm.power -on ${NEW_TEMPLATE_NAME}
govc vm.info ${NEW_TEMPLATE_NAME}
Note: the IP address will take some time to appear for the second command

Once we have the IP address, SSH into the cloned virtual machine using the private key and custom-image-builder as the username.

cd $HOME
NEW_TEMPLATE_VM_IP=$(govc vm.info ${NEW_TEMPLATE_NAME} | grep -oP '(?<=IP address: ).*')

cd $HOME
ssh -i .ssh/custom-image-builder custom-image-builder@$NEW_TEMPLATE_VM_IP

STEP-2 Let the customizations BEGIN

Now that we have successfully SSH’d into the cloned virtual machine, we can start deploying additional packages or making any other required alterations, such as adding SSL certs.

In this case, we will install the iSCSI and tree packages and also deploy the KeyCloak self-signed certificate in the trust store.

Install packages (I specifically needed packages for iSCSI and the PowerFlex SDC, plus some additional ones; adjust the list as per your requirements)

sudo apt update -y
sudo apt install open-iscsi tree libaio1 linux-image-5.15.0-105-generic linux-headers-5.15.0-105-generic linux-image-extra-virtual libnuma1 uuid-runtime nano sshpass unzip gcc make -y
sudo systemctl enable --now iscsid

My KeyCloak server is already deployed as per this article and running at the FQDN keycloak.thecloudgarage.com. The below commands will scan the certificate and create a .crt file to be placed in the trust store of the cloned virtual machine.

In addition to this I am also supplying a common wildcard cert to be a part of the trust store.

cd /usr/local/share/ca-certificates
cat <<EOF > thecloudgarage.crt
-----BEGIN CERTIFICATE-----
MIID4jCCAsqgAwIBAgIUXtFYp7Iq2sGhKdpX0wbWCTemlGUwDQYJKoZIhvcNAQEL
BQAwcDELMAkGA1UEBhMCSU4xCzAJBgNVBAgMAk1IMQ8wDQYDVQQHDAZNdW1iYWkx
DjAMBgNVBAoMBXN0YWNrMQ8wDQYDVQQLDAZkZXZvcHMxIjAgBgNVBAMMGSoub2lk
Yy50aGVjbG91ZGdhcmFnZS5jb20wHhcNMjMwNjA1MTAxMTQ3WhcNMjUwNjA0MTAx
MTQ3WjBwMQswCQYDVQQGEwJJTjELMAkGA1UECAwCTUgxDzANBgNVBAcMBk11bWJh
aTEOMAwGA1UECgwFc3RhY2sxDzANBgNVBAsMBmRldm9wczEiMCAGA1UEAwwZKi5v
aWRjLnRoZWNsb3VkZ2FyYWdlLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC
AQoCggEBANfTqtO7MQhwneQfVa0GKMTEP0N9340r8vANvjxMMTbTOxXjQwwEo+w2
9bYBual6AoOsXwvcKDUT2lTpQArrdomwI0llKBpbugid922XPtx7GauXYeE2D/oV
hvEzEB1VC5pmc0JjH3P/PKiWJp5AmB7x6sYKQ4y4k56xYrlYw7LEV81pPlh7dvpA
+vzdYOXTUGb/9rLHbuTHwOW9+x6ezg/gHuWjLtHKCzF8/bbU941a6wLvNpvoYhRE
34eXXCwdUHEU9Ux6HWX7DIQ1mAqyCRS9etqQSpCO4cuh3t5u48zC8hY/iWe8Zm7y
RTe7Wf0lXY7ONizWKyJaHgs9CNLRt1sCAwEAAaN0MHIwCwYDVR0PBAQDAgWgMBMG
A1UdJQQMMAoGCCsGAQUFBwMBME4GA1UdEQRHMEWCEnRoZWNsb3VkZ2FyYWdlLmNv
bYIUKi50aGVjbG91ZGdhcmFnZS5jb22CGSoub2lkYy50aGVjbG91ZGdhcmFnZS5j
b20wDQYJKoZIhvcNAQELBQADggEBAAimXk4BfIeVEUQl2zBdsSF54cN2I6hQpqg5
YNSbvwlZ9oKTZ7IcKDQqpfNp3nFyo7K4uHftjnQhJDb/o2ZmmgzBPf1PQXp+/oL8
pISBqltBEFlDR9CMgulVInVy5+CCZMN2P66RTvGvmRl9gLrKhuMdoRm7Equ7jDIK
xbKXRbrRCH0EMYMG3hrLntFZ9Oj0MI9/Vn6jiM9J/e7ZLJY2HydWAole14Pm7Dn6
nSgi7JcL6A9KnKrLf+MMtQAp8Yhb9smhAGbU/ZEGvyyzbeEbbvuFXVpizvPN/mg0
ETuo/amuh+Dr89yv4eenn/fZH6+mAngZz5KusLUGWDRNT8Nba/I=
-----END CERTIFICATE-----
EOF
export fqdnOfKeycloakServer=keycloak.thecloudgarage.com
echo -n | openssl s_client -connect $fqdnOfKeycloakServer:443 -servername $fqdnOfKeycloakServer \
| openssl x509 > $HOME/$fqdnOfKeycloakServer.crt
cat $HOME/$fqdnOfKeycloakServer.crt > $HOME/keycloak.crt
sudo cp $HOME/keycloak.crt /usr/local/share/ca-certificates && sudo update-ca-certificates
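Before (or after) adding any certificate to the trust store, it is worth eyeballing its subject and expiry. A small helper for that — inspect_cert is just a name I'm using here, not a standard tool:

```shell
# Print subject, issuer and expiry of a PEM certificate file
inspect_cert() {
  openssl x509 -in "$1" -noout -subject -issuer -enddate
}
# e.g. inspect_cert /usr/local/share/ca-certificates/keycloak.crt
```

A certificate whose enddate has already passed will silently fail TLS validation later, so this one-liner can save a confusing debugging session.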

Next, we need to ensure that virtual machines cloned from the template do not end up with duplicate IQNs for the iSCSI client. This is the trickier part. As a workaround, we will create a service that renders an IQN unique to the hostname whenever EKS Anywhere creates a cluster node from this template.

To do so, follow the below procedure while still logged in to the cloned virtual machine

sudo su
nano /etc/iscsi/set_iscsi_initiator.sh

# Insert the below content in the bash script and save.

#!/bin/bash
echo -e InitiatorName=iqn.1993-08.org.debian:01:$(hostname) > /etc/iscsi/initiatorname.iscsi
systemctl restart iscsid

# Make the script executable
chmod +x /etc/iscsi/set_iscsi_initiator.sh

# Next create a service by creating a file
cd /etc/systemd/system
nano set_iscsi_initiator.service

# Copy the below contents in the service file
[Unit]
Description=Set unique iqn for ISCSI
After=network.target

[Service]
ExecStart=/etc/iscsi/set_iscsi_initiator.sh
Restart=on-failure
User=root
Group=root
Type=simple

[Install]
WantedBy=multi-user.target

# Next, run the below commands to set it up as a service that will run upon start and reboots

sudo systemctl daemon-reload
sudo systemctl enable set_iscsi_initiator.service
sudo systemctl start set_iscsi_initiator.service
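The one line that the boot-time script writes can be sanity-checked locally before relying on it; it simply derives the initiator name from the hostname:

```shell
# Render the per-host initiator name exactly as set_iscsi_initiator.sh does
iqn_line="InitiatorName=iqn.1993-08.org.debian:01:$(hostname)"
echo "$iqn_line"
```

Because the hostname of every EKS Anywhere node is unique, each clone of the template ends up with a unique IQN.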
With the above two steps, the required customizations for my two use-cases are fulfilled.

Next, I need to update the Linux kernel to accommodate my SDC client for PowerFlex storage

uname -r
grep -A100 submenu /boot/grub/grub.cfg |grep menuentry
grep 'menuentry \|submenu ' /boot/grub/grub.cfg | cut -f2 -d "'"

cp /etc/default/grub /etc/default/grub.backup

sed -i \
s/GRUB_DEFAULT=0/GRUB_DEFAULT='"Advanced options for Ubuntu>Ubuntu, with Linux 5.15.0-105-generic"'/g \
/etc/default/grub
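Since a wrong GRUB_DEFAULT value leaves the VM booting the old kernel, the substitution can be rehearsed on a scratch copy first. A sketch using a throwaway file (the kernel string is the same one targeted above):

```shell
# Rehearse the GRUB_DEFAULT rewrite on a throwaway file before touching /etc/default/grub
printf 'GRUB_DEFAULT=0\n' > /tmp/grub.test
sed -i 's/GRUB_DEFAULT=0/GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.15.0-105-generic"/' /tmp/grub.test
cat /tmp/grub.test
```

The "submenu>menuentry" string must match the titles printed by the grep commands above exactly, quotes and all, or GRUB falls back to the first entry.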

sudo update-grub
reboot

At this stage, we will be kicked out of the SSH session upon reboot. Once the OS is back, SSH into the VM again and execute the below to install the PowerFlex SDC.

### This uses auto-driver-sync for PowerFlex Ubuntu SDC ###
### ref: https://www.youtube.com/watch?v=sSUDe6o6pDY ###

sudo su
wget https://pflex-packages.s3.eu-west-1.amazonaws.com/pflex-45/Software_Only_Complete_4.5.2_135/PowerFlex_4.5.2000.135_SDCs_for_manual_install.zip
unzip PowerFlex_4.5.2000.135_SDCs_for_manual_install.zip
cd PowerFlex_4.5.2000.135_SDCs_for_manual_install
unzip PowerFlex_4.5.2000.135_Ubuntu20.04_SDC.zip
cd PowerFlex_4.5.2000.135_Ubuntu20.04_SDC
tar -xvf EMC-ScaleIO-sdc-4.5-2000.135.Ubuntu.20.04.4.x86_64.tar
./siob_extract EMC-ScaleIO-sdc-4.5-2000.135.Ubuntu.20.04.4.x86_64.siob
MDM_IP=10.204.111.111,10.204.111.112,10.204.111.113 dpkg -i EMC-ScaleIO-sdc-4.5-2000.135.Ubuntu.20.04.4.x86_64.deb

touch /etc/emc/scaleio/scini_sync/.build_scini
systemctl restart scini

###

Verify the MDMs
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms

Example output:
Retrieved 1 mdm(s)
MDM-ID 065519bf7107ee0f SDC ID 6f804edf0000000b INSTALLATION ID 42b7d23a1d40b26e IPs [0]-10.204.111.111 [1]-10.204.111.112 [2]-10.204.111.113

###

sudo su
cat <<EOF > /etc/emc/scaleio/set_scini_initiator.sh
#!/bin/bash
if ls -al /etc/emc/scaleio | grep scini_test.txt; then
systemctl restart scini
exit
else
echo -e "test" > /etc/emc/scaleio/scini_test.txt
export uuid=\$(uuidgen)
# PLEASE CHANGE THE IP ADDRESSES OF THE MDM SERVERS
echo -e "ini_guid \$uuid\nmdm 10.204.111.111,10.204.111.112,10.204.111.113" > /etc/emc/scaleio/drv_cfg.txt
systemctl restart scini
fi
EOF

# Make the script executable
chmod +x /etc/emc/scaleio/set_scini_initiator.sh

# Next create a service by creating a file
cd /etc/systemd/system/
cat <<EOF > set_scini_initiator.service
# Copy the below contents in the service file
[Unit]
Description=Set unique uuid for powerflex sdc
After=network.target

[Service]
ExecStart=/etc/emc/scaleio/set_scini_initiator.sh
Restart=on-failure
User=root
Group=root
Type=simple

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable set_scini_initiator.service
sudo systemctl start set_scini_initiator.service
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
sudo service ssh restart
sudo chpasswd <<<"root:ubuntu"
sudo chpasswd <<<"ubuntu:ubuntu"

That’s it, the customizations are done.

STEP-3 CREATE THE TEMPLATE METHODICALLY

If this is not done properly, the customized ubuntu template will not work and EKS Anywhere cluster creation will fail. Follow the exact steps given below, executing the commands while still logged in to the cloned virtual machine over SSH.

sudo su
sudo rm -rf /etc/emc/scaleio/scini_test.txt
echo -n > /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id
cloud-init clean
cloud-init clean -l
rm -rf /etc/netplan/50-cloud-init.yaml
rm -rf /etc/hostname
touch /etc/hostname
sed -i 's/preserve_hostname: true/preserve_hostname: false/g' /etc/cloud/cloud.cfg

Once you have executed the commands, EXIT the SSH session to the cloned virtual machine; we should now be back on the EKS Anywhere administrative machine as the image-builder user.

Execute the below commands on the EKS Anywhere administrative machine while logged in as an image-builder user

export vsphere_datacenter=IAC-SSC
export OLD_TEMPLATE_NAME=ubuntu-2004-kube-v1.21
export NEW_TEMPLATE_NAME=ubuntu-2004-kube-v1.21-custom

govc vm.power -off ${NEW_TEMPLATE_NAME}
govc snapshot.create -vm ${NEW_TEMPLATE_NAME} root
govc vm.markastemplate ${NEW_TEMPLATE_NAME}
govc tags.attached.ls -r /$vsphere_datacenter/vm/Templates/${OLD_TEMPLATE_NAME}
govc tags.attach os:ubuntu /$vsphere_datacenter/vm/Templates/${NEW_TEMPLATE_NAME}
OLD_TEMPLATE_EKSD_RELEASE_TAG=$(govc tags.attached.ls -r /$vsphere_datacenter/vm/Templates/${OLD_TEMPLATE_NAME} | grep eksd)
govc tags.attach $OLD_TEMPLATE_EKSD_RELEASE_TAG /$vsphere_datacenter/vm/Templates/${NEW_TEMPLATE_NAME}
now1=$(date +'%F')
now2=$(date +'%H')
now3=$(date +'%M')
now=$now1-$now2-$now3
govc object.rename /$vsphere_datacenter/vm/Templates/${OLD_TEMPLATE_NAME} ${OLD_TEMPLATE_NAME}-deprecated-$now
govc object.rename /$vsphere_datacenter/vm/Templates/${NEW_TEMPLATE_NAME} ${OLD_TEMPLATE_NAME}
govc session.logout
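As an aside, the three date calls above can be collapsed into a single one; the deprecated template name simply gains a sortable YYYY-MM-DD-HH-MM timestamp suffix:

```shell
# Same timestamp suffix as the now1/now2/now3 dance, in one date call
now=$(date +'%F-%H-%M')
echo "ubuntu-2004-kube-v1.21-deprecated-$now"
```

The sortable suffix means older deprecated templates always list before newer ones in the vSphere folder.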

Now we are all set to create the EKS Anywhere cluster with the customized ubuntu OS template.

Any further customizations once the new template is created have to undergo the same procedure again.

cheers

Ambar@thecloudgarage

#iwork4dell

