EKS Anywhere., Part-1 Dell EMC Unity-XT CSI
This article is a part of the multi-part story of EKS Anywhere. Access it here: EKS Anywhere, extending the Hybrid cloud momentum | by Ambar Hassani | Apr, 2022 | Medium
In this article, we are going to explore the implementation, integration and testing of Dell Unity-XT CSI with EKS Anywhere.
Versions used:
- Unity CSI version: 2.5.0
- External snapshotter: 5.0
- EKS Anywhere Kubernetes version 1.23
The article is a deep-dive and hence split into 2 parts. This is Part-1 of the Unity-XT CSI and EKS Anywhere article. Part-2 of this article can be found at this hyperlink
Why., motivation for this article: Firstly, most of the public docs for EKS Anywhere leverage examples around the default VMware CNS (Cloud Native Storage) CSI. Second, the CSI documentation often refers to descriptive technicalities with less focus on an end-to-end validated scenario.
In this article, we will observe the simplicity of the Dell EMC CSI installation with EKS Anywhere, which can further enhance the performance and other related attributes of persistent workloads.
As context, one can review DellEMC’s CSI coverage at CSI Drivers | Dell Technologies. In addition to the CSI implementations, DellEMC has also innovated and instrumented CSM (Container Storage Modules), which further heighten the range of experiences by adding authentication, authorization, observability, replication, etc.
As the context for CSI is now set, let’s begin our implementation use-case
Goals:
- Implement CSI drivers for Unity-XT on EKS Anywhere cluster
- Implement snapshotting capabilities via external-snapshotter
- Deploy a persistent MySQL workload with a web-frontend
- Test various use-cases around persistence, snapshotting (backup & restore)
As one can see from the below kubectl output for my EKS Anywhere workload cluster, AWS EKS Anywhere ships with the standard storage class which is mapped to VMware’s CNS CSI. This default CSI is also covered in the other article EKS Anywhere., & the default storage class (VMware CSI/CNS) | by Ambar Hassani | May, 2022 | Medium
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) csi.vsphere.vmware.com Delete Immediate false 7d4h
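Note that Kubernetes expects only one storage class to carry the default annotation at a time. If you later want a Unity-XT storage class to act as the default, here is a minimal sketch of removing the default flag from the standard class (a generic Kubernetes annotation-patch pattern, not something this walkthrough requires):
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'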
In the use-case example for this article, we will use Dell EMC Unity XT as the target NFS platform for our MySQL persistent workload in the EKS Anywhere workload cluster. The persistent volume claims will leverage the Dell EMC CSI drivers version 2.5.0 to create the respective volumes.
Pre-requisites:
- A working EKS Anywhere cluster, either standalone or with a dedicated management cluster. It does not matter which model you use to deploy the cluster., at the end of the day we need a working EKS Anywhere cluster. You can name the EKS Anywhere cluster testunitycsicluster01 just to be consistent with the below implementation example
- As a part of the EKS Anywhere cluster installation, you would have also cloned my eks-anywhere git repository. All file and template references used in this article are relative to paths in that same cloned repository
- At least one static IP address from the same range as the EKS Anywhere cluster network. This will be used for the adminer web application, which is exposed via a MetalLB-based load-balanced service
- A Unity XT array configured with a storage pool and NAS server that will be used for persistent storage. Note down the unique arrayId, IP address, username, and password that will be used to integrate the CSI drivers.
Note: As per the cluster creation process, you should have ear-marked a set of static IP addresses (for the control plane and for load-balancer services). We will need those static IP addresses in this configuration. I am using a single static IP, 172.24.165.26 (yours could/will be different), to expose the adminer web application via the MetalLB load balancer
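For reference, here is a minimal sketch of how that static IP could be handed to MetalLB, assuming a MetalLB v0.13+ CRD-based configuration (the pool and advertisement names are hypothetical; older MetalLB releases use a ConfigMap instead):
# Assumed MetalLB v0.13+ CRD-style config; adjust names and the IP to your environment
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: adminer-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.24.165.26/32   # the single static IP ear-marked above
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: adminer-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - adminer-pool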
Step-1 Install Helm 3 and sshpass on EKS Administrative machine
cd /home/ubuntu
sudo apt-get update -y
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
sudo apt -y install sshpass
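To confirm both tools landed correctly, a quick sanity check (both commands simply print version strings):
helm version --short
sshpass -V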
Step-2 Prepare the EKS Anywhere cluster environment for Unity XT CSI installation
Log on to the EKS Administrative machine and follow the below steps
Target your EKS Anywhere workload cluster from the EKS Anywhere Administrative machine (this example assumes the cluster name is testunitycsicluster01)
export CLUSTER_NAME=testunitycsicluster01
cd $HOME
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
Create an implementation of the Kubernetes External Snapshotter, which is now officially GA
cd $HOME
git clone https://github.com/kubernetes-csi/external-snapshotter/
cd ./external-snapshotter
git checkout release-5.0
Now, we can deploy the external-snapshotter objects
kubectl kustomize client/config/crd | kubectl create -f -
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
serviceaccount/snapshot-controller created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
deployment.apps/snapshot-controller created
At this stage, we can verify if the external snapshotter is up and running
kubectl get pods -n kube-system -l app=snapshot-controller
NAME READY STATUS RESTARTS AGE
snapshot-controller-75fd799dc8-7hq5f 1/1 Running 0 7m18s
snapshot-controller-75fd799dc8-x9kkj 1/1 Running 0 7m18s
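Optionally, you can also confirm that the three snapshot CRDs registered correctly:
kubectl get crd | grep snapshot.storage.k8s.io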
Create a new namespace for our Unity CSI
kubectl create namespace unity
Next, we will download the required files for the Dell EMC Unity XT CSI implementation
cd $HOME
git clone -b v2.5.0 https://github.com/dell/csi-unity.git
Navigate to the installer directory
cd $HOME/csi-unity/dell-csi-helm-installer
Create a file named emptysecret.yaml with the below content in this directory
apiVersion: v1
kind: Secret
metadata:
  name: unity-certs-0
  namespace: unity
type: Opaque
data:
cert-0: ""Next issue the commandkubectl create -f emptysecret.yaml
secret/unity-certs-0 created
Create another file called secret.yaml in this directory and paste the below, replacing your Unity XT arrayId, username, password, and endpoint IP address.
storageArrayList:
- arrayId: "aaaaaaaaaaaaa"
username: "bbbbbbbbbbbb"
password: "cccccccccccc"
endpoint: "https://172.24.165.5/"
skipCertificateValidation: true
    isDefault: true
Next, create a secret from the above created file
kubectl create secret generic unity-creds -n unity --from-file=config=secret.yaml
secret/unity-creds created
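Optionally, verify the secret exists without echoing the credentials (describe prints only key names and byte counts, not values):
kubectl describe secret unity-creds -n unity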
We will now copy the sample values.yaml into our CSI installer directory
cd $HOME/csi-unity/dell-csi-helm-installer
cp $HOME/csi-unity/helm/csi-unity/values.yaml myvalues.yaml
Next, we will modify myvalues.yaml to insert some unique values so that the volumes are rendered with relevant references (cluster name, etc.) and can be easily identified
sed -i "s/csivol/$CLUSTER_NAME-vol/g" myvalues.yaml
sed -i "s/csi-snap/$CLUSTER_NAME-snap/g" myvalues.yamlThe next steps will ensure that the kubernetes versions are correctly referenced in the Helm chartCopy the value of Kubernetes version installed on the EKS Anywhere cluster nodes. As you can see that in my case the version (highlighted below) is v1.21.9-eks-c9274eakubectl get nodes
The next steps will ensure that the Kubernetes version is correctly referenced in the Helm chart. Copy the value of the Kubernetes version installed on the EKS Anywhere cluster nodes. As you can see, in my case the version (highlighted below) is v1.23.13-eks-6022eca
kubectl get nodes
NAME STATUS ROLES AGE VERSION
testunitycsicluster01-2fp59 Ready control-plane,master 114m v1.23.13-eks-6022eca
testunitycsicluster01-7r2cg Ready control-plane,master 112m v1.23.13-eks-6022eca
testunitycsicluster01-md-0-59f9568584-pklvw Ready <none> 112m v1.23.13-eks-6022eca
testunitycsicluster01-md-0-59f9568584-sfhf9 Ready <none> 112m v1.23.13-eks-6022eca
One can also use the command
kubectl get nodes -o=jsonpath='{.items[0].status.nodeInfo.kubeletVersion}'
Edit the Helm chart for the Unity XT CSI installation
cd $HOME/csi-unity/helm/csi-unity
ls -al
total 24
drwxrwxr-x 3 ubuntu ubuntu 4096 May 19 12:59 .
drwxrwxr-x 3 ubuntu ubuntu 4096 May 19 12:59 ..
-rw-rw-r-- 1 ubuntu ubuntu 619 May 19 12:59 Chart.yaml
drwxrwxr-x 2 ubuntu ubuntu 4096 May 19 12:59 templates
-rw-rw-r-- 1 ubuntu ubuntu 6912 May 19 12:59 values.yaml
As you can see, there is a file named Chart.yaml. Edit this file in your favorite editor and change the highlighted kubeVersion to the Kubernetes version noted for your EKS Anywhere cluster
ORIGINAL FILE CONTENT
name: csi-unity
version: 2.5.0
appVersion: 2.5.0
kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
# If you are using a complex K8s version like "v1.21.3-mirantis-1", use this kubeVersion check instead
# WARNING: this version of the check will allow the use of alpha and beta versions, which is NOT SUPPORTED
# kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
description: |
Unity CSI (Container Storage Interface) driver Kubernetes
integration. This chart includes everything required to provision via CSI as
well as a Unity StorageClass.
keywords:
- csi
- storage
sources:
- https://github.com/dell/csi-unity
maintainers:
- name: DellEMC
CHANGED FILE CONTENT AFTER EDITING AND SAVING THE ABOVE FILE
name: csi-unity
version: 2.5.0
appVersion: 2.5.0
kubeVersion: "v1.23.13-eks-6022eca"
# If you are using a complex K8s version like "v1.21.3-mirantis-1", use this kubeVersion check instead
# WARNING: this version of the check will allow the use of alpha and beta versions, which is NOT SUPPORTED
# kubeVersion: ">= 1.21.0-0 < 1.26.0-0"
description: |
Unity CSI (Container Storage Interface) driver Kubernetes
integration. This chart includes everything required to provision via CSI as
well as a Unity StorageClass.
keywords:
- csi
- storage
sources:
- https://github.com/dell/csi-unity
maintainers:
- name: DellEMC
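If you prefer to script this edit instead of opening an editor, here is a minimal sketch that makes the same change (assumes GNU sed on the administrative machine; it reuses the jsonpath command shown earlier):
# Capture the node kubelet version and rewrite the kubeVersion line in Chart.yaml
KUBE_VERSION=$(kubectl get nodes -o=jsonpath='{.items[0].status.nodeInfo.kubeletVersion}')
sed -i "s|^kubeVersion:.*|kubeVersion: \"${KUBE_VERSION}\"|" $HOME/csi-unity/helm/csi-unity/Chart.yaml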
Step-3 Installing Unity XT CSI
Before starting the installation, here is a quick snapshot of what my Unity-XT hosts dashboard looks like. As you can see, there is no host present for our EKS Anywhere cluster testunitycsicluster01. What we can expect is that once the CSI is installed on the EKS Anywhere cluster, the worker nodes will automatically register with the target Unity-XT system
At this stage, all our pre-requisites are met and if no errors/mistakes were made in the above., then we can issue the below command to install the CSI drivers.
Note: The installation script would take a couple of minutes to finish everything that’s under the hood., so be patient
cd $HOME/csi-unity/dell-csi-helm-installer
./csi-install.sh --namespace unity --values ./myvalues.yaml --skip-verify
Output log: The below log indicates that our installation went through smoothly (after being patient for 3–5 minutes)
./csi-install.sh --namespace unity --values ./myvalues.yaml --skip-verify --skip-verify-node
WARNING: version difference between client (1.26) and server (1.23) exceeds the supported minor version skew of +/-1
WARNING: version difference between client (1.26) and server (1.23) exceeds the supported minor version skew of +/-1
------------------------------------------------------
> Installing CSI Driver: csi-unity on 1.23
------------------------------------------------------
------------------------------------------------------
> Checking to see if CSI Driver is already installed
------------------------------------------------------
Skipping verification at user request
|
|- Installing Driver Success
|
|--> Waiting for Deployment unity-controller to be ready Success
|
|--> Waiting for DaemonSet unity-node to be ready Success
------------------------------------------------------
> Operation complete
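Since csi-install.sh is a wrapper around Helm, the deployed release can also be inspected with standard Helm commands:
helm list -n unity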
Once the CSI installation is successful, you can view the worker nodes automatically registered in the Unity-XT console’s host section. As you can observe in the below snapshot, the worker nodes starting with the prefix testunitycsicluster01 are seen in the dashboard. Also note that since our use-case is NFS, we have not configured any iSCSI in the Kubernetes nodes., so zero initiators are seen against the host entries
Step-4 Observe the functional components of Unity XT CSI installation
As you can see from the below output, there are 2 controller and 2 node pods that are in RUNNING status. The count of controller and node pods is configurable in the myvalues.yaml
kubectl get pods -n unity
NAME READY STATUS RESTARTS AGE
unity-controller-7df987bf94-tbl8k 5/5 Running 0 29s
unity-controller-7df987bf94-zb8fb 5/5 Running 0 29s
unity-node-bm6gh 2/2 Running 1 29s
unity-node-dvfvd 2/2 Running 1 29s
Another thing to note is that the controller pod has 5 containers running (attacher, provisioner, snapshotter, resizer, driver) and the node pod has 2 (driver, registrar); combined together, these create the necessary routines for Unity-XT CSI
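If you want the exact container names to plug into the log commands below, they can be listed straight from the pod spec (the pod names here are from my cluster; substitute yours):
kubectl get pod unity-controller-7df987bf94-tbl8k -n unity -o jsonpath='{.spec.containers[*].name}'
kubectl get pod unity-node-bm6gh -n unity -o jsonpath='{.spec.containers[*].name}'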
You can issue the commands to get logs from individual containers
Example for the controller pod
kubectl logs unity-controller-7df987bf94-tbl8k -n unity -c container-name-of-any-of-the-5-containers
Example for the node pod
kubectl logs unity-node-bm6gh -n unity -c container-name-of-any-of-the-2-containers
One of the interesting logs from the driver container of the node pod tells us how the routines are performed to register the worker nodes on the Unity-XT system
kubectl logs unity-node-bm6gh -n unity -c driver
<snip>
level=info msg="configured csi-unity.dellemc.com" ArrayId=virt2148drw2v6 Endpoint="https://172.24.165.5/" IsDefault=true SkipCertificateValidation=true password="*******" username=admin
time="2022-05-19T15:34:02Z" level=info runid=node-0 msg="Starting goroutine to add Node information to storage array" func="github.com/dell/csi-unity/service.(*service).syncNodeInfoRoutine()" file="/go/src/csi-unity/service/node.go:1835"
<snip>
OK, we are all done with implementing the Unity-XT CSI on our EKS Anywhere cluster. With this step completed you can move over to the Part-2 where we will deploy and test use-cases leveraging the Unity-XT CSI.
cheers,
Ambar@thecloudgarage