EKS Anywhere., Part-1: DellEMC CSI for PowerScale
This article is part of the EKS Anywhere series EKS Anywhere, extending the Hybrid cloud momentum | by Ambar Hassani | Apr, 2022 | Medium
This article is a deep-dive and is hence split into two parts. This is Part-1 of the PowerScale CSI and EKS Anywhere article; Part-2 can be found on this hyperlink
In this article, we are going to explore the implementation, integration and testing of Dell PowerScale CSI with EKS Anywhere.
PowerScale is Dell Technologies' award-winning scale-out NAS solution. It is designed for the extreme demands of AI/ML, Big Data, general NAS use cases, and more. As customers design or transition their critical file-based stateful workloads to Kubernetes, the persistence layer offered by the PowerScale CSI driver becomes a quintessential enabler for these use cases.
For introductory context, one can review DellEMC's CSI coverage at CSI Drivers | Dell Technologies. In addition to the CSI drivers, DellEMC has also built CSM (Container Storage Modules), which extend the drivers with authentication, authorization, observability, replication, and more.
Our use-case:
We will be leveraging an NFS-based ReadWriteMany implementation pattern, where the persistence layer for our stateful workload is delivered by the DellEMC PowerScale CSI driver.
The below visual presents a high-level summary of this pattern. The use case can be further extrapolated to file-sharing workloads such as CI/CD systems, advanced analytics, AI/ML, etc. that require high-performance file sharing on top of the Kubernetes stack.
Goals:
- Implement PowerScale CSI drivers on EKS Anywhere cluster
- Implement snapshotting capabilities via external-snapshotter
- Deploy file sharing workload with RWX persistence layer delivered over PowerScale CSI
- Test various use-cases around persistence, volume expansion, snapshotting (backup & restore)
Pre-requisites:
- The EKS Anywhere administrative machine is set up as per the earlier article
- This can be done on a new or an existing cluster, in either standalone mode or dedicated management-cluster mode
- A PowerScale cluster or OneFS simulator configured and ready to be used for CSI testing
Let’s begin
I am starting with a fresh standalone workload cluster (you can use an existing one or create a new one). My EKS Anywhere cluster's name will be eksa1 and the static IP for the API server will be 172.24.165.11
SSH into the EKS Anywhere administrative machine and run the cluster creation script as shown below to create the standalone workload cluster
cd $HOME
source create-eksa-cluster.sh
#######################IMPORTANT NOTE#################################
keep the name of workload and management cluster EXACTLY THE SAME
in case of creating standalone workload clusters or management clusters
######################################################################
Workload cluster name: eksa1
Management cluster name: eksa1
staticIp for API server High Availability: 172.24.165.11
Kubernetes version 1.21, 1.22, 1.23, etc.: 1.23
EKS Anywhere Cluster creation log
Next switch the kubectl context to work with the standalone EKS Anywhere
cd $HOME
source eks-anywhere/cluster-ops/switch-cluster.sh
clusterName: eksa1
eksa1
kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* eksa1-admin@eksa1 eksa1 eksa1-admin
As one can see from the below kubectl output for my EKS Anywhere workload cluster, AWS EKS Anywhere ships with a standard storage class that is mapped to VMware's CNS CSI driver. This default CSI is also covered in the other article EKS Anywhere., & the default storage class (VMware CSI/CNS) | by Ambar Hassani | May, 2022 | Medium
kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) csi.vsphere.vmware.com Delete Immediate false 7d4h
Automating the entire CSI installation for PowerScale
A handy script has been created that automates the entire process. This includes:
- Deployment of PowerScale CSI driver on EKS Anywhere
- Deployment of External snapshotter
- Creation of Storage class for use with PowerScale CSI driver
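For orientation, the storage class created by the script will look roughly like the sketch below. The provisioner name csi-isilon.dellemc.com is the registered name of the PowerScale CSI driver; the AccessZone, IsiPath, and class-name values shown here are illustrative assumptions and must match your own array and script configuration.

```yaml
# Hedged sketch of a PowerScale storage class; parameter values are
# illustrative and depend on your array configuration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerscale                # assumed class name
provisioner: csi-isilon.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true        # enables the volume-expansion test later
volumeBindingMode: Immediate
parameters:
  AccessZone: System              # assumed OneFS access zone
  IsiPath: /ifs/data/csi          # assumed base path for provisioned volumes
```

allowVolumeExpansion is worth noting here, since one of our stated goals is to test volume expansion against this class.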
You will need to gather the following details for your PowerScale cluster before proceeding: IP/FQDN, username, password, and the PowerScale cluster name.
SSH into the EKS Administrative machine and execute the below script
cd $HOME
source eks-anywhere/powerscale/install-powerscale-csi-driver.sh
Enter EKSA Cluster Name on which CSI driver needs to be installed
clusterName: eksa1
Enter PowerScale CSI release version, e.g. 2.2.0, 2.3.0, 2.4.0, 2.5.0
csiReleaseNumber: 2.5.0
Enter PowerScale Cluster Name
powerScaleClusterName: powerscale01
Enter IP or FQDN of the PowerScale Cluster
ipOrFqdnOfPowerScaleCluster: 172.24.165.41
Enter username of the PowerScale Cluster
userNameOfPowerScaleCluster: admin
Enter password of the PowerScale Cluster
passwordOfPowerScaleCluster:
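Under the hood, the CSI driver consumes these details from a Kubernetes secret (conventionally named isilon-creds). A hedged sketch of the cluster configuration it carries, populated with the values entered above, might look like the following; the field names follow the csi-powerscale secret format but should be verified against the driver release you install.

```yaml
# Sketch of the isilonClusters config carried by the isilon-creds secret;
# verify field names against your csi-powerscale release documentation.
isilonClusters:
  - clusterName: "powerscale01"
    username: "admin"
    password: "<password>"            # placeholder, never commit real credentials
    endpoint: "172.24.165.41"
    endpointPort: "8080"              # assumed OneFS API port
    isDefault: true                   # used when a storage class names no cluster
    skipCertificateValidation: true   # assumption for lab/simulator setups
```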
A snip of the script installation log is shown below.
As seen from the logs, the CSI driver, the external snapshotter, and the storage class have been created on the EKS Anywhere cluster. As part of the CSI driver installation, the below pods are created in the csi-powerscale namespace:
- ReplicaSet named isilon-controller (2 pods with 5 containers each, namely attacher, provisioner, snapshotter, resizer, and driver)
- DaemonSet named isilon-node (2 pods with 2 containers each, namely driver and registrar)
The setup is complete, and the cluster is ready to deploy NFS-based workloads via the PowerScale CSI in ReadWriteMany mode.
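To illustrate what a ReadWriteMany claim against this setup looks like, here is a minimal PVC sketch. The claim name and size are arbitrary, and the storageClassName powerscale is an assumption that must match whatever class the installation script created on your cluster.

```yaml
# Minimal RWX claim sketch; storageClassName must match the class
# created by the installation script.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # NFS-backed shared access across multiple pods
  resources:
    requests:
      storage: 5Gi
  storageClassName: powerscale
```

Any number of pods can then mount this claim simultaneously, which is what enables the file-sharing use cases tested in Part-2.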
Additional Observations
One can further inspect the various CSI pods, containers, and logs by running the below commands.
pscalecontrollerpod1=$(kubectl get pods --selector=app=isilon-controller -n csi-powerscale -o=jsonpath='{.items[0].metadata.name}')
pscalecontrollerpod2=$(kubectl get pods --selector=app=isilon-controller -n csi-powerscale -o=jsonpath='{.items[1].metadata.name}')
You can also run the logs commands below with -c set to any of the five controller containers
>>> attacher, provisioner, snapshotter, resizer, driver
kubectl describe pod $pscalecontrollerpod1 -n csi-powerscale
kubectl describe pod $pscalecontrollerpod2 -n csi-powerscale
kubectl logs $pscalecontrollerpod1 -n csi-powerscale -c driver
kubectl logs $pscalecontrollerpod2 -n csi-powerscale -c driver
# powerscale NODE PODS
pscalenodepod1=$(kubectl get pods --selector=app=isilon-node -n csi-powerscale -o=jsonpath='{.items[0].metadata.name}')
pscalenodepod2=$(kubectl get pods --selector=app=isilon-node -n csi-powerscale -o=jsonpath='{.items[1].metadata.name}')
You can also run the logs commands below with -c set to either of the two node containers
>>> driver, registrar
kubectl describe pod $pscalenodepod1 -n csi-powerscale
kubectl describe pod $pscalenodepod2 -n csi-powerscale
kubectl logs $pscalenodepod1 -n csi-powerscale -c driver
kubectl logs $pscalenodepod2 -n csi-powerscale -c driver
Detailed output for the above commands is captured at the below URL for reference
eksa-powerscale-csi-outputs.md (github.com)
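Since the external-snapshotter is now in place, the backup-and-restore use cases in Part-2 will rely on a VolumeSnapshotClass that references the same driver. A hedged sketch is shown below, assuming the driver name csi-isilon.dellemc.com; the class name and the PVC name are illustrative placeholders.

```yaml
# Hedged sketch of a snapshot class for the PowerScale CSI driver.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerscale-snapclass    # assumed class name
driver: csi-isilon.dellemc.com
deletionPolicy: Delete
---
# Taking a snapshot of an existing PVC provisioned via the PowerScale class.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: powerscale-snapclass
  source:
    persistentVolumeClaimName: my-pvc   # placeholder: your PVC's name
```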
With that, we are all done implementing the PowerScale CSI on our EKS Anywhere cluster. You can now move on to Part-2, where we will deploy and test use cases leveraging the PowerScale CSI.
cheers,
Ambar@thecloudgarage
#iwork4dell