EKS Anywhere & the default storage class (VMware CSI/CNS)

Ambar Hassani
9 min read · May 9, 2022

This article is part of the series EKS Anywhere, extending the Hybrid cloud momentum (Ambar Hassani, Medium, Apr 2022)

EKS Anywhere clusters, when deployed, ship with a default storage class based on VMware's vSphere CNS/CSI (Cloud Native Storage). The main goal of CNS is to enable vSphere and vSphere storage, including vSAN, as a platform for running persistent, stateful Kubernetes workloads.

As a default choice, this may be a viable option when customers are consolidating all their resources on Hyper-Converged Infrastructure such as VxRail and do not wish to host separate physical infrastructure for external storage.

In such cases, persistence requirements can be satisfied using abstracted Cloud Native Storage via VMware's CNS and CSI implementation.

Image credit: Feel the AWS Kubernetes love — Dell EMC adds EKS Anywhere to VxRail using VMware on-ramp — Blocks and Files

In a nutshell, if you have deployed EKS Anywhere on Hyper-Converged Infrastructure, you can start spinning up persistent volumes without installing vendor-specific CSI drivers and their associated dependencies.

It may, however, be the case that certain workloads need to consume an external storage array via iSCSI or NFS for high-performance or I/O-intensive requirements. In addition, if you find the existing limitations of VMware CNS a nuisance, you still have the option to deploy non-CNS persistent volumes based on an individual vendor's CSI implementation.

For now, let's just observe the default standard storage class that ships with the cluster…
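If you want to pull the same information from your own cluster, the following commands will show it (the exact fields vary by EKS Anywhere release):

# List the storage classes shipped with the cluster; "standard" is marked as (default)
kubectl get storageclass

# Inspect its details: the csi.vsphere.vmware.com provisioner and the
# storagePolicyName parameter pointing at the vSAN Default Storage Policy
kubectl describe sc standard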

As you can see from the above outputs, the storage class is associated with the vSAN Default Storage Policy. You can read more about it here: About the vSAN Default Storage Policy (vmware.com)

If you have a vSAN implementation, nothing else is required to spin up persistent volumes via the claims defined in your cluster.

VMFS Datastores (Non-vSAN)

Now in my case, I do not have a vSAN datastore. In other words, I do not have a compatible datastore that can be associated with the “vSAN Default Storage Policy”.

Although the VMware CNS CSI does support VMFS, it cannot be used via the “vSAN Default Storage Policy” referenced in the default shipped standard storage class.

So, what's the fix? There is no need to worry, as the same CNS CSI can be leveraged by altering the default storage class. We have to do some additional configuration by creating a new storage policy in vCenter and associating it with the compatible datastore via a tag-based mechanism.

To do so, simply follow the procedure below.

To create a storage policy for local storage, apply a tag to the storage and create a storage policy based on the tag as follows:

  1. From the top-level vSphere menu, select Tags & Custom Attributes
  2. In the Tags pane, select Categories and click New.
  3. Enter a category name, such as eksa. Use the checkboxes to associate it with Datacenter and the storage objects, Folder and Datastore. Click Create.
  4. From the top-level Storage view, select your VMFS volume, and in its Summary pane, click Tags > Assign….
  5. From the Assign Tag popup, click Add Tag.
  6. From the Create Tag popup, give the tag a name, such as eksa and assign it the Category you created. Click OK.
  7. From Assign Tag, select the tag and click Assign.
  8. From top-level vSphere, select VM Storage Policies > Create a Storage Policy. A configuration wizard starts.
  9. In the Name and description pane, enter a name for your storage policy. Record the storage policy name for reference as the storagePolicyName value in StorageClass objects.
  10. In the Policy structure pane, under Datastore specific rules, select Enable tag-based placement rules.
  11. In the Tag based placement pane, click Add Tag Rule and configure:
  • Tag category: Select your category name
  • Usage option: Use storage tagged with
  • Tags: Browse and select your tag name

Confirm or accept defaults in the remaining panes as needed, then click Review and finish, followed by Finish, to create the storage policy.

The snapshots below validate the above procedure.

Click on New Category, enter the details as shown below, and save it.

Switch to Tags and click on New, then add a tag as shown below, selecting the category created above.

Navigate to the top-level menu > Storage and select your VMFS datastore. In my case it's called CommonDS.

Scroll down to the Tags section for the datastore and click on “assign”. Select the above created tag and click on assign to save.

Navigate through the top-menu > policies and profiles > VM storage policies > Create VM storage policy

Click next and in the policy structure select the option “enable tag based placement rules”

Under Rule-1 select the category and browse for the tag that we created earlier

The compatible datastore(s) are listed. In my case there is just one, “CommonDS”, so I click Next.

Review and Finish to complete the procedure at the vCenter end.
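If you prefer the CLI, the tagging portion of this procedure can also be scripted with VMware's govc tool. A rough sketch, reusing the eksa names and the CommonDS datastore from the steps above (the inventory path is environment-specific, and the storage policy itself is still created through the wizard):

# Create the tag category and the tag (both named "eksa", matching the UI steps above)
govc tags.category.create eksa
govc tags.create -c eksa eksa

# Attach the tag to the VMFS datastore; adjust the inventory path to your environment
govc tags.attach eksa /YourDatacenter/datastore/CommonDS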

Once done, we move over to our EKS Anywhere administrative machine and set the kubectl context to target our testwk01 workload cluster.

source /home/ubuntu/eks-anywhere/cluster-ops/switch-cluster.sh
clusterName: testwk01
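For context, the switch-cluster.sh helper essentially points kubectl at the workload cluster's kubeconfig. A minimal equivalent, assuming the default path layout generated by eksctl anywhere (the exact path below is an assumption), would be:

# Point kubectl at the testwk01 workload cluster (path assumed; adjust to where
# "eksctl anywhere create cluster" wrote the kubeconfig on your admin machine)
export KUBECONFIG=/home/ubuntu/eks-anywhere/testwk01/testwk01-eks-a-cluster.kubeconfig
kubectl config current-context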

Next we will delete the default storage class

kubectl delete sc standard

Then apply a new YAML file to recreate the default storage class. This file, called vmfs-default-storage-class.yaml, is already placed inside the /home/ubuntu/eks-anywhere/vmfs-persistence sub-directory. You can see it below; notice the storagePolicyName set to “eksa”, the same name we defined in the above steps. Now, every time a persistent volume claim is raised against the standard storage class, it will leverage this “eksa” storage policy in vSphere, which in turn is associated via the tag-based mechanism with my VMFS datastore named “CommonDS”.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
parameters:
  storagePolicyName: eksa
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

The above is just for reference. Execute the command below to create the new standard storage class, which is also flagged as the default.

cd /home/ubuntu/eks-anywhere/vmfs-persistence
kubectl create -f vmfs-default-storage-class.yaml
storageclass.storage.k8s.io/standard created
kubectl describe sc standard
Name: standard
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: csi.vsphere.vmware.com
Parameters: storagePolicyName=eksa
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>

Next, let's create a persistent volume claim to test the default storage class against the VMFS datastore. A sample PVC YAML file named demo-pvc-vmfs.yaml has already been placed inside the same vmfs-persistence sub-directory.
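For reference, a minimal claim consistent with the 2Gi / ReadWriteOnce request seen in the output below would look roughly like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc-vmfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: standard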

cd /home/ubuntu/eks-anywhere/vmfs-persistence
kubectl create -f demo-pvc-vmfs.yaml
persistentvolumeclaim/demo-pvc-vmfs created
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-pvc-vmfs Bound pvc-9908630b-dd17-41ce-991b-160d4f8622a6 2Gi RWO standard 4s
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-9908630b-dd17-41ce-991b-160d4f8622a6 2Gi RWO Delete Bound default/demo-pvc-vmfs standard 24s
kubectl describe pvc
Name: demo-pvc-vmfs
Namespace: default
StorageClass: standard
Status: Bound
Volume: pvc-9908630b-dd17-41ce-991b-160d4f8622a6
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: demo-pod-vmfs-persistence
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 6m36s (x2 over 6m36s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
Normal Provisioning 6m36s csi.vsphere.vmware.com_vsphere-csi-controller-65d86cf8d7-4xfmw_b35a3648-9e49-45ed-973e-7b0876c010f5 External provisioner is provisioning volume for claim "default/demo-pvc-vmfs"
Normal ProvisioningSucceeded 6m34s csi.vsphere.vmware.com_vsphere-csi-controller-65d86cf8d7-4xfmw_b35a3648-9e49-45ed-973e-7b0876c010f5 Successfully provisioned volume pvc-9908630b-dd17-41ce-991b-160d4f8622a6
kubectl describe pv
Name: pvc-9908630b-dd17-41ce-991b-160d4f8622a6
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection external-attacher/csi-vsphere-vmware-com]
StorageClass: standard
Status: Bound
Claim: default/demo-pvc-vmfs
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
FSType: ext4
VolumeHandle: 3e530339-2b13-4226-aaa7-de58a0f18cbf
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1652072029164-8081-csi.vsphere.vmware.com
type=vSphere CNS Block Volume
Events: <none>

As you can see, the persistent volume has been created through the standard (and default) storage class, which is now backed by the VMFS datastore. As seen in the above output, the volume is provisioned by csi.vsphere.vmware.com.

You can use this persistent volume for a test pod by leveraging a pod manifest named demo-pod-vmfs-persistence.yaml placed in the same vmfs-persistence sub-directory.

A snapshot of this file referencing the persistent volume claim named demo-pvc-vmfs is shown below

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-vmfs-persistence
spec:
  containers:
    - name: busybox
      image: "k8s.gcr.io/busybox"
      volumeMounts:
        - name: demo-vol
          mountPath: "/demo"
      command: [ "sleep", "1000000" ]
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: demo-pvc-vmfs

Simply apply the YAML for this demo pod.

cd /home/ubuntu/eks-anywhere/vmfs-persistence
kubectl apply -f demo-pod-vmfs-persistence.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-pod-vmfs-persistence 1/1 Running 0 26s
kubectl describe pod demo-pod-vmfs-persistence
Name: demo-pod-vmfs-persistence
Namespace: default
Priority: 0
Node: testworkload01-md-0-79cc6b47bf-g6s7n/172.24.167.77
Start Time: Mon, 09 May 2022 17:13:21 +0000
Labels: <none>
Annotations: <none>
Status: Running
IP: 192.168.2.213
IPs:
IP: 192.168.2.213
Containers:
busybox:
Container ID: containerd://521d4fc2c018d80299b20bca10c0a48fb8fc44be8cb319cc6afedc9f176fdef7
Image: k8s.gcr.io/busybox
Image ID: sha256:36a4dca0fe6fb2a5133dc11a6c8907a97aea122613fa3e98be033959a0821a1f
Port: <none>
Host Port: <none>
Command:
sleep
1000000
State: Running
Started: Mon, 09 May 2022 17:13:44 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/demo from demo-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k8nt5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
demo-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: demo-pvc-vmfs
ReadOnly: false
kube-api-access-k8nt5:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m52s default-scheduler Successfully assigned default/demo-pod-vmfs-persistence to testworkload01-md-0-79cc6b47bf-g6s7n
Normal SuccessfulAttachVolume 6m49s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-9908630b-dd17-41ce-991b-160d4f8622a6"
Normal Pulling 6m34s kubelet Pulling image "k8s.gcr.io/busybox"
Normal Pulled 6m29s kubelet Successfully pulled image "k8s.gcr.io/busybox" in 4.503541247s
Normal Created 6m29s kubelet Created container busybox
Normal Started 6m29s kubelet Started container busybox

As you can see from the above output, the pod's volume has been mounted from the persistent volume that was created earlier.
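As an optional check (not shown in the output above), you can write a file into the mounted volume, recreate the pod, and confirm the data survives on the VMFS-backed volume:

# Write a marker file onto the VMFS-backed volume
kubectl exec demo-pod-vmfs-persistence -- sh -c 'echo hello-vmfs > /demo/marker.txt'

# Delete and recreate the pod; the PVC (and its data) is left untouched
kubectl delete pod demo-pod-vmfs-persistence
kubectl apply -f demo-pod-vmfs-persistence.yaml

# The marker file written earlier is still there
kubectl exec demo-pod-vmfs-persistence -- cat /demo/marker.txt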

That's it for now! Hopefully this sheds more light on how the default shipped CSI and storage class are implemented in EKS Anywhere clusters. In addition, you have also seen how to steer it toward a VMFS-based datastore!

cheers,

Ambar@thecloudgarage

#iwork4dell
