EKS Anywhere, Part-2: DellEMC CSI for PowerScale

Ambar Hassani
10 min read · Dec 30, 2022


This article is part of the EKS Anywhere series: EKS Anywhere, extending the Hybrid cloud momentum.

This is Part-2 of the PowerScale CSI and EKS Anywhere article. It discusses the deployment of a sample persistent workload and tests various functionalities. Part-1 of this article can be found at this hyperlink

Recall our use case from Part-1 of this article. We will be leveraging a ReadWriteMany, NFS-based implementation pattern, where the persistence layer for our stateful workload is implemented over the Dell EMC PowerScale CSI. The visual below presents a high-level summary.

Let’s Begin

What are we going to test

  • ReadWriteMany support
  • Persistence (pod and deployment deletion)
  • Snapshots (backup/restore)
  • Volume Expansions via SmartQuotas

Note that all the YAML and bash scripts used in this example are already located in the $HOME/eks-anywhere/busybox/powerscale-rwx directory of the EKS Anywhere administrative machine. These were created via git cloning during the EKS Anywhere administrative machine creation process documented as part of the saga series.

Recall that we have PowerScale CSI v2.5.0 deployed from the previous article:

kubectl get pods --selector=app=isilon-controller -n csi-powerscale -o=jsonpath='{.items[0].status.containerStatuses[1].image}'

docker.io/dellemc/csi-isilon:v2.5.0

Let’s observe the initial state

Create the ReadWriteMany persistent volume claim via the PowerScale CSI. The file exhibits for the PVC and the deployment are shown below. These files were already placed on the EKS Anywhere administrative machine when it was set up using Terraform.

cd $HOME
more $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-rwx-pvc-powerscale
  labels:
    name: busybox-rwx-pvc-powerscale
    csi: powerscale
spec:
  storageClassName: powerscale
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

kubectl create -f $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx-pvc.yaml

Observe the PVC and PV created via kubectl
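For example, the following commands list the claim and its bound volume (the PV name is auto-generated by the CSI driver, so yours will differ):

kubectl get pvc busybox-rwx-pvc-powerscale
kubectl get pv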

Observe the NFS export created via PowerScale dashboard

Observe the RWX PVC created via PowerScale file explorer. Note that the persistent volumes created are intelligently prefixed with the EKS Anywhere cluster name for easier search and analysis

Create the busybox deployment that will use the RWX persistent volume created above. The YAML file can be found here

cd $HOME
more $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-rwx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  selector:
    matchLabels:
      run: busybox
  template:
    metadata:
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - sh
        image: busybox
        imagePullPolicy: Always
        name: busybox
        stdin: true
        tty: true
        volumeMounts:
        - name: busybox-rwx-pvc
          mountPath: "/mnt1"
      restartPolicy: Always
      volumes:
      - name: busybox-rwx-pvc
        persistentVolumeClaim:
          claimName: busybox-rwx-pvc-powerscale

kubectl create -f $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx.yaml

Observe the volume attachment for first pod

kubectl describe pod busybox-rwx-6578cd97f8-2k2bc

Observe the volume attachment for second pod

kubectl describe pod busybox-rwx-6578cd97f8-ssndz

Notice via the PowerScale file explorer that there are no files in the PVC yet

Connect to the shell of the first pod

kubectl attach busybox-rwx-6578cd97f8-2k2bc -c busybox -i -t

From the container’s console, list the mounts to verify that the persistent volume is attached. Notice that the PVC is mounted at /mnt1 and served via 172.24.165.41:/ifs/data/csi/eksa1-vol-06d14b2f10
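Inside the container, commands like these reveal the NFS mount (the export IP and path shown above are specific to this environment):

df -h | grep mnt1
mount | grep mnt1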

Create files in the mount from the first pod
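For example, from the first pod's shell (the file names here are illustrative):

cd /mnt1
echo "hello from pod 1" > file1.txt
echo "hello again from pod 1" > file2.txt
ls -l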

Observe in the PowerScale file explorer that the files can be viewed under the PVC path

Connect to the shell of the second pod

kubectl attach busybox-rwx-6578cd97f8-ssndz -c busybox -i -t

Verify that the file created from the other pod exists

Verify that we can write to the PVC from the second pod
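From the second pod's shell, the files written by the first pod should be visible, and new writes should succeed, confirming the RWX semantics (file names again illustrative):

ls -l /mnt1
echo "hello from pod 2" > /mnt1/file3.txt
echo "again from pod 2" > /mnt1/file4.txt
ls -l /mnt1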

Observe in the PowerScale file explorer that all 4 files can be viewed under the PVC path

Testing persistence for various scenarios

Persistence after Pod deletion

Delete both the busybox pods

for each in $(kubectl get pods --selector=run=busybox --no-headers -o custom-columns=":metadata.name");
do
kubectl delete pod $each
done

New pods should be running. Exec into any of the new pods and verify that the previously created files still exist in the mount
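For example, substituting one of the newly created pod names:

kubectl get pods --selector=run=busybox
kubectl exec -it <new-busybox-pod> -- ls -l /mnt1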

Persistence upon pod deletion is confirmed as per the above visual

Persistence after recreating Deployment

Delete the existing busybox deployment
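Assuming the deployment was created from the same YAML file, it can be deleted with:

kubectl delete -f $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx.yaml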

Recreate the busybox deployment

kubectl create -f $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx.yaml

New pods should be recreated. Exec into any of the new pods and verify that the previously created files still exist in the mount

Above visual confirms persistence after recreating the busybox deployment

Volume Snapshots

Next, we move on to the snapshot capabilities introduced in Kubernetes via the external-snapshotter GA project. In this scenario, we will observe the creation of snapshots for the persistent volume created above.

Before we begin, let’s understand some key terms that are used in combination to create the snapshots

  • VolumeSnapshotClass (analogous to a storage class, but for creating snapshots)
  • VolumeSnapshot (a request for a snapshot, referencing the above snapshot class)
  • VolumeSnapshotContent (the actual snapshot content provisioned on the storage system)

These concepts are described in further detail at Volume Snapshots | Kubernetes

And now on to the important files that are used for creating these snapshots

powerscale-volumesnapshotclass.yaml (template to create the volume snapshot class). As you can see, this volume snapshot class leverages the PowerScale CSI driver to provision all the snapshots


cd $HOME
more eks-anywhere/powerscale/powerscale-volumesnapshotclass.yaml

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: powerscale-snapclass
driver: csi-isilon.dellemc.com

# Configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object
# it is bound to is to be deleted
# Allowed values:
# Delete: the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object.
# Retain: both the underlying snapshot and VolumeSnapshotContent remain.
deletionPolicy: Delete

parameters:
  # The base path of the volumes on the Isilon cluster for which the snapshot is being created.
  # This path should be the same as the IsiPath from the storageClass.
  # Optional: false
  IsiPath: /ifs/data/csi

snapshot-sample.yaml (template for volume snapshots)

cd $HOME
more eks-anywhere/busybox/powerscale-rwx/snapshot-sample.yaml

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: busybox-rwx-snapshot-powerscale-datetime
  labels:
    name: busybox-rwx-snapshot-powerscale-datetime
spec:
  volumeSnapshotClassName: powerscale-snapclass
  source:
    persistentVolumeClaimName: busybox-rwx-pvc-powerscale

create-snapshot.sh (a handy script to create unique, datetime-based snapshots). The datetime placeholder in the template is automatically replaced by the script below to generate unique snapshot names. The template has two references:

  • The volume snapshot class will be used to create the snapshots
  • The persistent volume claim with which the snapshot is associated

As you can see, the script inserts a unique datetime reference into the snapshot template and then uses kubectl to create the snapshot

cd $HOME
more eks-anywhere/busybox/powerscale-rwx/create-snapshot.sh

#!/bin/bash
# Generate a unique timestamp and stamp it into a copy of the snapshot template
NOW=$(date "+%Y%m%d%H%M%S")
rm -f $HOME/eks-anywhere/busybox/powerscale-rwx/snapshot.yaml
cp $HOME/eks-anywhere/busybox/powerscale-rwx/snapshot-sample.yaml \
$HOME/eks-anywhere/busybox/powerscale-rwx/snapshot.yaml
sed -i "s/datetime/$NOW/g" $HOME/eks-anywhere/busybox/powerscale-rwx/snapshot.yaml
kubectl create -f $HOME/eks-anywhere/busybox/powerscale-rwx/snapshot.yaml

Create the volume snapshot class and observe the outputs

kubectl create -f $HOME/eks-anywhere/powerscale/powerscale-volumesnapshotclass.yaml
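A quick verification that the class is registered:

kubectl get volumesnapshotclass powerscale-snapclass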

Create the baseline snapshot that will include the 4 files created earlier via the PVC


source $HOME/eks-anywhere/busybox/powerscale-rwx/create-snapshot.sh

Once the snapshot is created, we can view the volumesnapshot and volumesnapshotcontent

kubectl get volumesnapshot --no-headers
kubectl get volumesnapshotcontent --no-headers

We can view the same below and observe the use of the powerscale-snapclass that was created in the step above. Note that powerscale-snapclass consumes the PowerScale/Isilon CSI driver to create and store the snapshots on the PowerScale cluster

We can view the snapshot created in the PowerScale console. Note that the snapshots taken are intelligently prefixed with the EKS Anywhere cluster name

Now that we have the snapshot, we can delete the original persistent volume and create a new persistent volume from the saved snapshot

Let’s delete the busybox deployment and the RWX persistent volume associated with it
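Assuming both were created from the YAML files used earlier, they can be removed with:

kubectl delete -f $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx.yaml
kubectl delete -f $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx-pvc.yaml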

Recreate the RWX persistent volume from the snapshot. To do so, we have a handy script and a sample YAML file that automatically create the persistent volume once the snapshot name is provided

cd $HOME
more $HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc-sample.yaml

# DO NOT CHANGE DATASOURCE NAME, AS IT IS SET AUTOMATICALLY VIA THE SCRIPT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busybox-rwx-restored-pvc-powerscale
  labels:
    name: busybox-rwx-restored-pvc-powerscale
    csi: powerscale
spec:
  storageClassName: powerscale
  dataSource:
    name: volumeSnapshotName
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi


cd $HOME
more $HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc.sh

#!/bin/bash
read -p 'volumeSnapshotName: ' volumeSnapshotName
rm -rf $HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc.yaml
cp $HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc-sample.yaml \
$HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc.yaml
sed -i "s/volumeSnapshotName/$volumeSnapshotName/g" \
$HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc.yaml
kubectl create -f $HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc.yaml

Note that the restore-pvc.sh script prompts for the volume snapshot name, injects it as the dataSource, and builds a persistent volume claim named busybox-rwx-restored-pvc-powerscale via the PowerScale CSI

Let’s execute the script and create the restored persistent volume

source $HOME/eks-anywhere/busybox/powerscale-rwx/restore-pvc.sh

As seen in the above visual, the persistent volume is restored via the snapshot that was created using PowerScale CSI
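A quick check that the restored claim is bound:

kubectl get pvc busybox-rwx-restored-pvc-powerscale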

Now we will redeploy the busybox pods by changing the persistent volume claim name in the deployment YAML to the new claim set up via the snapshot restore step above

 nano $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx.yaml

Save the YAML and apply it
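Only the claimName under the volumes section needs to change; a sketch of the edited stanza, followed by the apply command:

volumes:
- name: busybox-rwx-pvc
  persistentVolumeClaim:
    claimName: busybox-rwx-restored-pvc-powerscale

kubectl apply -f $HOME/eks-anywhere/busybox/powerscale-rwx/busybox-rwx.yaml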

The describe pod output shows that the restored persistent volume is successfully attached to the pods

Now let’s repeat our mount and file listing verification to ensure that data is persisted in the snapshot and the restored persistent volume

We can observe that the new/restored persistent volume is also seen in the PowerScale cluster along with the 4 files

Exec into each of the busybox pods and verify that the 4 files created in the original persistent volume are present

This verifies the backup and restore process along with the snapshot capabilities presented via the PowerScale CSI

Volume Expansion via SmartQuotas feature

PowerScale CSI has built-in procedures to integrate with the SmartQuotas feature. This allows for seamless volume expansion of new persistent volumes. Note that the SmartQuotas feature has to be enabled on the PowerScale cluster before creating the persistent volume. In my case, SmartQuotas was enabled right at the start of this blog exercise.

The snapshot-restored persistent volume seen in the section above was created with 1Gi capacity. We will increase it to 2Gi.

cd $HOME
kubectl patch pvc busybox-rwx-restored-pvc-powerscale -p '{ "spec": { "resources": { "requests": { "storage": "2Gi" }}}}'

As we can see from the below visual the capacity of the volume has been increased from 1Gi to 2Gi
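A quick way to confirm the new size from kubectl once the resize completes:

kubectl get pvc busybox-rwx-restored-pvc-powerscale -o jsonpath='{.status.capacity.storage}'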

We can observe the same in the PowerScale dashboard, where a hard limit for the restored persistent volume has been set to 2Gi. This is done by the PowerScale CSI via calls to the SmartQuotas routines running on the PowerScale cluster. As we continue to expand the volume, the hard limits will keep changing in the SmartQuotas dashboard for the respective persistent volume

With this we come to a close for this article, hoping that you have been able to comprehend the installation of the PowerScale CSI, workload deployment, scenario testing, and more.

cheers,

Ambar@thecloudgarage

#iwork4dell
