Integrating the behemoths: public EKS in AWS and Dell APEX Block for AWS

Ambar Hassani
29 min read · Mar 4, 2024

Since the launch of APEX Block on AWS and my previous blog on the cross-cloud hybrid data plane based on PowerFlex Software Defined Storage, many have reached out to me asking for a detailed blog on integrating EKS with the APEX Block on AWS offering from Dell. Although I won’t be going through the benefits of APEX Block on AWS here, it’s worth pointing to the resource that calls them out specifically.

APEX Block Storage for Public Cloud | Dell USA

With all these benefits, it’s time to see how this powerful beast can be integrated with another beast, i.e. EKS running in the AWS cloud.

It’s a longish blog, but I wanted to ensure that no screenshot, detail or log is spared, so that you can visualize an end-to-end implementation and use-case validation. The environment used:

  • EKS k8s version used: 1.29
  • Ubuntu OS for worker nodes: 22.04
  • PowerFlex CSI driver for APEX Block Storage: 2.9.2

The actual implementation is pretty simple and straightforward; it includes:

  • Download and edit the template for the EKS cluster & node-group.
  • Create the cluster and node-group.
  • Deploy the CSI using a 20-second run script.

Let the games begin!

An end-to-end deployment and integration video is included for experiential learning.

Video timeline:

  • 00:00 — Introduction and scenario context
  • 05:55 — Deploying a bare-bones EKS cluster
  • 11:51 — Deploying an EKS worker node group with the PowerFlex Software Defined Client
  • 30:39 — Deploying PowerFlex CSI drivers on EKS worker node group
  • 46:35 — Deploying and validating persistent workloads on EKS with APEX Block Storage in AWS

And if you want to deploy and experiment on your own, here are the exact steps!

Create an empty directory for the EKS cluster template.

CLUSTER_NAME=eks-apex-block-test-1
mkdir -p $HOME/$CLUSTER_NAME

Download the EKS cluster template.

wget https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/powerflex/eks-cluster-templates/eks-cluster-template.yaml -P $HOME/$CLUSTER_NAME
mv $HOME/$CLUSTER_NAME/eks-cluster-template.yaml $HOME/$CLUSTER_NAME/$CLUSTER_NAME.yaml

This is a bare-bones template without any node-group. Once downloaded and saved under the cluster’s name, edit the template to change the cluster name, Kubernetes version and other details such as the VPC, subnets and security group as per your environment.
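
For orientation, the fields you will typically review look roughly like the excerpt below. This is an illustrative sketch following the standard eksctl ClusterConfig schema, with the VPC and subnet IDs taken from my environment (visible in the creation log further down); the downloaded template is the source of truth and may contain additional fields.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-apex-block-test-1        # your cluster name
  region: eu-west-1                  # your region
  version: "1.29"                    # EKS Kubernetes version
vpc:
  id: vpc-00bc8b021dafb7a92          # existing VPC
  securityGroup: sg-xxxxxxxxxxxx     # placeholder; your control-plane security group
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
  subnets:
    private:
      subnet1: { id: subnet-05b6a79635031102c }
      subnet2: { id: subnet-029718643080a4b86 }
      subnet3: { id: subnet-01c68677ddab29f0c }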

Once done, we can create the EKS bare-bones cluster with the below command.

eksctl create cluster -f $HOME/$CLUSTER_NAME/$CLUSTER_NAME.yaml --kubeconfig=$HOME/$CLUSTER_NAME/$CLUSTER_NAME-eks-cluster.kubeconfig

Below is the cluster creation log.

2024-03-03 14:39:01 [ℹ]  eksctl version 0.169.0-dev+1c8cc6244.2024-01-24T00:48:11Z
2024-03-03 14:39:01 [ℹ] using region eu-west-1
2024-03-03 14:39:02 [✔] using existing VPC (vpc-00bc8b021dafb7a92) and subnets (private:map[subnet1:{subnet-05b6a79635031102c eu-west-1a 172.26.2.0/26 0 } subnet2:{subnet-029718643080a4b86 eu-west-1b 172.26.2.64/26 0 } subnet3:{subnet-01c68677ddab29f0c eu-west-1c 172.26.2.128/26 0 }] public:map[])
2024-03-03 14:39:02 [!] custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2024-03-03 14:39:02 [ℹ] using Kubernetes version 1.29
2024-03-03 14:39:02 [ℹ] creating EKS cluster "eks-apex-block-test-1" in "eu-west-1" region with
2024-03-03 14:39:02 [ℹ] will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2024-03-03 14:39:02 [ℹ] will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
2024-03-03 14:39:02 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=eks-apex-block-test-1'
2024-03-03 14:39:02 [ℹ] Kubernetes API endpoint access will use provided values {publicAccess=true, privateAccess=true} for cluster "eks-apex-block-test-1" in "eu-west-1"
2024-03-03 14:39:02 [ℹ] CloudWatch logging will not be enabled for cluster "eks-apex-block-test-1" in "eu-west-1"
2024-03-03 14:39:02 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-west-1 --cluster=eks-apex-block-test-1'
2024-03-03 14:39:02 [ℹ]
2 sequential tasks: { create cluster control plane "eks-apex-block-test-1", wait for control plane to become ready
}
2024-03-03 14:39:02 [ℹ] building cluster stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:39:02 [ℹ] deploying stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:39:32 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:40:02 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:41:03 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:42:03 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:43:03 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:44:03 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:45:04 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:46:04 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:47:04 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:48:05 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-cluster"
2024-03-03 14:50:06 [ℹ] waiting for the control plane to become ready
2024-03-03 14:50:06 [✔] saved kubeconfig as "/home/ubuntu/eks-apex-block-test-1/eks-apex-block-test-1-eks-cluster.kubeconfig"
2024-03-03 14:50:06 [ℹ] no tasks
2024-03-03 14:50:06 [✔] all EKS cluster resources for "eks-apex-block-test-1" have been created
2024-03-03 14:50:06 [ℹ] kubectl command should work with "/home/ubuntu/eks-apex-block-test-1/eks-apex-block-test-1-eks-cluster.kubeconfig", try 'kubectl --kubeconfig=/home/ubuntu/eks-apex-block-test-1/eks-apex-block-test-1-eks-cluster.kubeconfig get nodes'
2024-03-03 14:50:06 [✔] EKS cluster "eks-apex-block-test-1" in "eu-west-1" region is ready

The cluster can also be viewed in the AWS console.

Let’s set our Kubeconfig context to interact with the cluster

export KUBECONFIG=$HOME/$CLUSTER_NAME/$CLUSTER_NAME-eks-cluster.kubeconfig
kubectl get nodes

Next, we will download and populate an EKS cluster node-group template. This node-group will host the APEX Block for AWS CSI drivers based on PowerFlex 4.5 Software Defined Storage.

Most importantly, the choice of cluster node-group template depends on whether you plan to deploy on EKS Kubernetes version 1.29 and above or 1.28 and below. The difference is purely that the Canonical Ubuntu AMI for EKS 1.28 and below is based on Ubuntu 20.04, whereas the AMI for EKS 1.29 and above is based on Ubuntu 22.04.

Download the respective template based on the Kubernetes version you plan to run.

For EKS running Kubernetes 1.28 and below

wget https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/powerflex/eks-cluster-templates/eks-cluster-nodegroup-template-1-28-and-below.yaml -P $HOME/$CLUSTER_NAME
mv $HOME/$CLUSTER_NAME/eks-cluster-nodegroup-template-1-28-and-below.yaml $HOME/$CLUSTER_NAME/$CLUSTER_NAME-nodegroup.yaml

For EKS running Kubernetes 1.29 and above

wget https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/powerflex/eks-cluster-templates/eks-cluster-nodegroup-template-1-29-and-above.yaml -P $HOME/$CLUSTER_NAME
mv $HOME/$CLUSTER_NAME/eks-cluster-nodegroup-template-1-29-and-above.yaml $HOME/$CLUSTER_NAME/$CLUSTER_NAME-nodegroup.yaml

Once downloaded and saved, edit the node-group template to change the cluster name, region, instance type, instance count and associated security group.

Most importantly, the template has been specifically created for the Ireland (eu-west-1) region. Please visit https://cloud-images.ubuntu.com/docs/aws/eks/ and identify the correct AMI matching the Kubernetes version running in the cluster created above and the associated region.

* DO NOT CHANGE THE KEY NAME “GROUP” FOR THE NODE-GROUP LABEL. YOU CAN CHANGE THE VALUE OF THE LABEL FROM md-0 TO SOMETHING ELSE, BUT PREFERABLY LEAVE IT AS md-0.

* DO NOT CHANGE ANY OTHER ATTRIBUTES IN THE OVERRIDE BOOTSTRAP COMMAND. ALL OF THEM ARE REQUIRED FOR THE NODE GROUP WITH THE CUSTOM AMI TO FUNCTION AND FOR THE POWERFLEX SDC INSTALLATION.
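
For reference, the portions of the node-group template you will typically touch look roughly like the sketch below. It follows the standard eksctl nodeGroups schema with illustrative values (the instance type and security group ID are placeholders, and the AMI shown is the one picked up in my run); the actual template also carries the overrideBootstrapCommand and other attributes that must be left untouched, as called out in the notes above.

nodeGroups:
  - name: md-0                       # node-group name
    labels:
      group: md-0                    # keep the key name "group"
    instanceType: t3.xlarge          # illustrative; choose your own
    desiredCapacity: 2               # instance count
    ami: ami-0515d38cb0a9671bd       # Canonical Ubuntu EKS AMI for your K8s version and region
    ssh:
      allow: true
      publicKeyName: eks-ssh         # your EC2 key pair
    securityGroups:
      attachIDs:
        - sg-xxxxxxxxxxxx            # placeholder; your node security group
    # overrideBootstrapCommand: leave exactly as provided in the template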

Next, with the edited template in place, create the target EKS cluster node group.

eksctl create nodegroup --config-file=$CLUSTER_NAME/$CLUSTER_NAME-nodegroup.yaml

In my scenario, the node-group template is based on the cluster’s Kubernetes version of 1.29 and the Ireland region. Below is the log of the node-group creation.



2024-03-04 06:08:46 [ℹ] will use version 1.29 for new nodegroup(s) based on control plane version
2024-03-04 06:08:47 [ℹ] nodegroup "md-0" will use "ami-0515d38cb0a9671bd" [Ubuntu2004/1.29]
2024-03-04 06:08:47 [ℹ] using EC2 key pair "eks-ssh"
2024-03-04 06:08:47 [ℹ] 1 nodegroup (md-0) was included (based on the include/exclude rules)
2024-03-04 06:08:47 [ℹ] will create a CloudFormation stack for each of 1 nodegroups in cluster "eks-apex-block-test-1"
2024-03-04 06:08:47 [ℹ]
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create nodegroup "md-0" } }
}
2024-03-04 06:08:47 [ℹ] checking cluster stack for missing resources
2024-03-04 06:08:47 [ℹ] cluster stack has all required resources
2024-03-04 06:08:47 [ℹ] building nodegroup stack "eksctl-eks-apex-block-test-1-nodegroup-md-0"
2024-03-04 06:08:47 [ℹ] --nodes-min=2 was set automatically for nodegroup md-0
2024-03-04 06:08:47 [ℹ] --nodes-max=2 was set automatically for nodegroup md-0
2024-03-04 06:08:47 [ℹ] deploying stack "eksctl-eks-apex-block-test-1-nodegroup-md-0"
2024-03-04 06:08:47 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-nodegroup-md-0"
2024-03-04 06:09:17 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-nodegroup-md-0"
2024-03-04 06:10:04 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-nodegroup-md-0"
2024-03-04 06:11:08 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-nodegroup-md-0"
2024-03-04 06:11:46 [ℹ] waiting for CloudFormation stack "eksctl-eks-apex-block-test-1-nodegroup-md-0"
2024-03-04 06:11:46 [ℹ] no tasks
2024-03-04 06:11:46 [ℹ] nodegroup "md-0" has 2 node(s)
2024-03-04 06:11:46 [ℹ] node "ip-172-26-2-157.eu-west-1.compute.internal" is ready
2024-03-04 06:11:46 [ℹ] node "ip-172-26-2-51.eu-west-1.compute.internal" is ready
2024-03-04 06:11:46 [ℹ] waiting for at least 2 node(s) to become ready in "md-0"
2024-03-04 06:11:46 [ℹ] nodegroup "md-0" has 2 node(s)
2024-03-04 06:11:46 [ℹ] node "ip-172-26-2-157.eu-west-1.compute.internal" is ready
2024-03-04 06:11:46 [ℹ] node "ip-172-26-2-51.eu-west-1.compute.internal" is ready
2024-03-04 06:11:46 [✔] created 1 nodegroup(s) in cluster "eks-apex-block-test-1"
2024-03-04 06:11:46 [✔] created 0 managed nodegroup(s) in cluster "eks-apex-block-test-1"
2024-03-04 06:11:46 [ℹ] checking security group configuration for all nodegroups
2024-03-04 06:11:46 [ℹ] all nodegroups have up-to-date cloudformation templates

Let’s now set the KUBECONFIG variable and retrieve the node information

export KUBECONFIG=$HOME/$CLUSTER_NAME/$CLUSTER_NAME-eks-cluster.kubeconfig
kubectl get nodes

NAME STATUS ROLES AGE VERSION
ip-172-26-2-170.eu-west-1.compute.internal Ready <none> 3m48s v1.29.0
ip-172-26-2-97.eu-west-1.compute.internal Ready <none> 3m48s v1.29.0

As one can see, the nodes are running v1.29.0. The version naming convention is slightly different from a typical EKS output, as we are running a Canonical Ubuntu AMI for the EKS nodes as opposed to the Bottlerocket AMI based on EKS Distro.

Let’s observe the nodes in the EKS console.

Next, we can ssh into each of the nodes and verify the PowerFlex 4.5 Software Defined Client installation, which was done as a part of the bootstrap script for node-group creation.

The default password for both the ubuntu and root users is “ubuntu”. This can easily be changed manually as required, for example as shown below.
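
A minimal example, using the standard passwd utility once you are logged in:

sudo passwd ubuntu
sudo passwd root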

ssh ubuntu@172.26.2.157

The authenticity of host '172.26.2.157 (172.26.2.157)' can't be established.
ED25519 key fingerprint is SHA256:kPAGwUe7AmgmxKyCm0pVxv7bucJva9saW2A7Teo1U0k.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '172.26.2.157' (ED25519) to the list of known hosts.
ubuntu@172.26.2.157's password:
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 6.5.0-21-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro

This system has been minimized by removing packages and content that are
not required on a system that users do not log into.

To restore this content, you can run the 'unminimize' command.

Expanded Security Maintenance for Applications is not enabled.

30 updates can be applied immediately.
18 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

Once logged into the node-group instance as ubuntu, run the command below to retrieve and verify the connection to the APEX Block on AWS PowerFlex 4.5 MDM (MetaData Manager).

sudo ./get-powerflex-info.sh

System ID            MDM IPs
d7f6c6427c56ab0f     172.26.2.124 172.26.2.125 172.26.2.126

Repeat on the other instances in the node-group.

Make a note of the System ID and the MDM IP addresses, as this value set is required for the CSI driver installation and, optionally, for peering to another PowerFlex system.
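
If you prefer to query the SDC directly instead of using the helper script, the standard SDC utility drv_cfg (which ships under /opt/emc/scaleio/sdc/bin on the node) reports the MDMs the client is configured against; the values should match the output above.

sudo /opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms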

At this stage, we should be able to see our EKS cluster node-group instances as Software Defined Client (SDC) hosts in the PowerFlex GUI that runs in APEX Block on AWS. We can cross-reference the IP addresses of the SDC hosts in the EC2 console as well.

All good so far: the prerequisites for the CSI implementation are complete. We have an EKS cluster with a node-group that has working Software Defined Client connectivity to the APEX Block on AWS cluster.

Next, we will deploy the APEX Block on AWS (PowerFlex 4.5) CSI drivers on the EKS node-group. In my case, the node-group is labeled with the key “group” and the value “md-0”. This lets us pin the CSI drivers to that specific node-group.
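
A quick way to confirm the label is present on the worker nodes (and hence that the CSI pods will have somewhere to land) is a label-selector query; both node-group instances should be listed.

kubectl get nodes -l group=md-0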

Generally speaking, storage requirements fulfilled via CSI are also accompanied by the need to deploy the external-snapshotter, a storage class and a volume snapshot class. To alleviate the typically manual process of deploying all of these alongside the CSI driver deployment (documented here), I have built an automation that deploys all of it. If the node-group deployment is working as shown above, then simply running the below script with the required variables should result in a successful deployment.

Note, as of this writing, the latest CSI driver version for PowerFlex is 2.9.2 and I am using the latest EKS Kubernetes version of 1.29. The script will also work for older CSI driver versions from 2.6.0 onwards.

OK, let’s get these steps out of the way and view the deployment. Retrieve the script, save it in the cluster’s directory and make it executable.

wget https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/powerflex/install-powerflex-csi-driver-with-nodegroups.sh -P $HOME/$CLUSTER_NAME
chmod +x $HOME/$CLUSTER_NAME/install-powerflex-csi-driver-with-nodegroups.sh

Now, we will need a few variables noted up-front to be supplied as user inputs to the script:

  • EKS cluster name
  • Endpoint IP address of the APEX Block on AWS (Generally, the GUI IP address which is used to login to the PowerFlex console)
  • System ID (retrieved in the earlier step)
  • MDM IP addresses (retrieved in the earlier step)
  • Username for the APEX Block on AWS (The username used to login to the PowerFlex GUI running for APEX Block on AWS)
  • Password for the APEX Block on AWS (The password used to login to the PowerFlex GUI running for APEX Block on AWS)
  • Node-group label (that was supplied in the node-group template; mine is md-0)

As long as one has the above inputs, deploying the CSI driver is nothing more than a 20–30 second job! Below is a sample of how my script runs with the provided inputs.

$HOME/$CLUSTER_NAME/install-powerflex-csi-driver-with-nodegroups.sh
Enter Cluster Name on which CSI driver needs to be installed
clusterName: eks-apex-block-test-1
Enter PowerFlex CSI release version, e.g. 2.6.0, 2.7.0, 2.8.0, 2.9.1, 2.9.2
csiReleaseNumber: 2.9.2
Enter IP or FQDN of the powerflex cluster
ipOrFqdnOfPowerFlexCluster: 10.204.111.71
Enter Comma separated MDM IP addresses for powerflex cluster
ipAddressesOfMdmsForPowerFlexCluster: 172.26.2.124,172.26.2.125,172.26.2.126
Enter System Id of the powerflex cluster
systemIdOfPowerFlexCluster: d7f6c6427c56ab0f
Enter username of the powerflex cluster
userNameOfPowerFlexCluster: admin
Enter password of the powerflex cluster
passwordOfPowerFlexCluster: Enter Node Group name on which drivers will be installed, e.g. md-0
nodeSelectorGroupName: md-0

Let’s see the output of the script execution.

Cloning into 'external-snapshotter'...
remote: Enumerating objects: 58069, done.
remote: Counting objects: 100% (3832/3832), done.
remote: Compressing objects: 100% (1546/1546), done.
remote: Total 58069 (delta 2395), reused 3411 (delta 2152), pack-reused 54237
Receiving objects: 100% (58069/58069), 72.38 MiB | 24.29 MiB/s, done.
Resolving deltas: 100% (31199/31199), done.
Branch 'release-5.0' set up to track remote branch 'release-5.0' from 'origin'.
Switched to a new branch 'release-5.0'
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
deployment.apps/snapshot-controller created
namespace/vxflexos created
--2024-03-04 07:00:21-- https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/powerflex/secret.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 444 [text/plain]
Saving to: 'secret.yaml'

secret.yaml 100%[=====================================================================================================>] 444 --.-KB/s in 0s

2024-03-04 07:00:21 (8.48 MB/s) - 'secret.yaml' saved [444/444]

secret/vxflexos-config created

NAME: vxflexos
LAST DEPLOYED: Mon Mar 4 07:00:23 2024
NAMESPACE: vxflexos
STATUS: deployed
REVISION: 1
TEST SUITE: None
--2024-03-04 07:00:23-- https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/powerflex/powerflex-storage-class.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3168 (3.1K) [text/plain]
Saving to: 'powerflex-storage-class.yaml'

powerflex-storage-class.yaml 100%[=====================================================================================================>] 3.09K --.-KB/s in 0s

2024-03-04 07:00:23 (28.4 MB/s) - 'powerflex-storage-class.yaml' saved [3168/3168]

--2024-03-04 07:00:23-- https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/powerflex/powerflex-volumesnapshotclass.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 562 [text/plain]
Saving to: 'powerflex-volumesnapshotclass.yaml'

powerflex-volumesnapshotclass.yaml 100%[=====================================================================================================>] 562 --.-KB/s in 0s

2024-03-04 07:00:24 (36.5 MB/s) - 'powerflex-volumesnapshotclass.yaml' saved [562/562]

storageclass.storage.k8s.io/powerflex-sc created
volumesnapshotclass.snapshot.storage.k8s.io/vxflexos-snapclass created

As a result of the script execution, the created resources can be observed below.

The secret named “vxflexos-config” holds all the connection details for the APEX Block on AWS PowerFlex 4.5 cluster.

kubectl get secret -n vxflexos
NAME TYPE DATA AGE
sh.helm.release.v1.vxflexos.v1 helm.sh/release.v1 1 13m
vxflexos-config Opaque 1 13m
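
If you are curious about what the script placed inside the secret, the data key is named config (it is mounted into the driver at /vxflexos-config/config, as visible in the pod description later). You can decode it with kubectl; its content is roughly the standard CSI PowerFlex array definition built from the inputs supplied to the script. Treat the snippet below as an illustration, since exact field names can vary slightly between driver versions.

kubectl get secret vxflexos-config -n vxflexos -o jsonpath='{.data.config}' | base64 -d

# Roughly equivalent to (illustrative):
- username: "admin"
  password: "<your password>"
  systemID: "d7f6c6427c56ab0f"
  endpoint: "https://10.204.111.71"
  skipCertificateValidation: true
  isDefault: true
  mdm: "172.26.2.124,172.26.2.125,172.26.2.126"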

The CSI driver (controller and node pods) runs in a namespace called vxflexos.

kubectl get pods -n vxflexos
NAME READY STATUS RESTARTS AGE
vxflexos-controller-858d55dd87-8xnkc 5/5 Running 0 12m
vxflexos-controller-858d55dd87-bvdws 5/5 Running 0 12m
vxflexos-node-9zzzm 2/2 Running 0 12m
vxflexos-node-vxvrk 2/2 Running 0 12m

From an individual controller pod, one can see that the node-group label group: md-0 has been rendered as the nodeSelector, which means the controller is indeed running on the node-group we created above.

kubectl describe pod vxflexos-controller-858d55dd87-8xnkc -n vxflexos

Name: vxflexos-controller-858d55dd87-8xnkc
Namespace: vxflexos
Priority: 0
Service Account: vxflexos-controller
Node: ip-172-26-2-157.eu-west-1.compute.internal/172.26.2.157
Start Time: Mon, 04 Mar 2024 07:00:23 +0000
Labels: name=vxflexos-controller
pod-template-hash=858d55dd87
vg-snapshotter-enabled=false
Annotations: kubectl.kubernetes.io/default-container: driver
Status: Running
IP: 172.26.2.174
IPs:
IP: 172.26.2.174
Controlled By: ReplicaSet/vxflexos-controller-858d55dd87
Containers:
attacher:
Container ID: containerd://ad2d018f923eebf15f7010d17922126143a9fa8eccf279c47e371e7687484ca4
Image: registry.k8s.io/sig-storage/csi-attacher:v4.4.2
Image ID: registry.k8s.io/sig-storage/csi-attacher@sha256:11b955fe4da278aa0e8ca9d6fd70758f2aec4b0c1e23168c665ca345260f1882
Port: <none>
Host Port: <none>
Args:
--csi-address=$(ADDRESS)
--v=5
--leader-election=true
State: Running
Started: Mon, 04 Mar 2024 07:00:28 +0000
Ready: True
Restart Count: 0
Environment:
ADDRESS: /var/run/csi/csi.sock
Mounts:
/var/run/csi from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgwdv (ro)
provisioner:
Container ID: containerd://46d794faf7291482c301e5e0bfc73444cc6ea140789094648701fac58de9cf14
Image: registry.k8s.io/sig-storage/csi-provisioner:v3.6.2
Image ID: registry.k8s.io/sig-storage/csi-provisioner@sha256:49b94f975603d85a1820b72b1188e5b351d122011b3e5351f98c49d72719aa78
Port: <none>
Host Port: <none>
Args:
--csi-address=$(ADDRESS)
--feature-gates=Topology=true
--volume-name-prefix=k8s
--volume-name-uuid-length=10
--leader-election=true
--timeout=120s
--v=5
--default-fstype=ext4
--extra-create-metadata
--enable-capacity=true
--capacity-ownerref-level=2
--capacity-poll-interval=5m
State: Running
Started: Mon, 04 Mar 2024 07:00:32 +0000
Ready: True
Restart Count: 0
Environment:
ADDRESS: /var/run/csi/csi.sock
NAMESPACE: vxflexos (v1:metadata.namespace)
POD_NAME: vxflexos-controller-858d55dd87-8xnkc (v1:metadata.name)
Mounts:
/var/run/csi from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgwdv (ro)
snapshotter:
Container ID: containerd://343d263a681613538bb8fc434eee8e3aa07165769e3b814455acd5aa2ceadf69
Image: registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2
Image ID: registry.k8s.io/sig-storage/csi-snapshotter@sha256:4c5a1b57e685b2631909b958487f65af7746361346fcd82a8635bea3ef14509d
Port: <none>
Host Port: <none>
Args:
--csi-address=$(ADDRESS)
--timeout=120s
--v=5
--leader-election=true
State: Running
Started: Mon, 04 Mar 2024 07:00:37 +0000
Ready: True
Restart Count: 0
Environment:
ADDRESS: /var/run/csi/csi.sock
Mounts:
/var/run/csi from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgwdv (ro)
resizer:
Container ID: containerd://8db0d9d3f007edb44b2c4ea655f7b0480a7bfdc7359db6030611639548913184
Image: registry.k8s.io/sig-storage/csi-resizer:v1.9.2
Image ID: registry.k8s.io/sig-storage/csi-resizer@sha256:e998f22243869416f9860fc6a1fb07d4202eac8846defc1b85ebd015c1207605
Port: <none>
Host Port: <none>
Args:
--csi-address=$(ADDRESS)
--v=5
--leader-election=true
State: Running
Started: Mon, 04 Mar 2024 07:00:42 +0000
Ready: True
Restart Count: 0
Environment:
ADDRESS: /var/run/csi/csi.sock
Mounts:
/var/run/csi from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgwdv (ro)
driver:
Container ID: containerd://9e57572d2cc819ea8b55b2f08a595fc191231ac8b094f8b5b9d4ee200421f304
Image: dellemc/csi-vxflexos:v2.9.2
Image ID: docker.io/dellemc/csi-vxflexos@sha256:691df8d3467828c2174707e26fb0907aab4321f3abed382a7b309e8b24262b9b
Port: <none>
Host Port: <none>
Command:
/csi-vxflexos.sh
Args:
--array-config=/vxflexos-config/config
--driver-config-params=/vxflexos-config-params/driver-config-params.yaml
State: Running
Started: Mon, 04 Mar 2024 07:00:42 +0000
Ready: True
Restart Count: 0
Environment:
CSI_ENDPOINT: /var/run/csi/csi.sock
X_CSI_MODE: controller
X_CSI_VXFLEXOS_ENABLESNAPSHOTCGDELETE: false
X_CSI_VXFLEXOS_ENABLELISTVOLUMESNAPSHOT: false
SSL_CERT_DIR: /certs
X_CSI_POWERFLEX_EXTERNAL_ACCESS:
Mounts:
/var/run/csi from socket-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qgwdv (ro)
/vxflexos-config from vxflexos-config (rw)
/vxflexos-config-params from vxflexos-config-params (rw)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
socket-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vxflexos-config:
Type: Secret (a volume populated by a Secret)
SecretName: vxflexos-config
Optional: false
vxflexos-config-params:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vxflexos-config-params
Optional: false
kube-api-access-qgwdv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: group=md-0
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned vxflexos/vxflexos-controller-858d55dd87-8xnkc to ip-172-26-2-157.eu-west-1.compute.internal
Normal Pulling 22m kubelet Pulling image "registry.k8s.io/sig-storage/csi-attacher:v4.4.2"
Normal Pulled 22m kubelet Successfully pulled image "registry.k8s.io/sig-storage/csi-attacher:v4.4.2" in 4.38s (4.38s including waiting)
Normal Created 22m kubelet Created container attacher
Normal Started 22m kubelet Started container attacher
Normal Pulling 22m kubelet Pulling image "registry.k8s.io/sig-storage/csi-provisioner:v3.6.2"
Normal Pulled 22m kubelet Successfully pulled image "registry.k8s.io/sig-storage/csi-provisioner:v3.6.2" in 3.797s (3.797s including waiting)
Normal Created 22m kubelet Created container provisioner
Normal Started 22m kubelet Started container provisioner
Normal Pulling 22m kubelet Pulling image "registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2"
Normal Pulled 22m kubelet Successfully pulled image "registry.k8s.io/sig-storage/csi-snapshotter:v6.3.2" in 4.129s (4.129s including waiting)
Normal Created 22m kubelet Created container snapshotter
Normal Started 22m kubelet Started container snapshotter
Normal Pulling 22m kubelet Pulling image "registry.k8s.io/sig-storage/csi-resizer:v1.9.2"
Normal Pulled 21m kubelet Successfully pulled image "registry.k8s.io/sig-storage/csi-resizer:v1.9.2" in 4.399s (4.399s including waiting)
Normal Created 21m kubelet Created container resizer
Normal Started 21m kubelet Started container resizer
Normal Pulled 21m kubelet Container image "dellemc/csi-vxflexos:v2.9.2" already present on machine
Normal Created 21m kubelet Created container driver
Normal Started 21m kubelet Started container driver

The same is true for the CSI node pod. The nodeSelector is rendered as group: md-0, ensuring that the CSI node pod only runs on the targeted node-group we created above.

kubectl describe pod vxflexos-node-9zzzm -n vxflexos

Name: vxflexos-node-9zzzm
Namespace: vxflexos
Priority: 0
Service Account: vxflexos-node
Node: ip-172-26-2-51.eu-west-1.compute.internal/172.26.2.51
Start Time: Mon, 04 Mar 2024 07:00:23 +0000
Labels: app=vxflexos-node
controller-revision-hash=6f495d4d59
pod-template-generation=1
Annotations: kubectl.kubernetes.io/default-container: driver
Status: Running
IP: 172.26.2.51
IPs:
IP: 172.26.2.51
Controlled By: DaemonSet/vxflexos-node
Containers:
driver:
Container ID: containerd://866ad13ceb6c376a25b1f2da8bed71ae62a6c69366c536ea986b3bdf8c68f85e
Image: dellemc/csi-vxflexos:v2.9.2
Image ID: docker.io/dellemc/csi-vxflexos@sha256:691df8d3467828c2174707e26fb0907aab4321f3abed382a7b309e8b24262b9b
Port: <none>
Host Port: <none>
Command:
/csi-vxflexos.sh
Args:
--array-config=/vxflexos-config/config
--driver-config-params=/vxflexos-config-params/driver-config-params.yaml
State: Running
Started: Mon, 04 Mar 2024 07:00:34 +0000
Ready: True
Restart Count: 0
Environment:
CSI_ENDPOINT: unix:///var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock
X_CSI_MODE: node
X_CSI_PRIVATE_MOUNT_DIR: /var/lib/kubelet/plugins/vxflexos.emc.dell.com/disks
X_CSI_ALLOW_RWO_MULTI_POD_ACCESS: false
X_CSI_MAX_VOLUMES_PER_NODE: 0
SSL_CERT_DIR: /certs
X_CSI_HEALTH_MONITOR_ENABLED: false
X_CSI_APPROVE_SDC_ENABLED: false
X_CSI_RENAME_SDC_ENABLED: false
Mounts:
/dev from dev (rw)
/noderoot from noderoot (rw)
/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices from volumedevices-path (rw)
/var/lib/kubelet/plugins/vxflexos.emc.dell.com from driver-path (rw)
/var/lib/kubelet/pods from pods-path (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-549zp (ro)
/vxflexos-config from vxflexos-config (rw)
/vxflexos-config-params from vxflexos-config-params (rw)
registrar:
Container ID: containerd://f6a7af5245dd0f7b5902eda64cce9ab6b12d413ffec36da5baa91f17578e7b6e
Image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1
Image ID: registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:2cddcc716c1930775228d56b0d2d339358647629701047edfdad5fcdfaf4ebcb
Port: <none>
Host Port: <none>
Args:
--v=5
--csi-address=$(ADDRESS)
--kubelet-registration-path=/var/lib/kubelet/plugins/vxflexos.emc.dell.com/csi_sock
State: Running
Started: Mon, 04 Mar 2024 07:00:39 +0000
Ready: True
Restart Count: 0
Environment:
ADDRESS: /csi/csi_sock
KUBE_NODE_NAME: (v1:spec.nodeName)
Mounts:
/csi from driver-path (rw)
/registration from registration-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-549zp (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
registration-dir:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/plugins_registry/
HostPathType: DirectoryOrCreate
driver-path:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/plugins/vxflexos.emc.dell.com
HostPathType: DirectoryOrCreate
volumedevices-path:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices
HostPathType: DirectoryOrCreate
pods-path:
Type: HostPath (bare host directory volume)
Path: /var/lib/kubelet/pods
HostPathType: Directory
noderoot:
Type: HostPath (bare host directory volume)
Path: /
HostPathType: Directory
dev:
Type: HostPath (bare host directory volume)
Path: /dev
HostPathType: Directory
scaleio-path-opt:
Type: HostPath (bare host directory volume)
Path: /opt/emc/scaleio/sdc/bin
HostPathType: DirectoryOrCreate
sdc-storage:
Type: HostPath (bare host directory volume)
Path: /var/emc-scaleio
HostPathType: DirectoryOrCreate
udev-d:
Type: HostPath (bare host directory volume)
Path: /etc/udev/rules.d
HostPathType: Directory
os-release:
Type: HostPath (bare host directory volume)
Path: /etc/os-release
HostPathType: File
host-opt-emc-path:
Type: HostPath (bare host directory volume)
Path: /opt/emc
HostPathType: Directory
vxflexos-config:
Type: Secret (a volume populated by a Secret)
SecretName: vxflexos-config
Optional: false
vxflexos-config-params:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vxflexos-config-params
Optional: false
kube-api-access-549zp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: group=md-0
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned vxflexos/vxflexos-node-9zzzm to ip-172-26-2-51.eu-west-1.compute.internal
Normal Pulling 25m kubelet Pulling image "dellemc/csi-vxflexos:v2.9.2"
Normal Pulled 25m kubelet Successfully pulled image "dellemc/csi-vxflexos:v2.9.2" in 9.841s (9.842s including waiting)
Normal Created 25m kubelet Created container driver
Normal Started 25m kubelet Started container driver
Normal Pulling 25m kubelet Pulling image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1"
Normal Pulled 25m kubelet Successfully pulled image "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.1" in 5.174s (5.174s including waiting)
Normal Created 25m kubelet Created container registrar
Normal Started 25m kubelet Started container registrar

The external snapshot controllers, used to create volume snapshots, run in the kube-system namespace.

kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-node-p9sdw 2/2 Running 2 (58m ago) 64m
aws-node-pshfm 2/2 Running 2 (57m ago) 64m
coredns-68bd859788-4jdlq 1/1 Running 1 (58m ago) 69m
coredns-68bd859788-k7xsz 1/1 Running 1 (58m ago) 69m
kube-proxy-5zstg 1/1 Running 1 (57m ago) 64m
kube-proxy-wgsxf 1/1 Running 1 (58m ago) 64m
snapshot-controller-75d45d848c-gb9mp 1/1 Running 0 14m
snapshot-controller-75d45d848c-mtcgn 1/1 Running 0 14m

A storage class named powerflex-sc is created with csi-vxflexos.dellemc.com as the provisioner. This is the storage class we will use to deploy any stateful application that needs to be delivered atop APEX Block on AWS.

kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 16h
powerflex-sc csi-vxflexos.dellemc.com Delete WaitForFirstConsumer true 15m

Describing the powerflex storage class

kubectl describe storageclass powerflex-sc
Name: powerflex-sc
IsDefaultClass: No
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: csi-vxflexos.dellemc.com
Parameters: csi.storage.k8s.io/fstype=ext4,storagepool=default,systemID=d7f6c6427c56ab0f
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
AllowedTopologies:
Term 0: csi-vxflexos.dellemc.com/d7f6c6427c56ab0f in [csi-vxflexos.dellemc.com]
Events: <none>
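
For reference, the equivalent StorageClass manifest, reconstructed from the describe output above (the actual file applied by the script may differ slightly in layout), looks roughly like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerflex-sc
provisioner: csi-vxflexos.dellemc.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  storagepool: default
  systemID: d7f6c6427c56ab0f
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: csi-vxflexos.dellemc.com/d7f6c6427c56ab0f
        values:
          - csi-vxflexos.dellemc.com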

Volume snapshot class

kubectl get volumesnapshotclass
NAME DRIVER DELETIONPOLICY AGE
vxflexos-snapclass csi-vxflexos.dellemc.com Delete 18m

Describing the volume snapshot class, one can see that it will provision all snapshots using the PowerFlex CSI driver named csi-vxflexos.dellemc.com.

kubectl describe volumesnapshotclass vxflexos-snapclass
Name: vxflexos-snapclass
Namespace:
Labels: velero.io/csi-volumesnapshot-class=true
Annotations: <none>
API Version: snapshot.storage.k8s.io/v1
Deletion Policy: Delete
Driver: csi-vxflexos.dellemc.com
Kind: VolumeSnapshotClass
Metadata:
Creation Timestamp: 2024-03-04T07:00:24Z
Generation: 1
Resource Version: 140779
UID: a532d10b-9f27-41a5-a14e-777bb51fc274
Events: <none>
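
Again for reference, the equivalent VolumeSnapshotClass manifest reconstructed from the describe output above would look roughly like this:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vxflexos-snapclass
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: csi-vxflexos.dellemc.com
deletionPolicy: Delete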

Let’s view these resources in the EKS console.

With all of these set and our CSI drivers in place, we can look forward to deploying a sample MySQL persistent workload with its data directory persisted on APEX Block on AWS.

APEX Block on AWS with PowerFlex 4.5 SDS supports both modes of volume creation: static and dynamic provisioning.
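
As a quick aside, dynamic provisioning is simply a matter of creating a PVC that references the powerflex-sc storage class and letting the CSI driver carve the volume out of APEX Block automatically. A minimal sketch (with a hypothetical claim name and size) would be:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pv-claim          # hypothetical name
  namespace: demo
spec:
  storageClassName: powerflex-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi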

Let’s observe the static provisioning mode.

To do so, we will create a volume in APEX Block on AWS via the PowerFlex GUI (although this can be easily done via a Terraform provider as documented here)

Note the volume ID is 94ae8e3f00000003

Next, let’s download the template to deploy a PV, PVC, MySQL and Adminer web UI for MySQL.

Once the template is downloaded, be sure to edit it and change the volume ID of the persistent volume to the one seen in the PowerFlex GUI.

cd $HOME/$CLUSTER_NAME
wget https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/mysql/standalone/powerflex/aws-mysql-all-in-one.yaml

Edit the template and change the volume ID against the field named volumeHandle in the PersistentVolume sub-block, as shown in the excerpt below.
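
The relevant sub-block of the PersistentVolume, with the volume ID noted above as the volumeHandle, looks like this:

  csi:
    driver: csi-vxflexos.dellemc.com
    volumeHandle: 94ae8e3f00000003   # the volume ID from the PowerFlex GUI
    fsType: ext4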

Once done, we can then apply the entire YAML with kubectl.

kubectl create -f $HOME/$CLUSTER_NAME/aws-mysql-all-in-one.yaml

namespace/demo created
service/adminer created
deployment.apps/adminer created
persistentvolume/mysql-pv created
persistentvolumeclaim/mysql-pv-claim created
service/mysql created
deployment.apps/mysql created

Let’s verify the resources created.

The Adminer and MySQL pods are running successfully, which means MySQL has successfully mounted the persistent volume via the PVC.

kubectl get pods -n demo
NAME READY STATUS RESTARTS AGE
adminer-886ccfbcd-6ckx9 1/1 Running 0 48s
mysql-5b767d9847-m5m6l 1/1 Running 0 48s

The persistent volume claim used by the MySQL pod shows the powerflex-sc as the storage class and is in a bound state

kubectl get pvc -n demo
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
mysql-pv-claim Bound mysql-pv 8Gi RWO powerflex-sc <unset> 4m25s

And here is the actual persistent volume backing the claim, served via the PowerFlex CSI driver.

kubectl get pv -n demo
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
mysql-pv 8Gi RWO Retain Bound demo/mysql-pv-claim powerflex-sc <unset> 5m23s

We can also observe that the persistent volume created is using the same volumeHandle value as seen for the volume-id in the PowerFlex GUI

kubectl describe pv mysql-pv -n demo
Name: mysql-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection external-attacher/csi-vxflexos-dellemc-com]
StorageClass: powerflex-sc
Status: Bound
Claim: demo/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 8Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi-vxflexos.dellemc.com
FSType: ext4
VolumeHandle: 94ae8e3f00000003
ReadOnly: false
VolumeAttributes: <none>
Events: <none>

Let’s verify the persistent workload resources for MySQL in the EKS console

At this point we can clearly observe a successful deployment of a MySQL pod using the PowerFlex CSI with APEX Block on AWS.

Let’s do a quick-and-dirty test to insert some data into the MySQL pod using the Adminer GUI. Being an internal cluster, our Adminer GUI pod is exposed via an internal AWS Network Load Balancer.

kubectl get svc -n demo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
adminer LoadBalancer 10.100.162.63 a05be4d8dbfd04e4fbc094d88939288d-368c9caa1d9160ca.elb.eu-west-1.amazonaws.com 80:31817/TCP 7m13s
mysql ClusterIP None <none> 3306/TCP 7m13s

Let’s browse to the NLB to access the Adminer GUI. The server name is mysql, the username is root and the password is fake_password.

Let’s create a sample database named “cars”

At this point, this is an empty database; we will use the SQL command option in the GUI to populate it with sample data.

SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;--
CREATE TABLE `car` (
`id` int(11) NOT NULL,
`type` text NOT NULL,
`country` text NOT NULL,
`manufacturer` text NOT NULL,
`create_date` date NOT NULL,
`model` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `car` (`id`, `type`, `country`, `manufacturer`, `create_date`, `model`) VALUES
(1, 'Small', 'Japon', 'Acura', '1931-02-01', 'Integra'),
(2, 'Midsize', 'Japon', 'Acura', '1959-07-30', 'Legend'),
(3, 'Compact', 'Germany', 'Audi', '1970-07-30', '90'),
(4, 'Midsize', 'Germany', 'Audi', '1963-10-04', '100'),
(5, 'Midsize', 'Germany', 'BMW', '1931-09-08', '535i'),
(6, 'Midsize', 'USA', 'Buick', '1957-02-20', 'Century'),
(7, 'Large', 'USA', 'Buick', '1968-10-23', 'LeSabre'),
(8, 'Large', 'USA', 'Buick', '1970-08-17', 'Roadmaster'),
(9, 'Midsize', 'USA', 'Buick', '1962-08-02', 'Riviera'),
(10, 'Large', 'USA', 'Cadillac', '1956-12-01', 'DeVille'),
(11, 'Midsize', 'USA', 'Cadillac', '1957-07-30', 'Seville'),
(12, 'Compact', 'USA', 'Chevrolet', '1952-06-18', 'Cavalier'),
(13, 'Compact', 'USA', 'Chevrolet', '1947-06-26', 'Corsica'),
(14, 'Sporty', 'USA', 'Chevrolet', '1940-05-27', 'Camaro'),
(15, 'Midsize', 'USA', 'Chevrolet', '1949-02-21', 'Lumina'),
(16, 'Van', 'USA', 'Chevrolet', '1944-11-02', 'Lumina_APV'),
(17, 'Van', 'USA', 'Chevrolet', '1962-06-07', 'Astro'),
(18, 'Large', 'USA', 'Chevrolet', '1951-01-11', 'Caprice'),
(19, 'Sporty', 'USA', 'Chevrolet', '1966-11-01', 'Corvette'),
(20, 'Large', 'USA', 'Chrysler', '1964-07-10', 'Concorde'),
(21, 'Compact', 'USA', 'Chrysler', '1938-05-06', 'LeBaron'),
(22, 'Large', 'USA', 'Chrysler', '1960-07-07', 'Imperial'),
(23, 'Small', 'USA', 'Dodge', '1943-06-02', 'Colt'),
(24, 'Small', 'USA', 'Dodge', '1934-02-27', 'Shadow'),
(25, 'Compact', 'USA', 'Dodge', '1932-02-26', 'Spirit'),
(26, 'Van', 'USA', 'Dodge', '1946-06-12', 'Caravan'),
(27, 'Midsize', 'USA', 'Dodge', '1928-03-02', 'Dynasty'),
(28, 'Sporty', 'USA', 'Dodge', '1966-05-20', 'Stealth'),
(29, 'Small', 'USA', 'Eagle', '1941-05-12', 'Summit'),
(30, 'Large', 'USA', 'Eagle', '1963-09-17', 'Vision'),
(31, 'Small', 'USA', 'Ford', '1964-10-22', 'Festiva'),
(32, 'Small', 'USA', 'Ford', '1930-12-02', 'Escort'),
(33, 'Compact', 'USA', 'Ford', '1950-04-19', 'Tempo'),
(34, 'Sporty', 'USA', 'Ford', '1940-06-18', 'Mustang'),
(35, 'Sporty', 'USA', 'Ford', '1941-05-24', 'Probe'),
(36, 'Van', 'USA', 'Ford', '1935-01-27', 'Aerostar'),
(37, 'Midsize', 'USA', 'Ford', '1947-10-08', 'Taurus'),
(38, 'Large', 'USA', 'Ford', '1962-02-28', 'Crown_Victoria'),
(39, 'Small', 'USA', 'Geo', '1965-10-30', 'Metro'),
(40, 'Sporty', 'USA', 'Geo', '1955-07-07', 'Storm'),
(41, 'Sporty', 'Japon', 'Honda', '1955-06-08', 'Prelude'),
(42, 'Small', 'Japon', 'Honda', '1967-09-16', 'Civic'),
(43, 'Compact', 'Japon', 'Honda', '1938-06-26', 'Accord'),
(44, 'Small', 'South Korea', 'Hyundai', '1940-02-25', 'Excel');
ALTER TABLE `car`
ADD PRIMARY KEY (`id`);
ALTER TABLE `car`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=45;
COMMIT;/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;

Now, we have a database named cars with a sample set of data and tables.

Let’s now delete our MySQL pod and resurrect it to verify data persistence via the PVC rendered on APEX Block for AWS.

kubectl get deployment -n demo
NAME READY UP-TO-DATE AVAILABLE AGE
adminer 1/1 1 1 16m
mysql 1/1 1 1 16m
kubectl delete deployment mysql -n demo
deployment.apps "mysql" deleted
kubectl get pods -n demo
NAME READY STATUS RESTARTS AGE
adminer-886ccfbcd-6ckx9 1/1 Running 0 17m

The MySQL pod is deleted. Let’s resurrect it by applying the sample YAML file used earlier. Ignore the errors for resources that already exist; we can see the mysql deployment is recreated.

kubectl create -f $HOME/$CLUSTER_NAME/aws-mysql-all-in-one.yaml
deployment.apps/mysql created
Error from server (AlreadyExists): error when creating "/home/ubuntu/eks-apex-block-test-1/aws-mysql-all-in-one.yaml": namespaces "demo" already exists
Error from server (AlreadyExists): error when creating "/home/ubuntu/eks-apex-block-test-1/aws-mysql-all-in-one.yaml": services "adminer" already exists
Error from server (AlreadyExists): error when creating "/home/ubuntu/eks-apex-block-test-1/aws-mysql-all-in-one.yaml": deployments.apps "adminer" already exists
Error from server (AlreadyExists): error when creating "/home/ubuntu/eks-apex-block-test-1/aws-mysql-all-in-one.yaml": persistentvolumes "mysql-pv" already exists
Error from server (AlreadyExists): error when creating "/home/ubuntu/eks-apex-block-test-1/aws-mysql-all-in-one.yaml": persistentvolumeclaims "mysql-pv-claim" already exists
Error from server (AlreadyExists): error when creating "/home/ubuntu/eks-apex-block-test-1/aws-mysql-all-in-one.yaml": services "mysql" already exists

Let’s verify from the Adminer GUI whether the data, i.e. the database named cars with its car table and sample rows, has persisted.
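
If you prefer the command line over the GUI, an equivalent check can be run from inside the MySQL pod (assuming the mysql client bundled in the image and the root password fake_password used by the template):

kubectl exec -n demo deploy/mysql -- mysql -uroot -pfake_password -e "SELECT COUNT(*) FROM cars.car;"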

Let’s test out volume snapshot creation. Download the snapshot sample and edit the name value in the template to reflect the date and time of the snapshot being taken.

wget https://raw.githubusercontent.com/thecloudgarage/eks-anywhere/main/mysql/standalone/powerflex/snapshot-sample.yaml -P $HOME/$CLUSTER_NAME/

As one can see, I have edited the template after downloading it to give it a meaningful name value.

cat $HOME/$CLUSTER_NAME/snapshot-sample.yaml

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot-powerflex-04032024-11-53-am
  namespace: demo
  labels:
    name: mysql-snapshot-powerflex-datetime
spec:
  volumeSnapshotClassName: vxflexos-snapclass
  source:
    persistentVolumeClaimName: mysql-pv-claim

Create a volume snapshot

kubectl create -f $HOME/$CLUSTER_NAME/snapshot-sample.yaml

volumesnapshot.snapshot.storage.k8s.io/mysql-snapshot-powerflex-04032024-11-53-am created
kubectl get volumesnapshot -n demo
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
mysql-snapshot-powerflex-04032024-11-53-am true mysql-pv-claim 8Gi vxflexos-snapclass snapcontent-ea8dea98-6168-4572-845c-a14128bde472 5h27m 80s
 kubectl describe volumesnapshotcontent -n demo
Name: snapcontent-ea8dea98-6168-4572-845c-a14128bde472
Namespace:
Labels: <none>
Annotations: <none>
API Version: snapshot.storage.k8s.io/v1
Kind: VolumeSnapshotContent
Metadata:
Creation Timestamp: 2024-03-04T13:00:23Z
Finalizers:
snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection
Generation: 1
Resource Version: 216623
UID: 7c006953-8ebb-4ccd-a563-0d0f222bbf78
Spec:
Deletion Policy: Delete
Driver: csi-vxflexos.dellemc.com
Source:
Volume Handle: 94ae8e3f00000003
Volume Snapshot Class Name: vxflexos-snapclass
Volume Snapshot Ref:
API Version: snapshot.storage.k8s.io/v1
Kind: VolumeSnapshot
Name: mysql-snapshot-powerflex-04032024-11-53-am
Namespace: demo
Resource Version: 216603
UID: ea8dea98-6168-4572-845c-a14128bde472
Status:
Creation Time: 1709537634000000000
Ready To Use: true
Restore Size: 8589934592
Snapshot Handle: d7f6c6427c56ab0f-94ae8e4100000005
Events: <none>

We can verify this snapshot volume in the PowerFlex GUI and cross-reference the volume handle and UID values from the kubectl output above with the values presented in the PowerFlex GUI.

Now, with the volume snapshot in place, we can look to recover our database from it. To demonstrate this, let’s delete the entire setup that we created via the all-in-one YAML file.

kubectl delete deployment mysql -n demo
deployment.apps "mysql" deleted

kubectl delete pvc mysql-pv-claim -n demo
persistentvolumeclaim "mysql-pv-claim" deleted

kubectl delete pv mysql-pv
persistentvolume "mysql-pv" deleted

As you can see, boom! All the primary components for MySQL are gone.

Let’s delete the actual volume in PowerFlex GUI also to ensure only the snapshot volume remains.

Only the volume created as a result of the snapshot remains in the APEX Block for AWS PowerFlex cluster.

Let’s note the volume ID, i.e. 94ae8e4100000005

Now we will alter our deployment YAML, i.e. aws-mysql-all-in-one.yaml, to rebuild the PV using this volume ID as the volume handle. An excerpt is shown below.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  namespace: demo
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: powerflex-sc
  volumeMode: Filesystem
  csi:
    driver: csi-vxflexos.dellemc.com
    volumeHandle: 94ae8e4100000005
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: demo
  labels:
    name: mysql-pv-claim
    csi: csi-vxflexos.dellemc.com
spec:
  volumeName: mysql-pv
  storageClassName: powerflex-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---

Let’s re-deploy the entire YAML, which will rebuild the resources that were deleted earlier: the PV, the PVC and the MySQL pod. The only difference is that this time the PV is built from the snapshot volume.

Ignore the error for existing resources.

kubectl create -f  aws-mysql-all-in-one.yaml

persistentvolume/mysql-pv created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/mysql created
Error from server (AlreadyExists): error when creating "aws-mysql-all-in-one.yaml": namespaces "demo" already exists
Error from server (AlreadyExists): error when creating "aws-mysql-all-in-one.yaml": services "adminer" already exists
Error from server (AlreadyExists): error when creating "aws-mysql-all-in-one.yaml": deployments.apps "adminer" already exists
Error from server (AlreadyExists): error when creating "aws-mysql-all-in-one.yaml": services "mysql" already exists

Since the persistent volume has been created from the snapshot volume, the database, table and sample data should already be populated. Let’s verify this via the Adminer GUI.

So, the database exists; that’s a good sign.

The database table also exists! Let’s see if the sample data has persisted as well.

FANTASTIC!!!! That’s a complete restoration.

This brings us to the end of a comprehensive, practical demonstration of integrating your EKS clusters running in the AWS cloud with the APEX Block for AWS offering from Dell.

Hope you found it insightful.

Happy EKS’ing with the APEX Block for AWS offering from Dell!

cheers,

Ambar@thecloudgarage

#iwork4dell
