EKS Anywhere, Part-2: Dell EMC Unity-XT CSI
This article is part of the multi-part EKS Anywhere story. Access it here: EKS Anywhere, extending the Hybrid cloud momentum | by Ambar Hassani | Apr 2022 | Medium
This is Part-2 of the Unity-XT CSI and EKS Anywhere article. Part-1 can be found at this hyperlink.
In Part-1, we reached the point of successfully deploying the CSI drivers for Unity-XT. In this Part-2, we will deploy test use cases and observe various functionality, especially around snapshots, recovery, etc. The test use case is a MySQL deployment in which the data volume is persisted via the Unity-XT CSI driver over the NFS protocol. To begin:
Step-1 Create NFS Storage Class
First, we create the Unity-XT storage class that will be used by our persistent volumes over the NFS protocol.
cd $HOME/csi-unity
cp $HOME/eks-anywhere/unity/unity-xt-nas-storage-class.yaml .
If you open the unity-xt-nas-storage-class.yaml file, the contents will look like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: unity-nas
  annotations:
    description: unity storage class for eksa testing
provisioner: csi-unity.dellemc.com
parameters:
  arrayId: VIRT2148DRW2V6
  isDataReductionEnabled: 'false'
  nasServer: 'nas_2'
  protocol: NFS
  storagePool: 'pool_1'
  thinProvisioned: 'true'
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
You will need to edit the highlighted portions of the file, namely the arrayId, nasServer, and storagePool values.
The most important thing to highlight is that these are CLI ID references, as you can see from the underscore-style values, e.g. nas_2 and pool_1.
How do you get these values? Simply log in to the Unity-XT console and navigate as shown in the visuals below to get the arrayId, the nasServer CLI ID, and the storagePool CLI ID.
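As an aside, if you have Dell's uemcli utility installed, the same identifiers can likely be pulled from the command line. A sketch with hypothetical host and credentials (verify the exact object paths against your Unity OE version):
# <unity-mgmt-ip>, admin and <password> below are placeholders
uemcli -d <unity-mgmt-ip> -u admin -p <password> /sys/general show        # arrayId (system ID)
uemcli -d <unity-mgmt-ip> -u admin -p <password> /net/nas/server show     # nasServer CLI IDs, e.g. nas_2
uemcli -d <unity-mgmt-ip> -u admin -p <password> /stor/config/pool show   # storagePool CLI IDs, e.g. pool_1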
Replace these values in the unity-xt-nas-storage-class.yaml file and then apply it to create the storage class.
cd $HOME/csi-unity
kubectl create -f unity-xt-nas-storage-class.yaml
Once applied, we can observe the storage class created with the provisioner csi-unity.dellemc.com
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) csi.vsphere.vmware.com Delete Immediate false 4h7m
unity-nas csi-unity.dellemc.com Delete Immediate true 21m
The next sections focus on workload and persistence testing. Nothing too fancy, as the intent is mainly to validate the above implementation in a workload scenario.
Step-2 Create Persistent Volume Claim
Once the storage class is created, it is time to create the Persistent Volume Claim. We will evaluate a test MySQL workload that uses persistent volumes created on the Unity-XT system.
cd $HOME/csi-unity
cp -r $HOME/eks-anywhere/mysql .
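The pvc.yaml applied in the next step is not reproduced in this article; a minimal sketch consistent with the kubectl get pvc output that follows (claim name, unity-nas storage class, 8Gi, RWO) would be:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim-unity-nas
spec:
  storageClassName: unity-nas
  accessModes:
    - ReadWriteOnce          # shows up as RWO in the listings below
  resources:
    requests:
      storage: 8Gi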
cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl create -f pvc.yaml
Once applied, we can see the PVC transition from a Pending to a Bound state
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim-unity-nas Pending unity-nas 15s
After some time
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim-unity-nas Bound testunitycsicluster01-vol-2e295adfce 8Gi RWO unity-nas 28s
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
testunitycsicluster01-vol-2e295adfce 8Gi RWO Delete Bound default/mysql-pv-claim-unity-nas unity-nas 19m
kubectl describe pv testunitycsicluster01-vol-2e295adfce
Name: testunitycsicluster01-vol-2e295adfce
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi-unity.dellemc.com
Finalizers: [kubernetes.io/pv-protection external-attacher/csi-unity-dellemc-com]
StorageClass: unity-nas
Status: Bound
Claim: default/mysql-pv-claim-unity-nas
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 8Gi
Node Affinity:
Required Terms:
Term 0: csi-unity.dellemc.com/virt2148drw2v6-nfs in [true]
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi-unity.dellemc.com
FSType: ext4
VolumeHandle: testunitycsicluster01-vol-2e295adfce-NFS-virt2148drw2v6-fs_20
ReadOnly: false
VolumeAttributes: arrayId=virt2148drw2v6
protocol=NFS
storage.kubernetes.io/csiProvisionerIdentity=1653024108543-8081-csi-unity.dellemc.com
volumeId=fs_20
Events: <none>
We can see that a volume named testunitycsicluster01-vol-2e295adfce of 8 GiB has been successfully provisioned. Note how the prefix configured in myvalues.yaml during the CSI installation helps identify the volume.
Let’s see this volume in the Unity-XT console
With the above persistent volume in place, we can now start defining a sample persistent workload and other operations around persistence, backup and recovery, etc.
Step-3 Create MySQL instance with persistence
cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl create -f mysql.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-86586c7497-w7qj2 1/1 Running 0 18m
Once the pod is in Running status, we can see that the persistent volume has been successfully mounted via the claim name defined in the mysql YAML file
kubectl describe pod mysql-86586c7497-w7qj2
Name: mysql-86586c7497-w7qj2
Namespace: default
Priority: 0
Node: testunitycsicluster01-md-0-59f9568584-sfhf9/172.24.167.23
Start Time: Fri, 20 May 2022 05:31:29 +0000
Labels: app=mysql
pod-template-hash=86586c7497
Annotations: <none>
Status: Running
IP: 192.168.1.64
IPs:
IP: 192.168.1.64
Controlled By: ReplicaSet/mysql-86586c7497
Containers:
mysql:
Container ID: containerd://89151c03b9aa25e2af21c71be663c4c405d226465ce85e854462be8214d0f650
Image: mysql:5.6
Image ID: docker.io/library/mysql@sha256:20575ecebe6216036d25dab5903808211f1e9ba63dc7825ac20cb975e34cfcae
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 20 May 2022 05:32:12 +0000
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7wcw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim-unity-nas
ReadOnly: false
kube-api-access-p7wcw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/mysql-86586c7497-w7qj2 to testunitycsicluster01-md-0-59f9568584-sfhf9
Normal SuccessfulAttachVolume 18m attachdetach-controller AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-2e295adfce"
Normal Pulling 18m kubelet Pulling image "mysql:5.6"
Normal Pulled 18m kubelet Successfully pulled image "mysql:5.6" in 31.013962761s
Normal Created 18m kubelet Created container mysql
Normal Started 18m kubelet Started container mysql
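For reference, mysql.yaml itself is not reproduced in this article; a minimal sketch consistent with the pod description above (image, env, volume mount, claim name) and the headless mysql service seen in the listings later would be:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None                  # headless, matching the CLUSTER-IP "None" shown later
  selector:
    app: mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate                 # assumption: avoids two pods sharing the RWO volume during updates
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password          # matches the credentials used with adminer
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim-unity-nas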
Step-4 Create MetalLB load balancer
The MetalLB load balancer will be required to front-end the adminer web client, through which we will manage the MySQL database. As a pre-requisite, we should have collected at least one static IP address for the exposed load-balanced service of the adminer pod.
Keep the static IP mentioned in the pre-requisites handy! In my case the static IP range is 172.24.165.41 to 172.24.165.41, just a single IP for testing purposes. You can define a range per your comfort.
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --wait --timeout 15m --namespace metallb-system --create-namespace
Now you can configure MetalLB via its CRs; please refer to the MetalLB official docs on how to use them. Next we will use the Custom Resources to create the IP address pool and advertise it
cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.24.165.41-172.24.165.41
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
EOF
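As a quick sanity check (assuming the MetalLB CRDs registered cleanly), you can list the objects you just created:
kubectl get ipaddresspools.metallb.io -n metallb-system
kubectl get l2advertisements.metallb.io -n metallb-system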
Step-5 Deploy Adminer application
The adminer application is a web-based front end for the MySQL database. It runs as a PHP web application, and we will use it to create and update our database in MySQL for persistence testing.
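The adminer-deployment.yaml and adminer-service.yaml applied below are not reproduced in the article. A minimal sketch consistent with the service listing further down (a LoadBalancer named adminer on port 80) could look like this, assuming the stock adminer image (which serves HTTP on 8080) and an app: adminer selector label:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
      - name: adminer
        image: adminer            # assumption: stock image, listens on 8080
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: adminer
spec:
  type: LoadBalancer              # MetalLB allocates the external IP from first-pool
  selector:
    app: adminer
  ports:
  - port: 80
    targetPort: 8080              # assumption: maps the port-80 service to adminer's 8080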
cd $HOME/csi-unity/
cp -r $HOME/eks-anywhere/adminer .
cd $HOME/csi-unity/adminer/
kubectl create -f adminer-deployment.yaml
kubectl create -f adminer-service.yaml
Give it some time and MetalLB will perform the background magic to allocate the external static IP used by the LoadBalancer service
kubectl get pods
NAME READY STATUS RESTARTS AGE
adminer-7fcdb7c8d-856gq 1/1 Running 0 25s
mysql-86586c7497-w7qj2 1/1 Running 0 27m
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
adminer LoadBalancer 10.97.105.4 172.24.165.41 80:32302/TCP 57s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
mysql ClusterIP None <none> 3306/TCP 28m
Open the browser, hit the External IP shown for the adminer service, and enter the following values:
- server: mysql
- username: root
- password: password
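If you would like to sanity-check these credentials from the command line first, a throwaway MySQL client pod works too (a sketch; mysql is the in-cluster service name created by mysql.yaml):
kubectl run mysql-client --image=mysql:5.6 -it --rm --restart=Never -- \
  mysql -h mysql -uroot -ppassword -e "SHOW DATABASES;"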
Click on Create Database and, for simplicity's sake, we will name the DB csi-unity-test.
Click on SQL command and paste the statement below to populate csi-unity-test with sample data.
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8mb4 */;

--
-- Database: `DB`
--

-- --------------------------------------------------------

--
-- Table structure for table `car`
--

CREATE TABLE `car` (
`id` int(11) NOT NULL,
`type` text NOT NULL,
`country` text NOT NULL,
`manufacturer` text NOT NULL,
`create_date` date NOT NULL,
`model` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Dumping data for table `car`
--

INSERT INTO `car` (`id`, `type`, `country`, `manufacturer`, `create_date`, `model`) VALUES
(1, 'Small', 'Japon', 'Acura', '1931-02-01', 'Integra'),
(2, 'Midsize', 'Japon', 'Acura', '1959-07-30', 'Legend'),
(3, 'Compact', 'Germany', 'Audi', '1970-07-30', '90'),
(4, 'Midsize', 'Germany', 'Audi', '1963-10-04', '100'),
(5, 'Midsize', 'Germany', 'BMW', '1931-09-08', '535i'),
(6, 'Midsize', 'USA', 'Buick', '1957-02-20', 'Century'),
(7, 'Large', 'USA', 'Buick', '1968-10-23', 'LeSabre'),
(8, 'Large', 'USA', 'Buick', '1970-08-17', 'Roadmaster'),
(9, 'Midsize', 'USA', 'Buick', '1962-08-02', 'Riviera'),
(10, 'Large', 'USA', 'Cadillac', '1956-12-01', 'DeVille'),
(11, 'Midsize', 'USA', 'Cadillac', '1957-07-30', 'Seville'),
(12, 'Compact', 'USA', 'Chevrolet', '1952-06-18', 'Cavalier'),
(13, 'Compact', 'USA', 'Chevrolet', '1947-06-26', 'Corsica'),
(14, 'Sporty', 'USA', 'Chevrolet', '1940-05-27', 'Camaro'),
(15, 'Midsize', 'USA', 'Chevrolet', '1949-02-21', 'Lumina'),
(16, 'Van', 'USA', 'Chevrolet', '1944-11-02', 'Lumina_APV'),
(17, 'Van', 'USA', 'Chevrolet', '1962-06-07', 'Astro'),
(18, 'Large', 'USA', 'Chevrolet', '1951-01-11', 'Caprice'),
(19, 'Sporty', 'USA', 'Chevrolet', '1966-11-01', 'Corvette'),
(20, 'Large', 'USA', 'Chrysler', '1964-07-10', 'Concorde'),
(21, 'Compact', 'USA', 'Chrysler', '1938-05-06', 'LeBaron'),
(22, 'Large', 'USA', 'Chrysler', '1960-07-07', 'Imperial'),
(23, 'Small', 'USA', 'Dodge', '1943-06-02', 'Colt'),
(24, 'Small', 'USA', 'Dodge', '1934-02-27', 'Shadow'),
(25, 'Compact', 'USA', 'Dodge', '1932-02-26', 'Spirit'),
(26, 'Van', 'USA', 'Dodge', '1946-06-12', 'Caravan'),
(27, 'Midsize', 'USA', 'Dodge', '1928-03-02', 'Dynasty'),
(28, 'Sporty', 'USA', 'Dodge', '1966-05-20', 'Stealth'),
(29, 'Small', 'USA', 'Eagle', '1941-05-12', 'Summit'),
(30, 'Large', 'USA', 'Eagle', '1963-09-17', 'Vision'),
(31, 'Small', 'USA', 'Ford', '1964-10-22', 'Festiva'),
(32, 'Small', 'USA', 'Ford', '1930-12-02', 'Escort'),
(33, 'Compact', 'USA', 'Ford', '1950-04-19', 'Tempo'),
(34, 'Sporty', 'USA', 'Ford', '1940-06-18', 'Mustang'),
(35, 'Sporty', 'USA', 'Ford', '1941-05-24', 'Probe'),
(36, 'Van', 'USA', 'Ford', '1935-01-27', 'Aerostar'),
(37, 'Midsize', 'USA', 'Ford', '1947-10-08', 'Taurus'),
(38, 'Large', 'USA', 'Ford', '1962-02-28', 'Crown_Victoria'),
(39, 'Small', 'USA', 'Geo', '1965-10-30', 'Metro'),
(40, 'Sporty', 'USA', 'Geo', '1955-07-07', 'Storm'),
(41, 'Sporty', 'Japon', 'Honda', '1955-06-08', 'Prelude'),
(42, 'Small', 'Japon', 'Honda', '1967-09-16', 'Civic'),
(43, 'Compact', 'Japon', 'Honda', '1938-06-26', 'Accord'),
(44, 'Small', 'South Korea', 'Hyundai', '1940-02-25', 'Excel');

--
-- Indexes for dumped tables
--

--
-- Indexes for table `car`
--
ALTER TABLE `car`
  ADD PRIMARY KEY (`id`);

--
-- AUTO_INCREMENT for dumped tables
--

--
-- AUTO_INCREMENT for table `car`
--
ALTER TABLE `car`
  MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=45;
COMMIT;

/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
And hit the EXECUTE button.
Once done, the csi-unity-test database is populated, and we can click the csi-unity-test link in the top blue ribbon to view the table “car”. Click on the table “car” and then select Data.
Step-6 Testing persistence for various scenarios
MySQL Pod Deletion: Here we will delete the MySQL pod and verify re-attachment of the persistent volume to the restored pod.
kubectl get pods
NAME READY STATUS RESTARTS AGE
adminer-5f64885f68-9wh7b 1/1 Running 0 6m33s
mysql-86586c7497-w7qj2 1/1 Running 0 78m
To test persistence, delete the mysql pod. Kubernetes will restore the pod, and our savior persistent volume keeps the data intact.
kubectl delete pod mysql-86586c7497-w7qj2
pod "mysql-86586c7497-w7qj2" deletedkubectl get pods
NAME READY STATUS RESTARTS AGE
adminer-5f64885f68-9wh7b 1/1 Running 0 8m29s
mysql-86586c7497-smts8 1/1 Running 0 58s
It takes approximately 60 seconds or slightly more to recreate the pod and attach the persistent volume
kubectl describe pod mysql-86586c7497-smts8
<Output-snip>
Normal SuccessfulAttachVolume 2m7s attachdetach-controller AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-2e295adfce"
<Output-snip>We can see our persistent volume has been attached to the restored pod
Close the browser session for the adminer application. Open a new session via the External IP and verify that the table “car” within the csi-unity-test database that we populated earlier still exists with all its entries.
Follow the navigation to database csi-unity-test, table car and select data
MySQL Deployment Deletion: We will delete the mysql deployment itself, not just the pod.
Note that deleting the deployment has no effect on the persistent volume claim, as PVCs have an independent lifecycle outside of deployments. Also note that our persistent volume is set up with a reclaim policy of Delete, so unless we delete the PVC itself, our PV should remain intact, and upon recreation of the mysql deployment the re-attachment should happen automatically with the underlying data present (table car in the csi-unity-test DB).
cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl delete -f mysql.yaml
service "mysql" deleted
deployment.apps "mysql" deletedkubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim-unity-nas Bound testunitycsicluster01-vol-2e295adfce 8Gi RWO unity-nas 103m
The volume is still intact. Let's recreate the MySQL instance
cd $HOME/csi-unity/mysql/standalone/unity-nas/
kubectl create -f mysql.yaml
service/mysql created
deployment.apps/mysql created
kubectl get pods
NAME READY STATUS RESTARTS AGE
adminer-5f64885f68-9wh7b 1/1 Running 0 30m
mysql-86586c7497-jpfcc 1/1 Running 0 42s
kubectl describe pod mysql-86586c7497-jpfcc
<output-snip>
Normal SuccessfulAttachVolume 63s attachdetach-controller AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-2e295adfce"
Our volume has been successfully attached to the new MySQL deployment. You can repeat the validation via the adminer interface to confirm that the database csi-unity-test, the table "car", and the data itself still exist :)
Snapshots: Next, we move on to the snapshot capabilities introduced in Kubernetes via the external-snapshotter project (now GA). In this scenario, we will create snapshots of the persistent volume created above.
Before we begin, let’s understand some key terms that are used in combination to create the snapshots
- VolumeSnapshotClass (storage class for creating snapshots)
- VolumeSnapshot (Snapshots that will target the above snapshot class)
- VolumeSnapshotContent (The actual snapshot content)
These concepts are covered in further detail at Volume Snapshots | Kubernetes.
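Before proceeding, it is worth confirming that the snapshot CRDs and the snapshot controller are present in the cluster (however they were installed alongside your CSI driver deployment); a quick check:
kubectl get crd | grep snapshot.storage.k8s.io
kubectl get pods -A | grep -i snapshot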
And now on to the important files that are used for creating these snapshots
- unity-xt-nas-snapclass.yaml (template to create the volume snapshot class)
- snapshot-sample.yaml (template for volume snapshots)
- create-snapshot.sh (handy script to create unique datetime-based snapshots)
# LET'S CREATE THE VOLUME SNAPSHOT CLASS
cd $HOME/csi-unity
cp $HOME/eks-anywhere/unity/unity-xt-nas-snapclass.yaml .
more unity-xt-nas-snapclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: unity-nas-snapclass
driver: csi-unity.dellemc.com
# Configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object
# it is bound to is to be deleted
# Allowed values:
#   Delete: the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object.
#   Retain: both the underlying snapshot and VolumeSnapshotContent remain.
# Default value: None
# Optional: false
# Examples: Delete
deletionPolicy: Delete
As you can see, this volume snapshot class leverages the Unity CSI driver to provision all the snapshots
kubectl create -f unity-xt-nas-snapclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/unity-nas-snapclass created
kubectl get volumesnapshotclass
NAME DRIVER DELETIONPOLICY AGE
unity-nas-snapclass csi-unity.dellemc.com Delete 38s
kubectl describe volumesnapshotclass unity-nas-snapclass
Name: unity-nas-snapclass
Namespace:
Labels: <none>
Annotations: <none>
API Version: snapshot.storage.k8s.io/v1
Deletion Policy: Delete
Driver: csi-unity.dellemc.com
Kind: VolumeSnapshotClass
Metadata:
Creation Timestamp: 2022-05-20T08:31:21Z
Generation: 1
Managed Fields:
API Version: snapshot.storage.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:deletionPolicy:
f:driver:
Manager: kubectl-create
Operation: Update
Time: 2022-05-20T08:31:21Z
Resource Version: 816587
UID: 9c8bacf5-7bda-4e65-b6f4-c2aba02a6aed
Events: <none>At this stage we have the Volume Snapshot class created that can be used to create the Unity XT based snapshots
It is also important to understand the interaction of the below mentioned files, which are used to create the snapshots
# Navigate to the correct directory
cd $HOME/csi-unity/mysql/standalone/unity-nas/
more snapshot-sample.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-snapshot-unity-nas-datetime
spec:
  volumeSnapshotClassName: unity-nas-snapclass
  source:
    persistentVolumeClaimName: mysql-pv-claim-unity-nas
# The datetime text is automatically replaced by the below script to create unique snapshots
# The template has two references:
#   a) The volume snapshot class will be used to create the snapshots
#   b) The persistent volume claim with which the snapshot is associated
more create-snapshot.sh
#!/bin/bash
NOW=$(date "+%Y%m%d%H%M%S")
rm -rf snapshot.yaml
cp snapshot-sample.yaml snapshot.yaml
sed -i "s/datetime/$NOW/g" snapshot.yaml
kubectl create -f snapshot.yaml
As you can see, the above script inserts a unique datetime reference into the snapshot template and then uses kubectl to create the unique snapshot.
Next, let's create the baseline snapshot. Note that at this stage we have a database called csi-unity-test with a table car populated with dummy data.
We will create this first baseline snapshot such that all the existing data is preserved in the VolumeSnapshotContent object via the VolumeSnapshot.
# Navigate to the correct directory
cd $HOME/csi-unity/mysql/standalone/unity-nas/
chmod +x create-snapshot.sh
Let's verify if there are any existing snapshots
kubectl get volumesnapshot
No resources found in default namespace.
Next, we will execute the script
./create-snapshot.sh
volumesnapshot.snapshot.storage.k8s.io/mysql-snapshot-unity-nas-20220520134942 created
As you can see the snapshot is successfully created
kubectl get volumesnapshot
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
mysql-snapshot-unity-nas-20220520134942 true mysql-pv-claim-unity-nas 8Gi unity-nas-snapclass snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59 65s 38s
The volume snapshot creation has also resulted in a VolumeSnapshotContent that holds the actual data
kubectl get volumesnapshotcontent
NAME READYTOUSE RESTORESIZE DELETIONPOLICY DRIVER VOLUMESNAPSHOTCLASS VOLUMESNAPSHOT VOLUMESNAPSHOTNAMESPACE AGE
snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59 true 8589934592 Delete csi-unity.dellemc.com unity-nas-snapclass mysql-snapshot-unity-nas-20220520134942 default 98s
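A small aside on units: the RESTORESIZE shown for the VolumeSnapshotContent is reported in bytes, and 8589934592 = 8 × 1024 × 1024 × 1024, i.e. exactly the 8Gi our PVC requested.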
Let’s describe the volume snapshot content, and we will see an interesting observation that highlights the prefix of our cluster name in the Snapshot Handle. If you recall, we altered our myvalues.yaml file during the initial CSI installation steps to include a specific prefix for our volumes and snapshots.
kubectl describe volumesnapshotcontent snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59
Name: snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59
Namespace:
Labels: <none>
Annotations: <none>
API Version: snapshot.storage.k8s.io/v1
Kind: VolumeSnapshotContent
Metadata:
Creation Timestamp: 2022-05-20T13:49:42Z
Finalizers:
snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection
Generation: 1
Managed Fields:
API Version: snapshot.storage.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection":
f:spec:
.:
f:deletionPolicy:
f:driver:
f:source:
.:
f:volumeHandle:
f:volumeSnapshotClassName:
f:volumeSnapshotRef:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:resourceVersion:
f:uid:
Manager: snapshot-controller
Operation: Update
Time: 2022-05-20T13:49:42Z
API Version: snapshot.storage.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:creationTime:
f:readyToUse:
f:restoreSize:
f:snapshotHandle:
Manager: csi-snapshotter
Operation: Update
Time: 2022-05-20T13:49:46Z
Resource Version: 1067690
UID: 0804c76e-5040-4870-9b0c-2c140e4b7d15
Spec:
Deletion Policy: Delete
Driver: csi-unity.dellemc.com
Source:
Volume Handle: testunitycsicluster01-vol-2e295adfce-NFS-virt2148drw2v6-fs_20
Volume Snapshot Class Name: unity-nas-snapclass
Volume Snapshot Ref:
API Version: snapshot.storage.k8s.io/v1
Kind: VolumeSnapshot
Name: mysql-snapshot-unity-nas-20220520134942
Namespace: default
Resource Version: 1067639
UID: 2ff09247-07d4-4eab-b9ed-fe4ac187ce59
Status:
Creation Time: 1653054555024000000
Ready To Use: true
Restore Size: 8589934592
Snapshot Handle: testunitycsicluster01-snap-2ff0924707-NFS-virt2148drw2v6-171798691875
Events: <none># Notice that the cluster name is prefixed in the Snapshot Handle
Next, we can see if this snapshot is actually reflected in the Unity-XT console
We can see 1 snapshot created against our persistent volume. Let’s see some more details by clicking on the count hyperlink. We can see the Snapshot Handle with our cluster prefix is visible against the snapshot. This helps us track our snapshots appropriately
Now we will test the restore capability via our baseline snapshot. To do so, we will make some changes in our database named csi-unity-test. Head back to the adminer application via the exposed External IP and navigate to the database.
Next select the “car” table and drop it to delete the data
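Equivalently, instead of clicking through the UI, you could paste the following into the same SQL command box used earlier:
DROP TABLE `car`;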
Once this operation is executed, all our data in the database is deleted, simulating a sort of corruption.
Now for the obvious act of restoring our data by leveraging the baseline snapshot created above. To do so, we have to create a new persistent volume claim that targets an existing VolumeSnapshot object as a dataSource.
We already have a template for this in the existing mysql sub-directory called restore-pvc-sample.yaml, which is executed via a small script called restore-pvc.sh. The contents of both files are shown below.
# Navigate to the correct directory
cd $HOME/csi-unity/mysql/standalone/unity-nas/
more restore-pvc-sample.yaml
# DO NOT CHANGE DATASOURCE NAME, AS IT IS SET AUTOMATICALLY VIA THE SCRIPT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-restored-pv-claim-unity-nas
spec:
  storageClassName: unity-nas
  dataSource:
    name: volumeSnapshotName   # <<< Script will change this value
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
more restore-pvc.sh
#!/bin/bash
read -p 'volumeSnapshotName: ' volumeSnapshotName
rm -rf restore-pvc.yaml
cp restore-pvc-sample.yaml restore-pvc.yaml
sed -i "s/volumeSnapshotName/$volumeSnapshotName/g" restore-pvc.yaml
kubectl create -f restore-pvc.yaml
Now let’s recover the data for the deleted table “car” in the database “csi-unity-test”. This is done by resurrecting a new persistent volume claim from the previously created baseline snapshot.
# FIRST STEP: GET THE SNAPSHOT NAME
kubectl get volumesnapshot
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
mysql-snapshot-unity-nas-20220520134942 true mysql-pv-claim-unity-nas 8Gi unity-nas-snapclass snapcontent-2ff09247-07d4-4eab-b9ed-fe4ac187ce59 15h 15h
# SECOND STEP: RUN THE PVC RESTORATION SCRIPT. It will prompt you for the snapshot name. Provide the value from the above step.
./restore-pvc.sh
volumeSnapshotName: mysql-snapshot-unity-nas-20220520134942
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim-unity-nas Bound testunitycsicluster01-vol-2e295adfce 8Gi RWO unity-nas 23h
mysql-restored-pv-claim-unity-nas Bound testunitycsicluster01-vol-a0faa7f465 8Gi RWO unity-nas 5s
As you can see from the above output, we have reconstructed a new persistent volume from the baseline snapshot.
Next, it's time to use this persistent volume for our MySQL deployment and verify that the data is intact via the baseline snapshot. To do so, we will need to alter the persistent volume claim name in the mysql.yaml file as shown below.
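If you would rather not hand-edit the YAML, the swap can be scripted with sed in the same spirit as the snapshot scripts above (a sketch, assuming the original claimName string appears exactly once in the file):
cd $HOME/csi-unity/mysql/standalone/unity-nas/
sed -i 's/claimName: mysql-pv-claim-unity-nas/claimName: mysql-restored-pv-claim-unity-nas/' mysql.yaml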
The persistent volume claim name is referenced as claimName in the last line of the mysql.yaml file. Edit the file and change it per the value below.
PLEASE BE CAREFUL WITH YAML FORMATTING WHILE EDITING THE VALUES
cd $HOME/csi-unity/mysql/standalone/unity-nas/
Edit mysql.yaml
# Original claimName
claimName: mysql-pv-claim-unity-nas
# New claimName
claimName: mysql-restored-pv-claim-unity-nas
kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
adminer 1/1 1 1 29h
mysql 1/1 1 1 28h
kubectl delete deployment mysql
deployment.apps "mysql" deletedkubectl create -f mysql.yaml
deployment.apps/mysql created
Error from server (AlreadyExists): error when creating "mysql.yaml": services "mysql" already exists <<< IGNORE THIS ERROR AS WE ONLY DELETED THE DEPLOYMENT
kubectl get pods
NAME READY STATUS RESTARTS AGE
adminer-5f64885f68-9wh7b 1/1 Running 0 29h
mysql-6c4598fccd-9wsc2 1/1 Running 0 2m22s
kubectl describe pod mysql-6c4598fccd-9wsc2
<OUTPUT-SNIP>
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-restored-pv-claim-unity-nas
ReadOnly: false
Normal SuccessfulAttachVolume 2m55s attachdetach-controller AttachVolume.Attach succeeded for volume "testunitycsicluster01-vol-a0faa7f465"
<OUTPUT-SNIP>
As you can see the persistent volume created from the snapshot has been attached to the newly created MySQL pod. Let’s verify the data by logging into the adminer web interface and navigating to the database csi-unity-test and selecting the table “car”
That’s it! I hope you enjoyed and understood the end-to-end methodology of working with Dell EMC’s Unity-XT CSI implementation on EKS Anywhere clusters.
cheers,
Ambar@thecloudgarage
#iwork4dell