Sock Shop, a reference application architecture for various Kubernetes use cases

Ambar Hassani
Sep 16, 2022

This article is part of my EKS Anywhere series: EKS Anywhere, extending the Hybrid cloud momentum | by Ambar Hassani | Apr 2022 | Medium

Sock Shop is a polyglot application developed by Weaveworks to demonstrate a microservices-based architectural pattern. It implements a number of technologies (Spring Boot, Go, Redis, MySQL, MongoDB, etc.). We will use this application set to demonstrate various use cases on top of EKS Anywhere. Let’s look at sock-shop a little more closely to understand the architectural pattern.

Specific changes have been made to the original templates created by Weaveworks (a PVC sketch illustrating the storage-related changes follows the list below):

  • All data services are persistent in nature
  • Their volumes are provisioned on a Dell PowerStore array via its CSI driver
  • The front-end application is exposed via an SSL Ingress resource backed by the NGINX Ingress controller
  • MetalLB serves as the load balancer
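
To illustrate the first two changes, here is a minimal sketch of a PersistentVolumeClaim as the modified templates declare it for a data service such as carts-db. The size, access mode, and StorageClass below match the PVC listing shown later in this article; the exact manifests in the repository may differ slightly.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: carts-db
  namespace: sock-shop
spec:
  accessModes:
    - ReadWriteOnce                    # RWO, as seen in the kubectl get pvc output later
  storageClassName: powerstore-ext4    # StorageClass created by the CSI driver install script
  resources:
    requests:
      storage: 8Gi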

The deployment script and the entire set of YAML resources are included in my EKS Anywhere GitHub repository and can be found here: eks-anywhere/sock-shop at main · thecloudgarage/eks-anywhere (github.com)

Let’s begin deploying this use case by creating a fresh EKS Anywhere standalone workload cluster

CLUSTER_NAME=c4-eksa1
API_SERVER_IP=172.24.165.16
cd $HOME
cp $HOME/eks-anywhere/cluster-samples/cluster-sample.yaml $CLUSTER_NAME-eks-a-cluster.yaml
sed -i "s/workload-cluster-name/$CLUSTER_NAME/g" $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
sed -i "s/management-cluster-name/$CLUSTER_NAME/g" $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
sed -i "s/api-server-ip/$API_SERVER_IP/g" $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
eksctl anywhere create cluster -f $HOME/$CLUSTER_NAME-eks-a-cluster.yaml
Performing setup and validations
✅ Connected to server
✅ Authenticated to vSphere
✅ Datacenter validated
✅ Network validated
✅ Datastore validated
✅ Folder validated
✅ Resource pool validated
✅ Datastore validated
✅ Folder validated
✅ Resource pool validated
✅ Datastore validated
✅ Folder validated
✅ Resource pool validated
✅ Control plane and Workload templates validated
Provided VSphereMachineConfig sshAuthorizedKey is not set or is empty, auto-generating new key pair...
VSphereDatacenterConfig private key saved to c4-eksa1/eks-a-id_rsa. Use 'ssh -i c4-eksa1/eks-a-id_rsa capv@<VM-IP-Address>' to login to your cluster VM
✅ Vsphere Provider setup is valid
✅ Create preflight validations pass
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing storage class on workload cluster
Installing cluster-api providers on workload cluster
Installing EKS-A secrets on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing AddonManager and GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Writing cluster config file
Deleting bootstrap cluster
🎉 Cluster created!
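
For reference, here is a minimal sketch of the fields that the sed commands above substitute in cluster-sample.yaml. The surrounding structure is assumed from the EKS Anywhere v1alpha1 Cluster schema; the actual sample in the repository contains many more fields (vSphere machine configs, Kubernetes version, and so on).

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: workload-cluster-name          # replaced with $CLUSTER_NAME
spec:
  managementCluster:
    name: management-cluster-name      # replaced with $CLUSTER_NAME for a standalone cluster
  controlPlaneConfiguration:
    endpoint:
      host: "api-server-ip"            # replaced with $API_SERVER_IP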

Next, we will deploy the PowerStore CSI driver along with iSCSI support on the cluster.

ubuntu@eksa-admin:~$ source eks-anywhere/powerstore/install-powerstore-csi-driver.sh
Enter Cluster Name on which CSI driver needs to be installed
clusterName: c4-eksa1
Enter IP or FQDN of the PowerStore array
ipOrFqdnOfPowerStoreArray: 172.24.185.106
Enter Global Id of the PowerStore Array
globalIdOfPowerStoreArray: PS4ebb8d4e8488
Enter username of the PowerStore Array
userNameOfPowerStoreArray: iac
Enter password of the PowerStore Array
passwordOfPowerStoreArray:
------------------------------------------------------
> Installing CSI Driver: csi-powerstore on 1.21
------------------------------------------------------
------------------------------------------------------
> Checking to see if CSI Driver is already installed
------------------------------------------------------
Skipping verification at user request
|
|- Installing Driver Success
|--> Waiting for Deployment powerstore-controller to be ready Success
|--> Waiting for DaemonSet powerstore-node to be ready Success
------------------------------------------------------
> Operation complete
------------------------------------------------------
storageclass.storage.k8s.io/powerstore-ext4 created
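
Optionally, we can verify that the driver pods are healthy and the StorageClass exists (this assumes the installer's default csi-powerstore namespace):

kubectl get pods -n csi-powerstore
kubectl get storageclass powerstore-ext4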

Next, we will deploy the MetalLB load balancer to expose our sock-shop application via the NGINX Ingress.

helm repo add metallb https://metallb.github.io/metallb
helm upgrade --install metallb metallb/metallb --wait --timeout 15m --namespace metallb-system --create-namespace

Release "metallb" does not exist. Installing it now.
W0711 06:55:51.507363 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0711 06:55:51.510195 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0711 06:55:51.606228 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0711 06:55:51.607914 23991 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: metallb
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.
Next, we will use MetalLB's custom resources to create the IP address pool and advertise it.

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.24.165.21-172.24.165.25
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
EOF
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created

At this point, our EKS Anywhere cluster is ready for us to deploy the remaining resources (the NGINX Ingress controller and the sock-shop application).
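
Before deploying the application, MetalLB can optionally be sanity-checked with a throwaway LoadBalancer service, which should receive an EXTERNAL-IP from first-pool (the lb-test name here is arbitrary):

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test
kubectl delete svc,deployment lb-test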

Let’s create the sock-shop application by running the deployment script

ubuntu@eksa-admin:~/eks-anywhere/sock-shop$ source deploy-sockshop.sh 
fqdnOfSockShopFrontEnd: sockshop.thecloudgarage.com
Generating a RSA private key......................+++++
writing new private key to 'tls.key'
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
namespace/sock-shop created
secret/sockshop-tls created
persistentvolumeclaim/session-db created
deployment.apps/session-db created
service/session-db created
persistentvolumeclaim/carts-db created
deployment.apps/carts-db created
service/carts-db created
deployment.apps/carts created
service/carts created
persistentvolumeclaim/catalogue-db created
deployment.apps/catalogue-db created
service/catalogue-db created
deployment.apps/catalogue created
service/catalogue created
deployment.apps/front-end created
service/front-end created
persistentvolumeclaim/orders-db created
deployment.apps/orders-db created
service/orders-db created
deployment.apps/orders created
service/orders created
deployment.apps/payment created
service/payment created
deployment.apps/queue-master created
service/queue-master created
persistentvolumeclaim/rabbitmq created
deployment.apps/rabbitmq created
service/rabbitmq created
deployment.apps/shipping created
service/shipping created
persistentvolumeclaim/user-db created
deployment.apps/user-db created
service/user-db created
deployment.apps/user created
service/user created
ingress.networking.k8s.io/ingress-sockshop created
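
As seen at the top of the output, the script prompts for the front-end FQDN and generates a self-signed certificate for it, stored in the sockshop-tls secret. A rough sketch of the equivalent manual steps (the exact flags used by deploy-sockshop.sh may differ):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=sockshop.thecloudgarage.com"
kubectl create secret tls sockshop-tls -n sock-shop --key tls.key --cert tls.crt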

All the required resources are created. Let’s verify the pods, services, and persistent volumes.

PODS
ubuntu@eksa-admin:~$ kubectl get pods -n sock-shop
NAME READY STATUS RESTARTS AGE
carts-5f8b647fb-46hvr 1/1 Running 0 2m25s
carts-db-8674749f79-vghnv 1/1 Running 0 2m29s
catalogue-66f4c8b475-4xs6r 1/1 Running 0 2m25s
catalogue-db-c85647c59-dcphd 1/1 Running 0 2m27s
front-end-d7f4db57d-sr6cw 1/1 Running 0 2m25s
orders-65f58594cc-bfh9m 1/1 Running 0 2m25s
orders-db-6b5445d847-c6fll 1/1 Running 0 2m26s
payment-7f9f778df7-ztkfr 1/1 Running 0 2m27s
queue-master-5f47d5c85d-tnm72 1/1 Running 0 2m25s
rabbitmq-58d7978598-ffdp2 1/1 Running 0 2m29s
session-db-7fbcfd88df-cp2n8 1/1 Running 0 2m25s
shipping-6c4b7df76c-g7d84 1/1 Running 0 2m25s
user-db-8465fbc8b7-twn2m 1/1 Running 0 2m27s
user-f658c5cf4-nt969 1/1 Running 0 2m30s
SERVICES
ubuntu@eksa-admin:~$ kubectl get svc -n sock-shop
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
carts ClusterIP 10.99.23.71 <none> 80/TCP 2m59s
carts-db ClusterIP 10.111.113.3 <none> 27017/TCP 2m59s
catalogue ClusterIP 10.110.255.231 <none> 80/TCP 2m59s
catalogue-db ClusterIP 10.103.209.66 <none> 3306/TCP 2m59s
front-end ClusterIP 10.97.221.199 <none> 80/TCP 2m59s
orders ClusterIP 10.108.86.202 <none> 80/TCP 2m59s
orders-db ClusterIP 10.100.66.39 <none> 27017/TCP 2m59s
payment ClusterIP 10.108.147.199 <none> 80/TCP 2m59s
queue-master ClusterIP 10.109.118.42 <none> 80/TCP 2m59s
rabbitmq ClusterIP 10.100.2.241 <none> 5672/TCP 2m59s
session-db ClusterIP 10.103.85.231 <none> 6379/TCP 2m59s
shipping ClusterIP 10.107.0.220 <none> 80/TCP 2m59s
user ClusterIP 10.96.85.19 <none> 80/TCP 2m59s
user-db ClusterIP 10.105.191.111 <none> 27017/TCP 2m59s
PERSISTENT VOLUMES OVER POWERSTORE
ubuntu@eksa-admin:~$ kubectl get pvc -n sock-shop
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
carts-db Bound c4-eksa1-vol-8e773b5651 8Gi RWO powerstore-ext4 4m24s
catalogue-db Bound c4-eksa1-vol-ac86908b5f 8Gi RWO powerstore-ext4 4m24s
orders-db Bound c4-eksa1-vol-7dd2ce8407 8Gi RWO powerstore-ext4 4m24s
rabbitmq Bound c4-eksa1-vol-e8a41caff8 8Gi RWO powerstore-ext4 4m24s
session-db Bound c4-eksa1-vol-f7d2e11fbb 8Gi RWO powerstore-ext4 4m24s
user-db Bound c4-eksa1-vol-ca6462d1ab 8Gi RWO powerstore-ext4 4m24s
INGRESS RESOURCE
ubuntu@eksa-admin:~$ kubectl get ingress -n sock-shop
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-sockshop nginx sockshop.thecloudgarage.com 172.24.165.21 80, 443 5m2s

As we can see from the above output, our ingress resource for the hostname sockshop.thecloudgarage.com has been assigned the IP address 172.24.165.21. I have created a DNS entry mapping this host to the IP in order to access the sock-shop application.
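
If a DNS record isn't available, the host-to-IP mapping can also be tested ad hoc with curl's --resolve flag (-k is needed because the certificate is self-signed):

curl -sk --resolve sockshop.thecloudgarage.com:443:172.24.165.21 https://sockshop.thecloudgarage.com/ -o /dev/null -w '%{http_code}\n'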

Let’s also verify the persistent volumes in our PowerStore web console

We can see a total of 6 volumes listed in our PowerStore console, each starting with the prefix c4-eksa1. Next, we will look at volume expansion in the Dell PowerStore CSI driver via the external-resizer controller. Imagine a business scenario wherein a big-day sale is expected to bring a greater than usual number of users, and hence more concurrent user sessions. To build on this scenario, let’s edit the persistent volume claim associated with our sock-shop session data service, session-db.

KUBE_EDITOR="nano" kubectl edit pvc session-db -n sock-shop

This opens the nano editor, where we can change the requested size of the persistent volume from the current value of 8Gi to a new value of 10Gi.
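
The same change can be made non-interactively with kubectl patch, which may be preferable in scripts:

kubectl patch pvc session-db -n sock-shop -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'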

kubectl describe pvc session-db -n sock-shop

A snippet of the events in the output indicates the resizing:

Normal  Resizing                    75s    external-resizer csi-powerstore.dellemc.com  External resizer is resizing volume c4-eksa1-vol-f7d2e11fbb
Normal  FileSystemResizeSuccessful  7m52s  kubelet                                      MountVolume.NodeExpandVolume succeeded for volume "c4-eksa1-vol-f7d2e11fbb"
kubectl get pvc session-db -n sock-shop
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
session-db Bound c4-eksa1-vol-f7d2e11fbb 10Gi RWO powerstore-ext4 8m4s

We can verify the same in the PowerStore web console, where the session-db volume is now shown at 10GB.

Next, let’s scale our front-end deployment replicas from the current value of 1 to a new value of 3.

KUBE_EDITOR="nano" kubectl edit deployment front-end -n sock-shop
kubectl get pods -n sock-shop --selector=name=front-end
NAME READY STATUS RESTARTS AGE
front-end-d7f4db57d-fzl5b 1/1 Running 0 2m59s
front-end-d7f4db57d-ntdf4 1/1 Running 0 2m59s
front-end-d7f4db57d-sr6cw 1/1 Running 0 10m37s
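
As an aside, the same scale-out can be done without an interactive editor using kubectl scale:

kubectl scale deployment front-end -n sock-shop --replicas=3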

Next, let’s understand and validate the sock-shop application workflow at a high level

Let’s register a new user, which invokes the user microservice and user-db.

Once registration completes successfully, we can click on the catalogue tab, which invokes the catalogue microservice and catalogue-db.
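
These UI interactions can also be exercised from the command line. For example, assuming the front-end proxies the catalogue API at /catalogue as in the upstream Weaveworks demo, the catalogue can be fetched directly:

curl -sk https://sockshop.thecloudgarage.com/catalogue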

We select a particular item in the catalogue to view further details and add it to the cart, invoking the carts microservice and carts-db. In parallel, the orders service is invoked to calculate the total price along with additional costs.

Before proceeding to place the order, we need to update our address and credit card information, which again invokes the user microservice and user-db.

Finally, upon checkout, the orders microservice, orders-db, queue-master, and the shipping microservice are invoked to place and display the ordered item.

Now for an obvious yet important check: persistence of the data services. Let’s delete the pod for orders-db and, upon recreation, browse the application and verify whether the above order ID still persists.

kubectl delete pod orders-db-6b5445d847-c6fll -n sock-shop
pod "orders-db-6b5445d847-c6fll" deleted

We can see a new pod has been recreated and the persistent volume has been re-attached.

kubectl get pods -n sock-shop --selector=name=orders-db
NAME READY STATUS RESTARTS AGE
orders-db-6b5445d847-m6k8l 1/1 Running 0 50s
kubectl describe pod orders-db-6b5445d847-m6k8l -n sock-shop
<OUTPUT-SNIP>
Events:
Type    Reason                  Age    From                     Message
Normal  SuccessfulAttachVolume  2m18s  attachdetach-controller  AttachVolume.Attach succeeded for volume "c4-eksa1-vol-7dd2ce8407"

Upon logging back into the account that was created, I can still see that my order ID exists. This confirms the persistence of orders-db.

That brings us to the end of this article, which highlighted a simple but powerful use case demonstrating how easily cloud-native applications can be deployed on EKS Anywhere.

cheers,

Ambar@thecloudgarage

#iwork4dell
