Data on Kubernetes with PowerFlex CSI: IAM-as-a-service with Keycloak

Ambar Hassani
9 min read · Jan 22, 2024

Keycloak is a highly versatile system that provides a broad range of Identity and Access Management (IAM) services. Its sheer maturity and adoption make it a ubiquitous choice for single sign-on and federated identities. Even more so when deployed on Kubernetes, where cloud-native agility gives it multiple advantages over traditional deployments.

In this blog, we will delve into the specifics of deploying a Keycloak-based IAM-as-a-service on Kubernetes with the Dell PowerFlex CSI driver and the NGINX Ingress controller.

To begin, I have an EKS Anywhere cluster that is generally used as a shared-services cluster. In other words, centralized multi-cluster or typical enterprise-wide services can all be collocated on this single cluster.

This cluster already has MetalLB, the Kubernetes community NGINX Ingress controller, and the PowerFlex CSI driver deployed. I also have a DNS entry that resolves the intended FQDN keycloak.oidc.thecloudgarage.com to 10.204.111.56, which is the IP address of my NGINX Ingress controller.

kubectl get nodes
NAME STATUS ROLES AGE VERSION
m1-dzqkb Ready control-plane 6d16h v1.25.11-eks-984f31e
m1-fxjpz Ready control-plane 6d16h v1.25.11-eks-984f31e
m1-md-0-6764569c55xn584c-6jptb Ready <none> 6d16h v1.25.11-eks-984f31e
m1-md-0-6764569c55xn584c-rv9qs Ready <none> 6d16h v1.25.11-eks-984f31e
m1-xc6tr Ready control-plane 6d16h v1.25.11-eks-984f31e

CSI DRIVER FOR DELL PowerFlex 4.5 Software Defined Storage
kubectl get pods -n vxflexos
NAME READY STATUS RESTARTS AGE
vxflexos-controller-565dc6ff-5kcmk 5/5 Running 0 3m25s
vxflexos-controller-565dc6ff-snjfr 5/5 Running 0 3m25s
vxflexos-node-6jnjr 2/2 Running 0 3m25s
vxflexos-node-fdzz4 2/2 Running 0 3m25s

kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
powerflex-sc (default) csi-vxflexos.dellemc.com Delete WaitForFirstConsumer true 4m11s
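
For reference, a StorageClass for the vxflexos provisioner with the properties shown above would look roughly like the sketch below. This is a minimal sketch, not the exact manifest used in this environment: the parameter names follow Dell’s sample StorageClass for the PowerFlex CSI driver, and the storagepool and systemID values are environment-specific placeholders.

cat <<EOF > powerflex-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerflex-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-vxflexos.dellemc.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  storagepool: <your-storage-pool>      # placeholder, environment-specific
  systemID: <your-powerflex-system-id>  # placeholder, environment-specific
  csi.storage.k8s.io/fstype: ext4
EOF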

kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
metallb-controller-6575c65fcc-ft7r7 1/1 Running 0 6d14h
metallb-speaker-d78jt 4/4 Running 0 6d14h
metallb-speaker-f79j7 4/4 Running 0 6d14h
metallb-speaker-glcqm 4/4 Running 0 6d14h
metallb-speaker-pqtb4 4/4 Running 0 6d14h
metallb-speaker-xprw2 4/4 Running 0 6d14h

kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-nv7r6 0/1 Completed 0 6d13h
ingress-nginx-admission-patch-pb7sl 0/1 Completed 1 6d13h
ingress-nginx-controller-5d84c98ffb-cmlc2 1/1 Running 0 6d13h

kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.97.254.26 10.204.111.56 80:31636/TCP,443:32572/TCP 6d13h
ingress-nginx-controller-admission ClusterIP 10.109.42.220 <none> 443/TCP 6d13h

nslookup keycloak.oidc.thecloudgarage.com
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: keycloak.oidc.thecloudgarage.com
Address: 10.204.111.56

One important inclusion is to set the proxy buffer parameters on the NGINX Ingress controller. These are required for Keycloak redirects, whose response headers can exceed NGINX’s default proxy buffers. The example below shows the relevant keys from my NGINX Ingress controller ConfigMap.

  allow-snippet-annotations: "true"
  proxy-buffer-size: 32k
  proxy-buffers: 4 32k
  proxy-read-timeout: "600"
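
A minimal way to apply these keys with kubectl patch, assuming the controller’s ConfigMap uses the usual community-chart defaults (name ingress-nginx-controller in the ingress-nginx namespace):

kubectl -n ingress-nginx patch configmap ingress-nginx-controller --type merge -p \
  '{"data":{"allow-snippet-annotations":"true","proxy-buffer-size":"32k","proxy-buffers":"4 32k","proxy-read-timeout":"600"}}'

# the controller watches its ConfigMap and reloads NGINX automatically; verify the keys with
kubectl -n ingress-nginx get configmap ingress-nginx-controller -o yaml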

In addition, I have a directory named common-certs containing a wildcard certificate that is typically used as the TLS secret for all Ingress services.

Let’s begin!

There are multiple Helm chart providers for Keycloak. In this example, we are going to use the one from codecentric. You can refer to the chart’s values file for all available options: https://github.com/codecentric/helm-charts/blob/master/charts/keycloak/values.yaml
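
If you want to review every configurable option locally before writing your own values file, a quick way (once the codecentric repo has been added, as shown further below) is:

helm show values codecentric/keycloak > keycloak-default-values.yaml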

Prepare Helm values for Keycloak.


rm -rf $HOME/identity
mkdir -p $HOME/identity
cd $HOME/identity

cat <<EOF > $HOME/identity/helm-values-keycloak.yaml
extraEnv: |
  - name: KEYCLOAK_LOGLEVEL
    value: DEBUG
  - name: KEYCLOAK_USER
    value: admin
  - name: KEYCLOAK_PASSWORD
    value: admin@12345678
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"

args:
  - -Dkeycloak.profile.feature.docker=enabled

ingress:
  enabled: true
  ingressClassName: nginx
  rules:
    - host: keycloak.oidc.thecloudgarage.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - keycloak.oidc.thecloudgarage.com
      secretName: thecloudgarage-tls

postgresql:
  enabled: true
  postgresqlPassword: asdfaso97sadfjylfasdsf78
EOF

Deploy Keycloak with the above Helm values file. Note that we are using powerflex-sc as the storage class, which provisions volumes from Dell’s PowerFlex 4.5 software-defined storage via the CSI integration.


kubectl create ns identity

cd $HOME/identity
kubectl create secret tls thecloudgarage-tls -n identity --key $HOME/common-certs/tls.key --cert $HOME/common-certs/tls.crt

helm repo add codecentric https://codecentric.github.io/helm-charts
helm repo update

helm upgrade --install keycloak codecentric/keycloak \
--set postgresql.global.storageClass="powerflex-sc" \
--values $HOME/identity/helm-values-keycloak.yaml \
--namespace identity

Output log

Release "keycloak" does not exist. Installing it now.
NAME: keycloak
LAST DEPLOYED: Mon Jan 22 18:20:19 2024
NAMESPACE: identity
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
***********************************************************************
* *
* Keycloak Helm Chart by codecentric AG *
* *
***********************************************************************

Keycloak was installed with an Ingress and can be reached at the following URL(s):

- https://keycloak.oidc.thecloudgarage.com/
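
Before inspecting the resources, you can optionally wait for the workloads to become ready. The StatefulSet names below are inferred from the pod names that follow (keycloak-0 and keycloak-postgresql-0), so adjust them if your release name differs.

kubectl rollout status statefulset/keycloak -n identity
kubectl rollout status statefulset/keycloak-postgresql -n identity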

Let’s look at the pods and persistent volume.

One pod each for Keycloak and PostgreSQL; we can increase the count using the Helm values.

kubectl get pods -n identity
NAME READY STATUS RESTARTS AGE
keycloak-0 1/1 Running 0 110s
keycloak-postgresql-0 1/1 Running 0 110s

kubectl get pvc -n identity
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-keycloak-postgresql-0 Bound m1-vol-69d6a1bd73 8Gi RWO powerflex-sc 2m46s

kubectl get pv -n identity
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
m1-vol-69d6a1bd73 8Gi RWO Delete Bound identity/data-keycloak-postgresql-0 powerflex-sc 3m12s

We can also observe the volume within the PowerFlex 4.5 manager via its web console.
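
A quick way to correlate the Kubernetes PV with the volume shown in the PowerFlex manager is to read the CSI volume handle off the PV (the volume name is taken from the output above):

kubectl get pv m1-vol-69d6a1bd73 -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'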

Let’s validate the Ingress for the Keycloak server.

kubectl get ingress -n identity
NAME CLASS HOSTS ADDRESS PORTS AGE
keycloak nginx keycloak.oidc.thecloudgarage.com 10.204.111.56 80, 443 5m27s

Since I already have the DNS entry in place for the above mapping, we can go ahead and access the Keycloak URL.

We can log in using admin/admin@12345678 (set in the Helm values file).
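
Before handing the same credentials to Terraform, a quick sanity check from the shell is to request a token from the master realm through the Ingress. This is a minimal sketch; the /auth prefix is the one this legacy chart serves, as can also be seen in the Terraform error paths later.

curl -sk -X POST \
  -d "client_id=admin-cli" \
  -d "username=admin" \
  -d "password=admin@12345678" \
  -d "grant_type=password" \
  https://keycloak.oidc.thecloudgarage.com/auth/realms/master/protocol/openid-connect/token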

As of this stage, we have only the default “master” realm available. We will use Terraform to:

  • create a new OIDC realm named “oidc1”
  • create 3 groups and 3 users, along with group membership mappings

Apply the Terraform template:

cd $HOME/identity
mkdir -p $HOME/identity/keycloak-config
cd $HOME/identity/keycloak-config
cat <<EOF > initial-config.tf
# Initial config
terraform {
  required_providers {
    keycloak = {
      source  = "mrparkers/keycloak"
      version = "3.6.0"
    }
  }
}
# configure the keycloak provider
provider "keycloak" {
  client_id                = "admin-cli"
  username                 = "admin"
  password                 = "admin@12345678"
  url                      = "https://keycloak.oidc.thecloudgarage.com"
  tls_insecure_skip_verify = true
}
locals {
  realm_id = "oidc1"
  groups   = ["level1", "level2", "level3"]
  user_groups = {
    ambar-superadmin = ["level1"]
    ambar-groupadmin = ["level2"]
    ambar-dev        = ["level3"]
  }
}
# create the basic realm
resource "keycloak_realm" "realm" {
  realm        = local.realm_id
  enabled      = true
  verify_email = true
}
# create groups
resource "keycloak_group" "groups" {
  for_each = toset(local.groups)
  realm_id = local.realm_id
  name     = each.key
}
# create users
# note the escaped dollar sign (\$), otherwise the cat EOF heredoc results in a bad substitution
resource "keycloak_user" "users" {
  for_each       = local.user_groups
  realm_id       = local.realm_id
  username       = each.key
  enabled        = true
  email          = "\${each.key}@thecloudgarage.com"
  email_verified = true
  first_name     = each.key
  last_name      = each.key
  initial_password {
    value = each.key
  }
}
# configure user group memberships
resource "keycloak_user_groups" "user_groups" {
  for_each  = local.user_groups
  realm_id  = local.realm_id
  user_id   = keycloak_user.users[each.key].id
  group_ids = [for g in each.value : keycloak_group.groups[g].id]
}
# create the groups openid client scope
resource "keycloak_openid_client_scope" "groups" {
  realm_id               = local.realm_id
  name                   = "groups"
  include_in_token_scope = true
  gui_order              = 1
}
resource "keycloak_openid_group_membership_protocol_mapper" "groups" {
  realm_id        = local.realm_id
  client_scope_id = keycloak_openid_client_scope.groups.id
  name            = "groups"
  claim_name      = "groups"
  full_path       = false
}
EOF
terraform init && terraform plan && terraform apply -auto-approve

#1st apply will create the realm but will error for other resources

sleep 10

terraform plan && terraform apply -auto-approve

Note that, as flagged in the comment at the end of the template above, the first run will error: the group, user, and client-scope resources reference the realm by name (local.realm_id) rather than through the realm resource, so Terraform attempts to create them before the realm actually exists. Hence we insert a sleep 10 command and re-run the plan and apply commands.
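
If you prefer not to run the commands twice by hand, the same workaround can be folded into a simple shell retry:

terraform init
terraform apply -auto-approve || { sleep 10; terraform apply -auto-approve; }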

After the first run, the errors shown below appear. Don’t fret! The sleep and re-run of the commands will complete the deployment successfully.

ERROR ON THE FIRST RUN AS THE REALM IS NOT YET AVAILABLE TO THE OTHER RESOURCES...
DONT FRET!!! LET THE WHOLE THING PROGRESS TILL THE END OF SUCCESSFUL COMPLETION.

│ Error: error sending POST request to /auth/admin/realms/oidc1/groups: 404 Not Found. Response body: {"error":"Realm not found."}

│ with keycloak_group.groups["level1"],
│ on initial-config.tf line 34, in resource "keycloak_group" "groups":
│ 34: resource "keycloak_group" "groups" {



│ Error: error sending POST request to /auth/admin/realms/oidc1/groups: 404 Not Found. Response body: {"error":"Realm not found."}

│ with keycloak_group.groups["level3"],
│ on initial-config.tf line 34, in resource "keycloak_group" "groups":
│ 34: resource "keycloak_group" "groups" {



│ Error: error sending POST request to /auth/admin/realms/oidc1/groups: 404 Not Found. Response body: {"error":"Realm not found."}

│ with keycloak_group.groups["level2"],
│ on initial-config.tf line 34, in resource "keycloak_group" "groups":
│ 34: resource "keycloak_group" "groups" {



│ Error: error sending POST request to /auth/admin/realms/oidc1/users: 404 Not Found. Response body: {"error":"Realm not found."}

│ with keycloak_user.users["ambar-groupadmin"],
│ on initial-config.tf line 41, in resource "keycloak_user" "users":
│ 41: resource "keycloak_user" "users" {



│ Error: error sending POST request to /auth/admin/realms/oidc1/users: 404 Not Found. Response body: {"error":"Realm not found."}

│ with keycloak_user.users["ambar-superadmin"],
│ on initial-config.tf line 41, in resource "keycloak_user" "users":
│ 41: resource "keycloak_user" "users" {



│ Error: error sending POST request to /auth/admin/realms/oidc1/users: 404 Not Found. Response body: {"error":"Realm not found."}

│ with keycloak_user.users["ambar-dev"],
│ on initial-config.tf line 41, in resource "keycloak_user" "users":
│ 41: resource "keycloak_user" "users" {



│ Error: error sending POST request to /auth/admin/realms/oidc1/client-scopes: 404 Not Found. Response body: {"error":"Realm not found."}

│ with keycloak_openid_client_scope.groups,
│ on initial-config.tf line 62, in resource "keycloak_openid_client_scope" "groups":
│ 62: resource "keycloak_openid_client_scope" "groups" {


keycloak_realm.realm: Refreshing state... [id=oidc1]

<SNIP>
<SNIP>

Plan: 11 to add, 0 to change, 0 to destroy.
keycloak_group.groups["level2"]: Creating...
keycloak_user.users["ambar-dev"]: Creating...
keycloak_openid_client_scope.groups: Creating...
keycloak_group.groups["level1"]: Creating...
keycloak_group.groups["level3"]: Creating...
keycloak_user.users["ambar-superadmin"]: Creating...
keycloak_user.users["ambar-groupadmin"]: Creating...
keycloak_group.groups["level2"]: Creation complete after 0s [id=f3d58985-59b5-46ea-b3e5-d4cedeba45f7]
keycloak_group.groups["level1"]: Creation complete after 0s [id=13727073-d024-4148-b5b3-afdbca57b485]
keycloak_group.groups["level3"]: Creation complete after 0s [id=8ab56a4b-2b09-4fea-aaa4-be7829662902]
keycloak_openid_client_scope.groups: Creation complete after 1s [id=3de6f83e-2ba2-4422-a5a6-5c9442874a2b]
keycloak_openid_group_membership_protocol_mapper.groups: Creating...
keycloak_openid_group_membership_protocol_mapper.groups: Creation complete after 0s [id=c791b60d-b98e-4fa1-8014-9956e3b3677a]
keycloak_user.users["ambar-superadmin"]: Creation complete after 1s [id=85273288-2b21-488b-8f83-d4f068b934f5]
keycloak_user.users["ambar-groupadmin"]: Creation complete after 1s [id=f8d6f466-dbda-4b42-9a03-7bc1c3b2d0ed]
keycloak_user.users["ambar-dev"]: Creation complete after 1s [id=b69a6b0a-897c-4ba9-8f3e-470947f7c8d2]
keycloak_user_groups.user_groups["ambar-dev"]: Creating...
keycloak_user_groups.user_groups["ambar-superadmin"]: Creating...
keycloak_user_groups.user_groups["ambar-groupadmin"]: Creating...
keycloak_user_groups.user_groups["ambar-superadmin"]: Creation complete after 0s [id=oidc1/85273288-2b21-488b-8f83-d4f068b934f5]
keycloak_user_groups.user_groups["ambar-groupadmin"]: Creation complete after 0s [id=oidc1/f8d6f466-dbda-4b42-9a03-7bc1c3b2d0ed]
keycloak_user_groups.user_groups["ambar-dev"]: Creation complete after 0s [id=oidc1/b69a6b0a-897c-4ba9-8f3e-470947f7c8d2]

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

We can verify the same in the Keycloak UI.

The new realm named “oidc1” has been created.

The three groups in this realm have also been created.

The three users have also been created.

The group and user mappings are also in place.
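
Besides the UI, the new realm can also be verified from the command line by fetching its OIDC discovery document through the Ingress:

curl -sk https://keycloak.oidc.thecloudgarage.com/auth/realms/oidc1/.well-known/openid-configuration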

And there we have it: Keycloak deployed on Kubernetes with persistence and Ingress.

If you are familiar with Keycloak, get going with configuring further realms and clients and building a central SSO and identity-management-as-a-service. For now, that’s a wrap; we will see how to secure various entities via Keycloak in other blogs.

Happy kub’ing

cheers,

Ambar@thecloudgarage

#iwork4dell
