EKS Anywhere, securely accessing AWS public cloud services via IRSA

Ambar Hassani
8 min read · May 31, 2024


Ever come across a scenario where you want to securely access AWS services from your on-premises Kubernetes clusters? It can seem daunting; however, the approach is secure by design and the benefits are enormous. While there are many variations of how this can be achieved, one that stands out is IRSA (IAM Roles for Service Accounts). Sadly, most documentation on the internet focuses on deploying IRSA on EKS in the public cloud.

In this blog, we look at EKS Anywhere as an example of an EKS Kubernetes deployment outside the public cloud and observe how we can securely access AWS public cloud services via IRSA.

For starters, let me answer a couple of quick questions that may be helpful:

Can I deploy IRSA on new and/or existing EKS-Anywhere clusters?

Answer: Yes

Can IRSA be deployed on virtualized EKS-Anywhere clusters and/or bare-metal clusters?

Answer: Yes

The documentation mentions that I have to store my Kubernetes keys as a public object in S3. Does this pose a security risk?

Answer: No, we are only placing the public key of the EKS-Anywhere cluster in the bucket

Let’s delve into the practicalities of deploying IRSA into an EKS Anywhere cluster. For starters, the below visualization provides a high-level overview of how this would work.

Procedure

Practical advice: Ensure that the paths are adjusted accordingly for the directories, etc. Herein we are assuming that the starting point for the exercise is $HOME on an Ubuntu machine. In addition, the cluster directory created by EKS-Anywhere is also directly under $HOME, i.e. $HOME/$CLUSTER_NAME

Ideally, this should be done from the EKS Anywhere admin machine, which should have the AWS CLI installed on it.
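As a quick optional sanity check, confirm that the AWS CLI is installed and that the credentials you plan to use are valid once they are exported:

aws --version
aws sts get-caller-identity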

  • Install go
curl -OL https://golang.org/dl/go1.16.7.linux-amd64.tar.gz
sha256sum go1.16.7.linux-amd64.tar.gz
sudo tar -C /usr/local -xvf go1.16.7.linux-amd64.tar.gz
# Append the Go binary path to your profile and reload it
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
source ~/.profile

Here is what we achieve via the below commands:

  • Create an S3 bucket to store our OIDC discovery information. This bucket will be accessed by the OIDC identity provider configuration in AWS IAM
  • Create the OIDC identity provider configuration in AWS IAM
  • Create a trust for our EKS-Anywhere Kubernetes service account to assume an IAM role via AWS STS
  • Allow a certain set of services and actions (for example, AmazonS3ReadOnlyAccess) via the above role assumption
CLUSTER_NAME=eksaclustertest001
timestamp=$(date +%s)
export AWS_ACCESS_KEY_ID=replace-aws-key-id
export AWS_SECRET_ACCESS_KEY=replace-aws-secret-access-key
export AWS_DEFAULT_REGION=replace-aws-default-region
export S3_BUCKET=$CLUSTER_NAME-irsa-oidc-$timestamp
#
export HOSTNAME=s3-$AWS_DEFAULT_REGION.amazonaws.com
export ISSUER_HOSTPATH=$HOSTNAME/$S3_BUCKET
#
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3api create-bucket --bucket $S3_BUCKET --create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION --object-ownership BucketOwnerPreferred
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3api delete-public-access-block --bucket $S3_BUCKET
#
cd $HOME
mkdir -p $HOME/eksa-irsa/$CLUSTER_NAME
cd $HOME/eksa-irsa/$CLUSTER_NAME
git clone https://github.com/thecloudgarage/aws-irsa-example.git
git clone https://github.com/aws/amazon-eks-pod-identity-webhook.git
#
cd $HOME/eksa-irsa/$CLUSTER_NAME/aws-irsa-example
#
cat <<EOF > discovery.json
{
  "issuer": "https://$ISSUER_HOSTPATH/",
  "jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
  "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ],
  "claims_supported": [
    "sub",
    "iss"
  ]
}
EOF
#
CA_THUMBPRINT=$(openssl s_client -connect s3-$AWS_DEFAULT_REGION.amazonaws.com:443 -servername s3-$AWS_DEFAULT_REGION.amazonaws.com -showcerts < /dev/null 2>/dev/null | openssl x509 -in /dev/stdin -sha1 -noout -fingerprint | cut -d '=' -f 2 | tr -d ':')

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam create-open-id-connect-provider \
--url https://$ISSUER_HOSTPATH \
--thumbprint-list $CA_THUMBPRINT \
--client-id-list sts.amazonaws.com

echo "The service-account-issuer as below:"
echo "https://$ISSUER_HOSTPATH"
#
# CREATE THE IAM ROLE FOR TRUST
# NOTE THAT WE ARE GRANTING TRUST TO THE SERVICE ACCOUNTS THAT WILL BE
# CREATED IN DEFAULT NAMESPACE OF THE EKS-A CLUSTER
# THIS CAN BE ALTERED AT THIS STAGE OR EVEN LATER
# IN THE FOLLOWING EXAMPLES, WE WILL ADD MORE NAMESPACES
#
ACCOUNT_ID=$(AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws sts get-caller-identity --query Account --output text)
PROVIDER_ARN="arn:aws:iam::$ACCOUNT_ID:oidc-provider/$ISSUER_HOSTPATH"
ROLE_NAME=s3-ops

cat > irp-trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "$PROVIDER_ARN"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${ISSUER_HOSTPATH}:sub": "system:serviceaccount:default:${ROLE_NAME}"
        }
      }
    }
  ]
}
EOF
#
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam create-role \
--role-name $ROLE_NAME \
--assume-role-policy-document file://irp-trust-policy.json

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam update-assume-role-policy \
--role-name $ROLE_NAME \
--policy-document file://irp-trust-policy.json

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam attach-role-policy \
--role-name $ROLE_NAME \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
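
Optionally, confirm that the role, its trust policy, and the attached policy look as expected (a read-only check using the same credentials):

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam get-role \
--role-name $ROLE_NAME --query Role.AssumeRolePolicyDocument

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam list-attached-role-policies \
--role-name $ROLE_NAME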

Create a new OR update an existing EKS Anywhere cluster for IRSA

With IRSA on non-EKS clusters, the Kubernetes API server generally needs to be reconfigured with the parameters noted below. Luckily, EKS Anywhere provides a simple cluster spec configuration attribute called podIamConfig, which will automatically configure the API server with valid values.

# api-server attributes generally associated with IRSA on Kubernetes clusters
--service-account-key-file
--service-account-signing-key-file
--api-audiences
--service-account-issuer
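
For reference, on a self-managed control plane these flags would typically be populated along the following lines. The values and paths below are purely illustrative; with EKS Anywhere, podIamConfig takes care of this for you.

--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key
--api-audiences=sts.amazonaws.com
--service-account-issuer=https://s3-<region>.amazonaws.com/<oidc-bucket>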

To enable podIamConfig, we will edit the Cluster configuration template (in the case of new clusters) or the actual configuration file (for existing clusters), usually located under the $CLUSTER_NAME directory.

One can do this manually or use the below sed command.

Herein, the $ISSUER_HOSTPATH variable has already been derived in the previous set of commands. Use that variable if you are editing manually; otherwise, the sed command takes care of it automatically.

cd $HOME/$CLUSTER_NAME/
sed -i '/spec:$/r'<(
  echo "  podIamConfig:"
  echo "    serviceAccountIssuer: https://$ISSUER_HOSTPATH"
) $CLUSTER_NAME-eks-a-cluster.yaml
cd $HOME

In either case, the cluster spec should look like the below snippet. Note that the value used for serviceAccountIssuer is actually a variable input derived via https://$ISSUER_HOSTPATH, wherein the ISSUER_HOSTPATH variable is already declared in the above steps.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  annotations:
    anywhere.eks.amazonaws.com/managed-by-cli: "true"
    anywhere.eks.amazonaws.com/management-components-version: v0.19.5
  name: eksa-test-cluster-1
  namespace: default
spec:
  podIamConfig:
    serviceAccountIssuer: https://s3-eu-west-1.amazonaws.com/eksa-test-cluster-1-irsa-oidc-xxxxxxxxx

Proceed with the next step once your cluster template (new clusters) or the cluster configuration file (existing clusters) resembles the above.

NEXT IS A VERY IMPORTANT STEP THAT DEPENDS ON WHETHER YOU ARE DEPLOYING IRSA ON AN EXISTING EKS-A CLUSTER OR CREATING A NEW EKS-A CLUSTER WITH IRSA ENABLED. SO, PAY ATTENTION!

  • Creating a NEW cluster with IRSA enabled:
eksctl anywhere create cluster -f <your-cluster-config>.yaml

# EXECUTE THE BELOW COMMANDS AFTER CREATING A NEW EKS-A CLUSTER
# ENSURE YOUR KUBECTL IS ABLE TO COMMUNICATE WITH THE NEW EKS-A CLUSTER BEFORE PROCEEDING

#Navigate to the correct directory
cd $HOME/eksa-irsa/$CLUSTER_NAME/aws-irsa-example

#Generate the public keys for the cluster
kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub
go run ./main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json

#Upload discovery.json and keys.json to S3
#Note discovery.json was created in previous steps
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json
  • Updating an EXISTING EKS-A cluster for IRSA
# FIRST EXECUTE THE BELOW COMMANDS AND THEN UPGRADE THE CLUSTER

#Navigate to the correct directory
cd $HOME/eksa-irsa/$CLUSTER_NAME/aws-irsa-example

#Generate the public keys for the cluster
kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub
go run ./main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json

#Upload discovery.json and keys.json to S3
#Note discovery.json was created in previous steps
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json

# ONCE THE ABOVE COMMANDS ARE EXECUTED, UPGRADE THE EXISTING EKS-A CLUSTER
# EXAMPLE:

cd $HOME
eksctl anywhere upgrade plan cluster -f $HOME/$CLUSTER_NAME/$CLUSTER_NAME-eks-a-cluster.yaml
eksctl anywhere upgrade cluster -f $HOME/$CLUSTER_NAME/$CLUSTER_NAME-eks-a-cluster.yaml
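
For reference (this applies to both the new-cluster and existing-cluster paths above), the keys.json generated by main.go is a standard JWKS document. An illustrative, truncated example with hypothetical key material looks roughly like this; note the duplicated key with an empty kid, which is exactly what the jq expression produces:

{
  "keys": [
    {
      "use": "sig",
      "kty": "RSA",
      "kid": "ab12cd34ef56...",
      "alg": "RS256",
      "n": "xGb9...truncated...",
      "e": "AQAB"
    },
    {
      "use": "sig",
      "kty": "RSA",
      "kid": "",
      "alg": "RS256",
      "n": "xGb9...truncated...",
      "e": "AQAB"
    }
  ]
}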

In both of the above cases, verify that your system pods are running fine and that there are no crashes or errors.
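
A couple of optional sanity checks can help at this point. These assume curl is available on the admin machine and that your kubectl context points at the cluster (on kubeadm-based control planes the API server pod carries the component=kube-apiserver label):

# The OIDC discovery documents should be publicly readable
curl https://$ISSUER_HOSTPATH/.well-known/openid-configuration
curl https://$ISSUER_HOSTPATH/keys.json

# The kube-apiserver should now carry the issuer configured via podIamConfig
kubectl get pods -n kube-system -l component=kube-apiserver -o yaml | grep service-account-issuer

# System pods should be healthy
kubectl get pods -A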

COMPLETE THE IRSA CONFIGURATIONS

The below steps will create the pod identity webhook and a service account with annotations in the default namespace.

The pod identity webhook is part of AWS IRSA and is open sourced by AWS so that it can be used in any Kubernetes deployment. The webhook mutates pods whose ServiceAccount carries an eks.amazonaws.com/role-arn annotation, adding a projected ServiceAccount token volume and environment variables that configure the AWS SDKs to automatically assume the specified role.
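
To illustrate the effect, here is a hypothetical, trimmed-down view of what the mutation roughly adds to a matching pod (the exact fields may differ by webhook version; the account ID and role name are examples):

# Environment variables injected into the container
env:
- name: AWS_ROLE_ARN
  value: arn:aws:iam::111122223333:role/s3-ops
- name: AWS_WEB_IDENTITY_TOKEN_FILE
  value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token

# Projected service account token volume injected into the pod
volumes:
- name: aws-iam-token
  projected:
    sources:
    - serviceAccountToken:
        audience: sts.amazonaws.com
        expirationSeconds: 86400
        path: token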

# DEPLOY THE POD IDENTITY WEBHOOK

cd $HOME/eksa-irsa/$CLUSTER_NAME/amazon-eks-pod-identity-webhook
make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest

# CREATE THE SERVICE ACCOUNT IN DEFAULT NAMESPACE

S3_ROLE_ARN=$(AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam get-role --role-name $ROLE_NAME --query Role.Arn --output text)

kubectl create sa $ROLE_NAME
kubectl annotate sa $ROLE_NAME eks.amazonaws.com/role-arn=$S3_ROLE_ARN
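
The two kubectl commands above are equivalent to applying a ServiceAccount manifest along these lines (shown for reference; the role ARN is a hypothetical example of what $S3_ROLE_ARN resolves to):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-ops
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-ops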

MOMENTS OF TRUTH: TEST AND VALIDATE

Scenario-1: Test pod configured in the default namespace and associated with a service account that is trusted to assume a role limited to S3 read-only access

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: eksa-irsa-test
spec:
  serviceAccountName: $ROLE_NAME
  containers:
  - name: eks-irsa-test
    image: amazon/aws-cli:latest
    command: ["sleep"]
    args: ["3600"]
  restartPolicy: Never
EOF
  • Verify S3 access from the pod
kubectl exec -it eksa-irsa-test -- /bin/bash
aws s3 ls
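
Besides listing buckets, you can also confirm that the pod is using the assumed role rather than any static credentials (run these inside the pod):

# Should return the assumed-role ARN for s3-ops
aws sts get-caller-identity

# The injected IRSA environment variables should be present
env | grep AWS_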

Scenario-2: Test pod in a non-default namespace with an associated service account that is trusted to assume a role restricted to S3 read-only access

  • Update the trust policy associated with the IAM role to include the new namespace and the service account assuming the role
  • Create the new service account in the non-default namespace
cd $HOME/eksa-irsa/$CLUSTER_NAME/aws-irsa-example
NEW_NAMESPACE1=test1
kubectl create ns $NEW_NAMESPACE1
cat > irp-trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "$PROVIDER_ARN"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${ISSUER_HOSTPATH}:sub": [
            "system:serviceaccount:default:${ROLE_NAME}",
            "system:serviceaccount:$NEW_NAMESPACE1:${ROLE_NAME}"
          ]
        }
      }
    }
  ]
}
EOF
#
AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam update-assume-role-policy --role-name $ROLE_NAME --policy-document file://irp-trust-policy.json
#
kubectl create sa $ROLE_NAME -n $NEW_NAMESPACE1
kubectl annotate sa $ROLE_NAME -n $NEW_NAMESPACE1 eks.amazonaws.com/role-arn=$S3_ROLE_ARN
  • Deploy the test pod in the non-default namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: eksa-irsa-test
  namespace: $NEW_NAMESPACE1
spec:
  serviceAccountName: $ROLE_NAME
  containers:
  - name: eks-irsa-test
    image: amazon/aws-cli:latest
    command: ["sleep"]
    args: ["3600"]
  restartPolicy: Never
EOF

Verify S3 access

kubectl exec -it eksa-irsa-test -n $NEW_NAMESPACE1 -- /bin/bash
aws s3 ls

Scenario-3: Elevate the permissions from S3 read-only to S3 full access for both service accounts (default and non-default namespaces)

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam attach-role-policy \
--role-name $ROLE_NAME \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

Log in to either or both of the pods (in the default or non-default namespace) and try to create/delete a bucket

kubectl exec -it eksa-irsa-test -- /bin/bash

S3_BUCKET=irsa-test-$((RANDOM))
echo $S3_BUCKET
AWS_DEFAULT_REGION=eu-west-1
aws s3api create-bucket --bucket $S3_BUCKET --create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION --object-ownership BucketOwnerPreferred

# Verify if bucket can be created
# Next let's delete the bucket

aws s3api delete-bucket --bucket $S3_BUCKET

# Verify if bucket can be deleted
  • Delete the resources and unwind

# Delete the pod identity configurations on the cluster

kubectl delete pod eksa-irsa-test
kubectl delete pod eksa-irsa-test -n $NEW_NAMESPACE1
kubectl delete deployment pod-identity-webhook
kubectl delete service pod-identity-webhook
kubectl delete sa s3-ops
kubectl delete sa s3-ops -n $NEW_NAMESPACE1
kubectl delete secret pod-identity-webhook-cert


# Delete the IAM role for IRSA in AWS

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam detach-role-policy \
--role-name $ROLE_NAME \
--policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY aws iam detach-role-policy \
--role-name $ROLE_NAME \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
aws iam delete-role \
--role-name $ROLE_NAME

# Delete the Open ID Connect provider configuration in AWS


AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
aws iam delete-open-id-connect-provider \
--open-id-connect-provider-arn $PROVIDER_ARN

# Delete the S3 bucket created for Open ID Connect Provider

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
aws s3 rm s3://$S3_BUCKET --recursive

AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
aws s3api delete-bucket --bucket $S3_BUCKET

That will be a good closure to this blog, with all relevant resources deleted.

Hope the write-up provides a validated set of insights that are helpful to the wider community.

cheers,

Ambar@thecloudgarage

#iwork4dell
