Dell Infrastructure-as-a-code: Using Terraform to create and map PowerStore iSCSI block volumes.

Ambar Hassani
7 min read · Mar 12, 2024

This blog is part of the Dell Infrastructure-as-a-code saga series.

In my previous article, we looked at a Terraform method for creating volumes on Dell’s PowerStore platform. But why limit Terraform to just creating volumes when you can extend the routines to also map them to an Ubuntu machine configured with an iSCSI initiator?

In this blog, we will do exactly that. I have a Dell PowerStore array and an Ubuntu 20.04 machine configured with an iSCSI client. If you want to see the iSCSI client configuration, scroll to the end of the post.

The task is to use Terraform to:

  • Create an iSCSI host definition in PowerStore for the Ubuntu machine
  • Create a block volume in PowerStore
  • Map the volume to the Ubuntu machine via iSCSI

Interestingly, all of this can be done via the GUI. However, Terraforming the whole thing makes sense, as it represents a typical end-to-end workflow for an administrator!

Let’s start!

Initial state of the Ubuntu machine

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 40.4M 1 loop /snap/snapd/20671
loop1 7:1 0 67.2M 1 loop /snap/lxd/21835
loop2 7:2 0 63.9M 1 loop /snap/core20/2105
loop3 7:3 0 40.9M 1 loop
loop4 7:4 0 63.5M 1 loop
loop5 7:5 0 91.9M 1 loop /snap/lxd/24061
loop6 7:6 0 63.9M 1 loop /snap/core20/2182
loop7 7:7 0 39.1M 1 loop /snap/snapd/21184
sda 8:0 0 32G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1.5G 0 part /boot
└─sda3 8:3 0 30.5G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 15.3G 0 lvm /
sr0 11:0 1 1.2G 0 rom

variables.tf

more variables.tf
variable "username" {
  type        = string
  description = "Stores the username of PowerStore host."
  default     = "admin"
}

variable "password" {
  type        = string
  description = "Stores the password of PowerStore host."
  default     = "XXXXXXX"
}

variable "timeout" {
  type        = string
  description = "Stores the timeout of PowerStore host."
  default     = "120"
}

variable "endpoint" {
  type        = string
  description = "Stores the endpoint of PowerStore host. eg: https://10.1.1.1/api/rest"
  default     = "https://10.204.109.70/api/rest"
}
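
As a side note, there is no need to keep the real password hard-coded in variables.tf. A minimal alternative (standard Terraform behaviour, shown here only as a sketch) is to supply it at runtime, since any variable exported as TF_VAR_<name> is picked up automatically:

# Keep the actual credential out of version control; Terraform maps
# TF_VAR_password to var.password at plan/apply time.
export TF_VAR_password='my-real-password'

# Alternatively, put it in a git-ignored terraform.tfvars file:
# password = "my-real-password"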

main.tf

terraform {
  required_providers {
    powerstore = {
      version = "1.1.0"
      source  = "registry.terraform.io/dell/powerstore"
    }
  }
}

provider "powerstore" {
  username = var.username
  password = var.password
  endpoint = var.endpoint
  insecure = true
  timeout  = var.timeout
}

# Register the Ubuntu machine as an iSCSI host using its initiator IQN
resource "powerstore_host" "test" {
  name              = "my-ubuntu-2004-test-vm"
  os_type           = "Linux"
  description       = "Creating host"
  host_connectivity = "Local_Only"
  initiators        = [{ port_name = "iqn.1993-08.org.debian:01:my-ubuntu-2004-test-vm" }]
}

# Short pause so the newly created host is fully registered before the volume mapping
resource "time_sleep" "wait_for_host" {
  create_duration = "20s"
  depends_on      = [powerstore_host.test]
}

# Create the block volume and map it to the host in one step via host_id
resource "powerstore_volume" "volumes" {
  depends_on    = [time_sleep.wait_for_host]
  count         = 1
  name          = "dell-iac-test-${count.index}"
  size          = 8
  capacity_unit = "GB"
  host_id       = powerstore_host.test.id
}

output "hostResult" {
  value = powerstore_host.test.id
}

output "volumeid" {
  value       = [for volume in powerstore_volume.volumes : volume.id]
  description = "Powerstore Volume ID"
}
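
If you also want to correlate the mapped volume with the block device that appears on the host, the provider exposes a wwn attribute on the volume (you can see it in the plan output below). An optional output along these lines, not part of the original template, would surface it:

output "volume_wwn" {
  value       = [for volume in powerstore_volume.volumes : volume.wwn]
  description = "Volume WWNs, handy for matching /dev/disk/by-id entries on the host"
}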

Let’s execute the Terraform templates

terraform init && terraform plan && terraform apply -auto-approve

Initializing the backend...

Initializing provider plugins...
- Finding dell/powerstore versions matching "1.1.0"...
- Finding latest version of hashicorp/time...
- Installing dell/powerstore v1.1.0...
- Installed dell/powerstore v1.1.0 (signed by a HashiCorp partner, key ID 3D55C2542D9DE477)
- Installing hashicorp/time v0.11.1...
- Installed hashicorp/time v0.11.1 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create

Terraform will perform the following actions:

# powerstore_host.test will be created
+ resource "powerstore_host" "test" {
+ description = "Creating host"
+ host_connectivity = "Local_Only"
+ host_group_id = (known after apply)
+ id = (known after apply)
+ initiators = (sensitive value)
+ name = "my-ubuntu-2004-test-vm"
+ os_type = "Linux"
}

# powerstore_volume.volumes[0] will be created
+ resource "powerstore_volume" "volumes" {
+ app_type = (known after apply)
+ app_type_other = (known after apply)
+ appliance_id = (known after apply)
+ capacity_unit = "GB"
+ creation_timestamp = (known after apply)
+ description = (known after apply)
+ host_group_id = (known after apply)
+ host_id = (known after apply)
+ id = (known after apply)
+ is_replication_destination = (known after apply)
+ logical_unit_number = (known after apply)
+ logical_used = (known after apply)
+ name = "dell-iac-test-0"
+ nguid = (known after apply)
+ node_affinity = (known after apply)
+ nsid = (known after apply)
+ performance_policy_id = "default_medium"
+ protection_policy_id = (known after apply)
+ sector_size = 512
+ size = 8
+ state = (known after apply)
+ type = (known after apply)
+ volume_group_id = (known after apply)
+ wwn = (known after apply)
}

# time_sleep.wait_for_host will be created
+ resource "time_sleep" "wait_for_host" {
+ create_duration = "20s"
+ id = (known after apply)
}

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
+ hostResult = (known after apply)
+ volumeid = [
+ (known after apply),
]

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

powerstore_host.test: Creating...
powerstore_host.test: Creation complete after 0s [id=e3274545-17cf-4fae-8a4b-5b2ac2acc7b2]
time_sleep.wait_for_host: Creating...
time_sleep.wait_for_host: Still creating... [10s elapsed]
time_sleep.wait_for_host: Still creating... [20s elapsed]
time_sleep.wait_for_host: Creation complete after 20s [id=2024-03-12T07:51:44Z]
powerstore_volume.volumes[0]: Creating...
powerstore_volume.volumes[0]: Creation complete after 1s [id=b3cf9000-f436-4346-8d33-bae673358457]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

hostResult = "e3274545-17cf-4fae-8a4b-5b2ac2acc7b2"
volumeid = [
"b3cf9000-f436-4346-8d33-bae673358457",
]

As you can see, the ID of the iSCSI host entry and the ID of the corresponding volume are displayed as outputs.
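
If you need these IDs later in a script, the Terraform CLI can print them on demand, for example:

# Print a single string output without quotes
terraform output -raw hostResult

# Print the list of volume IDs as JSON, convenient for piping into jq
terraform output -json volumeid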

Let’s see these entries in the PowerStore Manager GUI. Observe the initiator and the mapped volume.

Now let’s check whether the storage device shows up on our Ubuntu machine. Either rescan an already-present device:

echo 1 > /sys/block/sdb/device/rescan

or rescan all SCSI hosts so that a newly mapped LUN is discovered:

for host in /sys/class/scsi_host/*; do echo "- - -" | sudo tee $host/scan; ls /dev/sd* ; done

and then list the block devices:

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 40.4M 1 loop /snap/snapd/20671
loop1 7:1 0 67.2M 1 loop /snap/lxd/21835
loop2 7:2 0 63.9M 1 loop /snap/core20/2105
loop3 7:3 0 40.9M 1 loop
loop4 7:4 0 63.5M 1 loop
loop5 7:5 0 91.9M 1 loop /snap/lxd/24061
loop6 7:6 0 63.9M 1 loop /snap/core20/2182
loop7 7:7 0 39.1M 1 loop /snap/snapd/21184
sda 8:0 0 32G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1.5G 0 part /boot
└─sda3 8:3 0 30.5G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 15.3G 0 lvm /
sdb 8:16 0 8G 0 disk
sr0 11:0 1 1.2G 0 rom

You can see the new /dev/sdb device with 8G of capacity. From here on, it’s the usual routine of creating a filesystem and consuming the volume.
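
For completeness, a minimal sketch of that routine could look like the following (the device name /dev/sdb and the mount point are assumptions based on the lsblk output above; always verify the device, e.g. via /dev/disk/by-id, before formatting):

# Confirm the new device maps to the PowerStore volume (the WWN reported
# by PowerStore should appear in the by-id symlink)
ls -l /dev/disk/by-id/ | grep sdb

# Create a filesystem, mount it, and verify
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/dell-iac-test-0
sudo mount /dev/sdb /mnt/dell-iac-test-0
df -h /mnt/dell-iac-test-0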

In conclusion, it is remarkably easy to drive these provisioning workflows on Dell PowerStore via Terraform.

cheers,

Ambar@thecloudgarage

#iwork4dell

Bash configuration for the iSCSI client on my Ubuntu machine

sudo su
apt update -y
# Install the open-iscsi initiator tools if they are not already present
apt install -y open-iscsi
systemctl enable --now iscsid
# Set the initiator IQN that the PowerStore host definition expects
echo "InitiatorName=iqn.1993-08.org.debian:01:my-ubuntu-2004-test-vm" > /etc/iscsi/initiatorname.iscsi
systemctl restart iscsid open-iscsi

iscsiadm -m discovery -t sendtargets -p 10.204.110.17
<output-of-above>
10.204.110.19:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-a-628075ee
10.204.110.18:3260,1 iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-b-365d6394

sudo iscsiadm -m node -T iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-a-628075ee --login
Logging in to [iface: default, target: iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-a-628075ee, portal: 10.204.110.19,3260] (multiple)
Login to [iface: default, target: iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-a-628075ee, portal: 10.204.110.19,3260] successful.

sudo iscsiadm -m session -P 1
Target: iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-a-628075ee (non-flash)
Current Portal: 10.204.110.19:3260,1
Persistent Portal: 10.204.110.19:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1993-08.org.debian:01:my-ubuntu-2004-test-vm
Iface IPaddress: 10.204.111.33
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 9
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
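
Two optional follow-ups, not shown in the session above: logging in to the second discovered portal gives a redundant path (typically paired with dm-multipath on the host), and marking the node for automatic startup makes the login persist across reboots. A sketch using standard iscsiadm syntax:

# Log in to the second target/portal discovered above for path redundancy
sudo iscsiadm -m node -T iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-b-365d6394 --login

# Make the login persistent across reboots
sudo iscsiadm -m node -T iqn.2015-10.com.dell:dellemc-powerstore-ckm01221206067-a-628075ee \
  -p 10.204.110.19:3260 --op update -n node.startup -v automatic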
