OCI block volumes and Terraform

We already know how to spin up instances with and without Terraform, but what about when you need to attach a block volume to the instance you've just created?

You can log in to the console and attach either a paravirtualized or an iSCSI volume to your compute instance. A paravirtualized volume shows up immediately, but an iSCSI volume requires you to run additional commands on the instance before it becomes visible.

Depending on your use case, keep in mind that paravirtualized volumes may have worse performance than iSCSI volumes.

In this post I'll briefly show how to do it via the console, but the main topic is how to create a block volume and attach it to an instance with Terraform. At the end I'll mention some ways you could continue with automation and one problem I noticed when using OCI.

You can also do this via oci-cli, but I will not go through that in this post.

Console

Just to show that you can create and attach block volumes to an instance from the console:

Create block volume
Attach block volume

Terraform

My initial idea for creating the volume and the volume attachment was to do it via two separate modules. However, I ran into an issue which forced me to do it in one module.

If I used a separate module for the block volume attachment and created more than one volume, Terraform didn't know the number of elements in the list passed to the attachment module and threw an error. An easy way to get around this was to combine the two modules. Usually you attach the volume to the instance when you create it anyway.

If you have a requirement to create block volumes without attachments, you could create a separate module for that, so you would end up with two different modules.

Terraform code calling modules

In the actual main.tf that creates the instance and the volume, I create the instance first by calling the create instance module. I've shown that in a previous blog post, so I will skip it and go straight to creating the volume.

This is what it looks like:

module "CreateVolume" {

source="../block_volume"

volume_count="${var.volume_count}"

tenancy_ocid="${var.tenancy_ocid}"

compartment_ocid="${lookup(data.oci_identity_compartments.GetCompartments.compartments[0],"id")}"

volume_availability_domain="${lookup(data.oci_core_subnets.GetPublicSubnet.subnets[0],"availability_domain")}"

volume_display_name= ["${var.volume_display_name}"]

volume_size_in_gbs= ["${var.volume_size_in_gbs}"]

instance_id="${module.CreatePublicInstance.instanceId[0]}"

volume_attachment_type= ["${var.volume_attachment_type}"]

}

Let’s break this down.

At the start I use the static variable volume_count to define how many volumes to create.
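In variables.tf this is just a plain variable; the exact definition below is my sketch, with a default of 2 to match the two volumes created later in this post:

variable "volume_count" {
    default = "2"
}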

I've used data sources to get the compartment OCID and the public subnet's availability domain. Remember that the block volume must always be created in the same availability domain as the instance it will be attached to; one way of remembering this is that the volume needs to be physically close to the instance.
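For reference, the data sources referenced in the module call could look roughly like this. This is a sketch: the compartment name, subnet name and vcn_id variable are assumptions, not code from my actual configuration.

data "oci_identity_compartments" "GetCompartments" {
    compartment_id = "${var.tenancy_ocid}"

    # Assumed filter: pick the compartment by name
    filter {
        name   = "name"
        values = ["MyCompartment"]
    }
}

data "oci_core_subnets" "GetPublicSubnet" {
    compartment_id = "${lookup(data.oci_identity_compartments.GetCompartments.compartments[0], "id")}"
    vcn_id         = "${var.vcn_id}"   # assumed variable
    display_name   = "PublicSubnet"    # assumed subnet name
}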

Next I pass two variables to the module as lists; that's why they are enclosed in []. In variables.tf they are defined like this:

variable "volume_display_name" {
    type    = "list"
    default = ["MyVolume1", "MyVolume2"]
}

variable "volume_size_in_gbs" {
    type    = "list"
    default = ["50", "60"]
}

I want to keep the modules reusable, so you can pass as many values as needed via lists; if there is a requirement to create, for example, two volumes, it can be done by calling the module once. One way to do that without touching variables.tf is shown below.
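For example, you could override the defaults in a terraform.tfvars file; the values here are just an illustration:

volume_count           = "2"
volume_display_name    = ["DataVolume1", "DataVolume2"]
volume_size_in_gbs     = ["100", "200"]
volume_attachment_type = ["paravirtualized", "paravirtualized"]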

Next I pass the instance OCID from the create instance module; since I've created only one instance, I use [0] to pick the correct OCID. If I had two instances, I would probably call this module twice, with [0] and [1] respectively.
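This assumes the create instance module exposes the created instance OCIDs through an output named instanceId, roughly like the sketch below; the resource name CreateInstance is my assumption, not taken from the earlier post.

output "instanceId" {
    value = ["${oci_core_instance.CreateInstance.*.id}"]
}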

And finally I pass the volume attachment type, also defined as a list. For these volumes I use paravirtualized as the type.
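It is defined the same way as the other list variables in variables.tf; the exact definition below is a sketch:

variable "volume_attachment_type" {
    type    = "list"
    default = ["paravirtualized", "paravirtualized"]
}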

Terraform module code

The actual module looks like this:

variable "tenancy_ocid" {}

variable "compartment_ocid" {}

variable "volume_availability_domain" {}

variable "volume_display_name" {type= "list"}

variable "volume_size_in_gbs" {type = "list"}

variable "volume_count" {}

variable "instance_id" {}

variable "volume_attachment_type" {type = "list"}


resource "oci_core_volume" "CreateVolume" {

    count="${var.volume_count}"

    availability_domain = "${var.volume_availability_domain}"

    compartment_id = "${var.compartment_ocid}"

    display_name = “${var.volume_display_name[count.index]}”
    size_in_gbs = “${var.volume_size_in_gbs[count.index]}”
}

resource "oci_core_volume_attachment" "CreateVolumeAttachment" {

   
    count="${var.volume_count}"

    attachment_type = "${var.volume_attachment_type[count.index]}"

    instance_id = "${var.instance_id}"

    volume_id = "${oci_core_volume.CreateVolume.*.id[count.index]}"

}

A few pointers from this. When I use count inside a resource, Terraform creates as many resources as the count variable specifies. That's why some of the variables are passed as lists and indexed with [count.index], so each resource picks the correct value.

You can also notice the reference to the oci_core_volume resource created above and the use of the splat expression there: oci_core_volume.CreateVolume.*.id returns the list of all created volume IDs, and [count.index] picks the one matching each attachment.
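As a side note, the same splat expression could be used to expose all created volume OCIDs from the module, for example with an output like this (not part of the original module):

output "volume_ids" {
    value = ["${oci_core_volume.CreateVolume.*.id}"]
}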

Running Terraform

Now when I run terraform init (note that OCI is now an official provider, so you don't need to install the provider separately anymore, terraform init downloads it) and terraform apply, it creates five resources: one instance, two block volumes and two volume attachments.

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Creating the instance and the two volumes, sized 50 GB and 60 GB, took around one minute, and I can see they are successfully attached to the instance.

Created & attached block volumes via Terraform

Where to go from here?

What I would like to do next is to also automate creating the filesystems on the operating system side and mounting them. While experimenting with this I noticed that you can't guarantee which device name each volume will get. The root volume can be sda, sdb etc., and the created block volumes can also end up in any order.

Even in the example above the device assignments look like this:

Disk /dev/sda: 64.4 GB, 64424509440 bytes, 125829120 sectors  <-- 60GB volume

Disk /dev/sdb: 50.0 GB, 50010783744 bytes, 97677312 sectors   <-- Root volume

Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors  <-- 50GB volume

This means that scripting further provisioning automation is more complicated than expected. With the AWS provider you can define the EBS volume device name, which makes this easier.

This could also be a use case for Ansible, or for checking whether a cloud-init script passed via Terraform could handle it. I haven't tested yet how cloud-init would work in this case.
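If you wanted to experiment with the cloud-init route, the OCI instance resource accepts a user_data entry in its metadata. A minimal sketch could look like this, where cloudinit.sh is a hypothetical script that would partition and mount the attached volumes; I haven't verified how well it copes with the device naming issue.

resource "oci_core_instance" "CreateInstance" {
    # ...other instance arguments omitted...

    metadata {
        ssh_authorized_keys = "${var.ssh_public_key}"
        user_data           = "${base64encode(file("cloudinit.sh"))}"   # hypothetical script
    }
}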

I really like the module approach with Terraform, especially when there are more and more requirements to build different components: we can utilize the same code base for all of them. Definitely use Terraform right from the start with OCI!

 

Simo
