Oracle Cloud Infrastructure and provisioning Exadata part 1

One of the biggest reasons we started to use Oracle Cloud Infrastructure was that we could get Exadata provisioned for our use. We already had an on-premises setup, but getting a similar setup in the cloud was important.

I’m going to write a two-part series on provisioning Exadata. In this first part I’ll discuss the actual provisioning under our account and the prerequisites. In part 2 I’ll drill down into the actual Exadata setup and operational details.

What you need

If you have a similar requirement and are looking to deploy Exadata, one of the first things you need to do is raise a service limit request for your tenancy. This actually took some time to sort out with support, but eventually the service limit was raised!

You also need confirmation from Oracle on which Availability Domain the Exadata will be provisioned in. Depending on availability it can be AD1, AD2 or AD3 – and remember these differ between customers, so AD1 for me isn’t necessarily the same as AD1 for you!

Setup compartments and networks

You should also set up a compartment where you will deploy your resources, plus VCN subnets for Exadata. You need two subnets in your VCN: a client subnet for the hosts, listeners and client data, and a second subnet for backup traffic.

It is really important to allow all traffic to flow within these subnets, including ICMP traffic – otherwise the Exadata provisioning will not start! I had to modify our subnet security lists a few times. First I hadn’t allowed all TCP traffic, so we got an error; the second time was when ICMP traffic needed to be allowed. This is clearly stated in the documentation as well, but I had forgotten it.

As we provision networks through Terraform, the modification was quite simple. However, since we use modules for security lists and there is no easy way to pass multiple rules, we decided to allow all traffic within subnets and make this our OCI setup default.

This is an example of the ingress rules in our Terraform module – see the crude hack as the second rule:

ingress_security_rules = [
  {
    protocol  = "${var.ingress_protocol}" // tcp = 6
    source    = "${var.ingress_source}"
    stateless = "${var.ingress_stateless}"

    tcp_options {
      // These values correspond to the destination port range.
      "min" = "${var.ingress_tcp_min}"
      "max" = "${var.ingress_tcp_max}"
    }
  },
  {
    protocol  = "${var.vcn_ingress_protocol}" // tcp = 6
    source    = "${var.vcn_cidr_block}"       // open all ports for the VCN CIDR and do not block subnet traffic - this needs to be handled better in the future
    stateless = "${var.ingress_stateless}"
  },
]
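If you prefer not to open everything, the ICMP requirement alone could be satisfied with a dedicated rule instead. A sketch in the same module style – the protocol number is the standard IANA value, the variable name is an assumption:

```hcl
// A third rule in the same ingress_security_rules list: allow ICMP
// within the VCN, which Exadata provisioning requires (protocol 1 = icmp).
// var.vcn_cidr_block is assumed to exist as in the rules above.
{
  protocol  = "1" // icmp
  source    = "${var.vcn_cidr_block}"
  stateless = "false"
}
```

With our allow-all hack this is redundant, but it documents the requirement explicitly.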

Provisioning through Terraform

Even though provisioning Exadata is a one-time thing for us, I decided right at the start that it would be handled through Terraform as well. Since we made the decision to do everything with Infrastructure as Code, we shouldn’t slip from that even for one-time resources.

Key reasons why I think using Terraform is good:

  • The Exadata setup is stored in our version control system and documented at the same time, since each variable is explained.
  • While some changes would require the Exadata to be provisioned again, changing the name or core count, for example, can be done online. This way we can do it via Terraform and not manually.
  • If in the future we need to provision a new Exadata for whatever reason, we can follow the same process.

When you provision any DB system or Exadata, you need to provide a lot of variables. I won’t go through them all, but you can see them quite well in the OCI provider documentation.

Some variables are important to note. Obviously you have to decide on the shape. The X7 shapes are named with 2.XX, so for example an X7 1/4 rack Exadata is defined as Exadata.Quarter2.92. You also need to provide the initial core count, which you can change dynamically later on.

The database system edition for Exadata is always Enterprise Edition – Extreme Performance, and the disk redundancy is set to HIGH. While I understand why they want it to be HIGH, it would be great to have it as an option so customers could get more usable storage when not running mission-critical workloads.

Also remember to check which licensing model you are using: bring your own license (BYOL) or license included. This is reflected in the overall cost.

Once we had the parameters defined, we used our normal Terraform folder setup – a template folder with three files:

  • main.tf – actual code calling database module to create Exadata
  • variables.tf – custom variables
  • remote.tf – to store the remote state in object storage

remote.tf is a new file we adopted quickly in our setup, as we want to store the Terraform state in OCI Object Storage instead of handling it locally.
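For reference, a minimal sketch of what such a remote.tf can look like. OCI Object Storage exposes an S3-compatible endpoint, so the standard s3 backend can be pointed at it; the bucket name, namespace and region below are placeholder examples, not our actual configuration:

```hcl
// remote.tf - store Terraform state in OCI Object Storage via its
// S3-compatible API. All names below are placeholder examples.
terraform {
  backend "s3" {
    bucket   = "terraform-state"
    key      = "exadata/terraform.tfstate"
    region   = "eu-frankfurt-1"
    endpoint = "https://mynamespace.compat.objectstorage.eu-frankfurt-1.oraclecloud.com"

    // The endpoint is not real AWS, so skip the AWS-specific checks.
    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
  }
}
```

Credentials for the S3-compatible API come from a Customer Secret Key generated in the OCI console.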

The Terraform module that creates the Exadata is almost identical to the OCI provider example. We just removed some variables we don’t require, such as the DB workload and PDB name.
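To give an idea of the shape of it, here is a trimmed-down sketch of the db_system resource such a module wraps, based on the OCI provider example. The variable names and values are illustrative, not our exact configuration:

```hcl
// main.tf - a minimal sketch of the Exadata DB system resource.
// Variable names are examples; see the OCI provider docs for the full list.
resource "oci_database_db_system" "CreateDBSystem" {
  availability_domain = "${var.availability_domain}"
  compartment_id      = "${var.compartment_ocid}"
  shape               = "Exadata.Quarter2.92"    // X7 1/4 rack
  cpu_core_count      = "${var.core_count}"      // initial cores, adjustable later
  database_edition    = "ENTERPRISE_EDITION_EXTREME_PERFORMANCE"
  disk_redundancy     = "HIGH"                   // always HIGH on Exadata
  license_model       = "BRING_YOUR_OWN_LICENSE" // or LICENSE_INCLUDED
  subnet_id           = "${var.client_subnet_id}" // client traffic
  backup_subnet_id    = "${var.backup_subnet_id}" // backup traffic
  hostname            = "${var.hostname}"
  ssh_public_keys     = ["${var.ssh_public_key}"]

  db_home {
    db_version = "${var.db_version}"

    database {
      admin_password = "${var.admin_password}"
      db_name        = "${var.db_name}"
    }
  }
}
```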

Running Terraform

I started the Terraform execution after running our usual process – terraform fmt/get/init/plan/apply – once I had verified everything looked good. I had heard from Oracle that provisioning can take anywhere from two to eight hours.

The time it took was actually in the middle!

module.CreateExadata.oci_database_db_system.CreateDBSystem: 
Creation complete after 3h54m4s

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

In the end, provisioning it doesn’t differ from any other resource in the cloud.

What did we get

Looking at the provisioned system, you get two database nodes, which are VMs on top of the physical Exadata – similar to the Cloud@Customer model. The nodes and SCAN name are provisioned with IP addresses from the subnets you provided, and with hostnames based on your variables.

I can log in to both nodes using the default opc user with the SSH key I provided during installation. This user can then use sudo to become root or the oracle user.

Changing configuration

Because we use Terraform, changes to the setup can be made easily and are tracked in our version control system. The ability to adjust the number of running cores is the one we plan to utilize to save on cloud costs. During daytime we plan to run with a higher number of cores, while at night we will run with as few as two.

Adjusting this is as simple as changing the core_count variable in our variables.tf and then running terraform plan/apply.
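The variable itself is nothing special – scaling down for the night is a one-line change. A sketch (the comment value for daytime is only an example):

```hcl
// variables.tf - number of enabled cores on the Exadata.
// Change this and run terraform plan/apply to resize the running system.
variable "core_count" {
  // daytime: some higher value; night time: as few as two
  default = "2"
}
```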

module.CreateExadata.oci_database_db_system.CreateDBSystem: 
Modifications complete after 4m46s

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Not too bad – it only took five minutes. It’s still unclear how we will schedule this as we move further, and whether changing it has any impact on our operations.

Summary

Provisioning Exadata is as simple as any other DB system in OCI. Obviously you need to plan every resource and read the documentation so you don’t miss any requirements.

While getting the resource from the cloud is nice, I would still encourage you to look into what benefits running it in the cloud will bring. Think of dynamic scaling and Infrastructure as Code, if nothing else!

Overall the provisioning part was easy, and it gave us the ability to move forward quickly.

Stay tuned for part 2 on the actual usage of Exadata later on.

Simo

