Remote Cloning OCI Database PDB with OCI Resource Manager, Terraform and Ansible

Code referenced in this post can be found on my GitHub page.

There’s a new feature in OCI that enables you to remote clone a PDB from one database to another if certain conditions are met. Right now, the databases need to be in the same Availability Domain, on the same version, and using the same edition. You can clone between bare metal and VM DBCS; it doesn’t seem to be possible with ExaCS.

When I was thinking about what would be a good way to do this, I immediately wanted to use OCI Resource Manager. I see a few advantages to this approach. The Resource Manager host supports both Terraform and Ansible, so you don’t need to install anything on any server to perform the cloning. Also, cloning is controlled by OCI IAM policies: if you grant the proper access and the right to run Resource Manager stacks, you can easily control this through IAM.

Why Ansible?

My first choice was to use purely Terraform; however, when I started to write this out, it occurred to me that you might want to clone over an existing PDB rather than always use a new name. Since the Terraform state for the initial PDB might not be available, I started to look at how to do this with Ansible. I feel Ansible is much more capable of these types of operational activities, and a future version could potentially do the cloning with Ansible as well.

I just couldn’t get the remote cloning playbook to work yet.. I mean it works.. But doesn’t do anything yet!

OCI Ansible Collections are available to view here. To delete the existing PDB, I used the following module:

oracle.oci.oci_database_pluggable_database

Setup

What the code actually does is pretty simple: it deletes the existing PDB if one exists in the target, and clones the specified PDB to the target. Ansible and Terraform work smoothly together, so I can make the remote clone wait until the delete has completed.

To get the necessary existing data, Terraform needs to call multiple data sources. What needs to be specified in the variables are the names of the compartments, the source and destination databases, and a few passwords.
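As a rough sketch, the input variables might look something like this (the names here are illustrative, not necessarily the exact ones in my repository):

```hcl
variable "compartment_name" {
  description = "Name of the compartment holding the DB systems"
  type        = string
}

variable "source_db_name" {
  description = "Database name of the source CDB"
  type        = string
}

variable "target_db_name" {
  description = "Database name of the destination CDB"
  type        = string
}

variable "pdb_name" {
  description = "Name of the PDB to clone (and to replace on the target, if it exists)"
  type        = string
}

variable "pdb_admin_password" {
  description = "Admin password for the cloned PDB"
  type        = string
  sensitive   = true
}
```

Marking the passwords as `sensitive` keeps them out of plan output, which also fits nicely with defining them in the Resource Manager stack variables.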

To get the necessary information from the data sources, we need to go level by level: first get the compartment, then the DB systems, then the DB homes from the DB systems, then the databases from the DB homes, and finally the PDBs from the databases.
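Sketched in HCL, that chain could look roughly like this, assuming the standard OCI provider data sources (exact attribute names and the index/filter logic may differ slightly from my actual code):

```hcl
# Level 1: DB systems in the compartment
data "oci_database_db_systems" "source" {
  compartment_id = var.compartment_id
}

# Level 2: DB homes under the chosen DB system
data "oci_database_db_homes" "source" {
  compartment_id = var.compartment_id
  db_system_id   = data.oci_database_db_systems.source.db_systems[0].id
}

# Level 3: databases (CDBs) under the DB home
data "oci_database_databases" "source" {
  compartment_id = var.compartment_id
  db_home_id     = data.oci_database_db_homes.source.db_homes[0].db_home_id
}

# Level 4: finally, the PDBs inside the CDB
data "oci_database_pluggable_databases" "source" {
  database_id = data.oci_database_databases.source.databases[0].id
}
```

In practice you would filter each level by the names passed in as variables rather than just taking the first element, but the nesting shows why there is no shortcut straight from compartment to PDB.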

There are a few odd tweaks as well. For the null_resource where we delete the existing PDB, I’ve set a trigger to force execution every time. Similarly, the actual remote clone resource has depends_on defined against the null_resource, so we wait until the delete is complete.
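Those two tweaks together could be sketched like this. The `oci_database_pluggable_databases_remote_clone` resource name, its attributes, the playbook filename, and the `local.*` references are assumptions for illustration, not taken verbatim from my code:

```hcl
# The timestamp() trigger changes on every apply, so Terraform
# replaces this resource (and re-runs the playbook) each time.
resource "null_resource" "delete_existing_pdb" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    # Hypothetical playbook name; runs the oracle.oci delete task
    command = "ansible-playbook delete_pdb.yml"
  }
}

# The remote clone waits for the delete to finish via depends_on.
resource "oci_database_pluggable_databases_remote_clone" "clone" {
  depends_on = [null_resource.delete_existing_pdb]

  pluggable_database_id        = local.source_pdb_id  # resolved from the data sources
  cloned_pdb_name              = var.pdb_name
  pdb_admin_password           = var.pdb_admin_password
  target_container_database_id = local.target_cdb_id  # resolved from the data sources
}
```

Without the trigger, the null_resource would only run on the first apply; without depends_on, Terraform could start the clone while the old PDB with the same name still exists.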

Running the Stack in OCI Resource Manager

I first need to create the stack, upload all the files (including the YAML file), and give the stack a name. At this point, you could also adjust the variables if needed; for example, this could be a good stage to define the passwords.

Once the stack is created, we can run Plan and then Apply. There’s also a drawback to using Terraform here: it creates state for the remote clone operation, but once the operation is done, there isn’t really anything behind that state. The new pluggable database on the remote destination isn’t tracked in the state file, which in this case is good: it means that after running this stack, I can run terraform destroy and it will not touch the pluggable database.

For any subsequent executions, you will always need to run destroy first; otherwise Terraform will just tell you there are no changes to apply. Creation of the pluggable database clone takes only a few seconds, but in the background the pluggable database will take some time to copy over.

If the destination has a pluggable database with the same name, you will see Ansible kick in and delete the existing pluggable database; if no such pluggable DB exists, it will just skip that task.

I’ve now run this stack twice. Initially there wasn’t any PDB with a matching name, so it got created. After that I ran terraform destroy to delete the state of the cloning action (remember, this keeps the PDB) and then re-ran the apply, which deleted the first PDB that had been cloned.

Summary

Terraform and Ansible through OCI Resource Manager provide good tools to provision your infrastructure and, as in this case, to do operational activities as well. Right now, OCI Resource Manager requires public endpoints if you try to access instances, but for this type of activity, where you only call the APIs, it’s super easy and useful.

The official Oracle blog post on using Ansible with OCI Resource Manager was excellent for learning how it works; similarly, Oracle’s GitHub examples on using Ansible with OCI provided good information.

If you ever end up using this piece of code for cloning, let me know how it worked for you!

Simo

Comments

  • Regarding your need to run a terraform destroy before any subsequent apply: I have not tested it, but maybe by using the `ignore_changes` meta-argument you would be able to skip this (cumbersome) step?

    e.g: add this block to the resource

    ```hcl
    resource "xyz" "abc" {
      # ...
      lifecycle {
        ignore_changes = all
      }
    }
    ```
