Well, this took a while! With recent work and travel, it's been a couple of busy months. I'm excited to write about this after attending Google Next 25 last week! I really feel the Oracle partnerships with Google, Azure, and AWS are going to be awesome for folks who still leverage Oracle technology heavily but also want to use the best services available from each cloud.
In part 1, I explained the network setup between OCI and GCP that gives us connectivity between my source and destination databases as well as the ZDM migration server.
Quick recap: my source DB is an Oracle Base Database running 23ai on OCI in a private subnet, and the destination is an Autonomous Database 23ai running on GCP with a private endpoint.
There, I established IPSec VPN connectivity between the clouds and made sure the source and target endpoints resolve via DNS from both sides.
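If you want a quick sanity check from the ZDM host before going further, a couple of lookups will tell you whether DNS is wired up. The source FQDN below is from my lab, and the ADB private endpoint FQDN is a placeholder you'd grab from your ADB console:

[opc@zdm215 ~]$ nslookup source.sub11071602521.mydbvcn.oraclevcn.com
[opc@zdm215 ~]$ nslookup <your-gcp-adb-private-endpoint-fqdn>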
You could also use an interconnect for faster connectivity, but the real-world scenario is probably that the source DB lives in your own datacenter. Still, the same DNS and connectivity requirements apply!
Before we get started: I think GCP networking works great together with OCI! It was a lot of fun to work with in this test.
I have installed the ZDM server, version 21.5, on the OCI side. I won't go into the details of the ZDM installation, but there are prerequisites for setup and connectivity that you need to complete from the documentation here.
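If you want to double-check what's actually installed, zdmcli can report its build details (this is how I confirmed my 21.5 install):

[opc@zdm215 ~]$ $ZDMHOME/bin/zdmcli -build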
The next thing to set up is the ZDM response file. I picked the ZDM logical offline template as a starting point and then started narrowing down the variables I need.
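In case you're wondering where the templates live: ZDM ships them with the install. On my setup they're under $ZDMHOME/rhp/zdm/template/, though the exact file names can vary by version, so treat this as a sketch:

[opc@zdm215 ~]$ ls $ZDMHOME/rhp/zdm/template/
[opc@zdm215 ~]$ cp $ZDMHOME/rhp/zdm/template/zdm_logical_template.rsp ~/dblink_gcp.rsp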
Remember, logical offline is basically an Oracle Data Pump migration. ZDM supports all the Data Pump parameters you're used to, but it takes a bit of time to figure out how you'll use them.
For the source DB, I installed the standard Oracle HR sample schema, which I will migrate over to the GCP ADB.
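If you want to replicate the setup, the HR schema nowadays comes from Oracle's db-sample-schemas repository on GitHub. Roughly, after cloning the repo, you connect to the source PDB as a privileged user and run the installer (script name per the 23ai version of the repo):

SQL> @human_resources/hr_install.sql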
One thing I would recommend is setting up a SQL client on the ZDM server. It's a good way to validate that you can connect to the source and target databases, and that connectivity is (most likely) set up properly. I used the SQLcl client on the ZDM server to test connectivity.
Here's an example of how it looks for the GCP connection. I downloaded the wallet from my GCP ADB, so I only have to remember my service name to connect.
SQLcl: Release 24.4 Production on Tue Apr 15 20:26:18 2025

Copyright (c) 1982, 2025, Oracle. All rights reserved.

SQL> set CLOUDCONFIG /home/opc/Wallet_tfgadb.zip
SQL> conn admin@tfgadb_high
Password? (**********?) ***************
Connected.
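And the same kind of check against the source, using an EZConnect string built from the exact host and service name values that go into the response file below:

SQL> conn system@//source.sub11071602521.mydbvcn.oraclevcn.com:1521/SOURCEDB_pdb1.sub11071602521.mydbvcn.oraclevcn.com
Password? (**********?) ***************
Connected.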
Here's the complete response file I ended up with:
MIGRATION_METHOD=OFFLINE_LOGICAL
DATA_TRANSFER_MEDIUM=DBLINK
TARGETDATABASE_ADMINUSERNAME=admin
SOURCEDATABASE_ADMINUSERNAME=SYSTEM
TARGETDATABASE_OCID=ocid1.autonomousdatabase.oc1.iad.anuwcljruc6vv7yaiawotzdcun5a2lb7sqhifabxnynntmz7cwf3jaaaaaa
TARGETDATABASE_CONNECTIONDETAILS_SERVICENAME=
TARGETDATABASE_DBTYPE=ADBS
SOURCEDATABASE_CONNECTIONDETAILS_HOST=source.sub11071602521.mydbvcn.oraclevcn.com
SOURCEDATABASE_CONNECTIONDETAILS_PORT=1521
SOURCEDATABASE_CONNECTIONDETAILS_SERVICENAME=SOURCEDB_pdb1.sub11071602521.mydbvcn.oraclevcn.com
SOURCEDATABASE_ENVIRONMENT_NAME=ORACLE
SOURCEDATABASE_ENVIRONMENT_DBTYPE=ORACLE
SOURCEDATABASE_ALLOWTEMPTABLE=TRUE
DATAPUMPSETTINGS_JOBMODE=SCHEMA
DATAPUMPSETTINGS_METADATAFIRST=FALSE
DATAPUMPSETTINGS_SCHEMABATCH-1=HR
DATAPUMPSETTINGS_EXPORTVERSION=
DATAPUMPSETTINGS_FIXINVALIDOBJECTS=TRUE
DATAPUMPSETTINGS_METADATAREMAPS-1=type:REMAP_TABLESPACE, oldValue:USERS, newValue:DATA
DATAPUMPSETTINGS_METADATAREMAPS-2=type:REMAP_TABLESPACE, oldValue:DATA1, newValue:DATA
DATAPUMPSETTINGS_CREATEAUTHTOKEN=TRUE
DATAPUMPSETTINGS_USEAUTHTOKEN=FALSE
DATAPUMPSETTINGS_METADATAFILTERS-1=
DATAPUMPSETTINGS_MONITORINTERVALMINUTES=2
DATAPUMPSETTINGS_OMITENCRYPTIONCLAUSE=TRUE
DATAPUMPSETTINGS_SECUREFILELOB=TRUE
DATAPUMPSETTINGS_METADATATRANSFORMS-1=
DATAPUMPSETTINGS_SKIPDEFAULTTRANSFORM=FALSE
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_TABLEEXISTSACTION=TRUNCATE
#DATAPUMPSETTINGS_DATAPUMPPARAMETERS_EXCLUDETYPELIST=INDEX,CONSTRAINT,REF_CONSTRAINT
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_NOCLUSTER=FALSE
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_ESTIMATEBYSTATISTICS=FALSE
DATAPUMPSETTINGS_DATAPUMPPARAMETERS_RETAININDEX=FALSE
DATAPUMPSETTINGS_DATABASELINKDETAILS_NAME=ZDM
DATAPUMPSETTINGS_RETAINDUMPS=FALSE
EXCLUDEOBJECTS-1=
RELOADOBJECTS-1=
OCIAUTHENTICATIONDETAILS_REGIONID=us-ashburn-1
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_TENANTID=ocid1.tenancy.oc1..aaaaaaaafrxrcj32smfsmpmk3e2f4ddddd
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_USERID=ocid1.user.oc1..aaaaaaaa2jgirx6rk5ojv553qncoj2nkqaom5aealkz2hisaaaddff
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_FINGERPRINT=c9:2f:8c:fa:16:17:7f:73:66:35:f0:72:ee:69:ff:5e
OCIAUTHENTICATIONDETAILS_USERPRINCIPAL_PRIVATEKEYFILE=/home/opc/.ssh/simo.key
DUMPTRANSFERDETAILS_PARALLELCOUNT=3
DUMPTRANSFERDETAILS_RETRYCOUNT=3
DUMPTRANSFERDETAILS_PUBLICREAD=FALSE
DUMPTRANSFERDETAILS_RSYNCAVAILABLE=FALSE
TABLESPACEDETAILS_USEBIGFILE=FALSE
TABLESPACEDETAILS_EXTENDSIZEMB=512
TABLESPACEDETAILS_REMAPTARGET=DATA
RUNCPATREMOTELY=FALSE
COPYCPATREPORTTOZDMHOST=FALSE
FORCECPATENUSLOCALE=FALSE
PROFILE=NONE
GENFIXUP=NO
RUNFIXUPS=FALSE
Some important notes on the response file above (line numbers refer to the file):

Line 1: The migration method defines whether we use Data Pump, GoldenGate, etc. In this case, logical offline means Data Pump.

Line 5: This is the GCP ADB OCID. You will have to log in to OCI via the Manage in OCI link in the GCP console to find it (or use the CLI).

Line 16: This defines the schemas I'm about to migrate; in this case it's just HR.

Line 17: IMPORTANT, this was a tricky one: if I set this to ANY value (12, latest, compatible), the import failed on unsupported parameters. Leaving it empty works.

Lines 19-20: These just convert the source tablespaces to DATA, which is required for Autonomous.

Line 30: An example (commented out) of how to exclude objects, standard Data Pump functionality.

Line 34: IMPORTANT, you can either pre-create the DB link yourself and test it, OR let ZDM create it. I had to pre-create it myself on the GCP ADB (example below) to get the import working; I suspect some DNS issue kept it from working through ZDM.

Lines 39-42: These are the OCI connection details for your GCP-linked OCI tenancy. You will have to dig up some OCI information to fill these in successfully (see the sketch after this list).
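One way to sanity-check those four values before handing them to ZDM is to drop the same details into an OCI CLI config on the ZDM host and make any read-only call. This is just a sketch with my lab values (OCIDs truncated), not something ZDM itself needs:

[opc@zdm215 ~]$ cat ~/.oci/config
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaa2jgirx6rk5ojv553qncoj2nkqaom5aealkz2hisaaaddff
fingerprint=c9:2f:8c:fa:16:17:7f:73:66:35:f0:72:ee:69:ff:5e
key_file=/home/opc/.ssh/simo.key
tenancy=ocid1.tenancy.oc1..aaaaaaaafrxrcj32smfsmpmk3e2f4ddddd
region=us-ashburn-1
[opc@zdm215 ~]$ oci iam region list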
For the database link, I ran the following statements to pre-create the link on the GCP ADB:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'ZDM_CRED',
    username        => 'SYSTEM',
    password        => 'That_Finnish_Guy_12'
  );
END;
/

BEGIN
  DBMS_CLOUD_ADMIN.CREATE_DATABASE_LINK(
    db_link_name       => 'ZDM',
    hostname           => 'source.sub11071602521.mydbvcn.oraclevcn.com',
    port               => '1521',
    service_name       => 'SOURCEDB_pdb1.sub11071602521.mydbvcn.oraclevcn.com',
    credential_name    => 'ZDM_CRED',
    ssl_server_cert_dn => NULL,
    directory_name     => NULL,
    private_target     => TRUE
  );
END;
/
You'll need to create the credential the link will use first, and then the link itself. NOTE: private_target must be set to TRUE, otherwise the connection won't work and ADB will try to connect over the public internet.
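Once the link is in place, a quick query over it from the ADB side (as ADMIN) proves that routing and DNS work end to end before ZDM ever gets involved:

SQL> select sysdate from dual@ZDM;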
Once the setup was done, I ran the ZDM migration a few times to get everything right, mostly nailing down the correct values for schemas, tablespaces, and EXPORTVERSION.
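One tip that would have saved me a few iterations: zdmcli has an evaluation mode, so you can validate the whole response file and connectivity without moving any data by adding -eval to the same migrate command I run for real below:

[opc@zdm215 ~]$ $ZDMHOME/bin/zdmcli migrate database -rsp dblink_gcp.rsp -sourcenode source -sourcesid SOURCE -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/opc/.ssh/key.ppk -srcarg3 sudo_location:/usr/bin/sudo -eval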
I probably advocate too much for ZDM, but I can't say enough how helpful the tool is. I agree that the initial setup can be time-consuming, but once you have it in place, the automation and standardization of migrations is just great. I would think twice about setting up GoldenGate through ZDM versus doing it manually, though; with GG, the overlap with ZDM might be too much.
[opc@zdm215 ~]$ $ZDMHOME/bin/zdmcli migrate database -rsp dblink_gcp.rsp -sourcenode source -sourcesid SOURCE -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/opc/.ssh/key.ppk -srcarg3 sudo_location:/usr/bin/sudo
zdm215.sub11071602520.mydbvcn.oraclevcn.com: Audit ID: 81
Enter source database administrative user "SYSTEM" password:
Enter target database administrative user "admin" password:
Operation "zdmcli migrate database" scheduled with the job ID "25".

[opc@zdm215 ~]$ $ZDMHOME/bin/zdmcli query job -jobid 25
zdm215.sub11071602520.mydbvcn.oraclevcn.com: Audit ID: 92
Job ID: 25
User: opc
Client: zdm215
Job Type: "MIGRATE"
Scheduled job command: "zdmcli migrate database -rsp dblink_gcp.rsp -sourcenode source -sourcesid SOURCE -srcauth zdmauth -srcarg1 user:opc -srcarg2 identity_file:/home/opc/.ssh/key.ppk -srcarg3 sudo_location:/usr/bin/sudo"
Scheduled job execution start time: 2025-04-15T20:02:35Z. Equivalent local time: 2025-04-15 20:02:35
Current status: SUCCEEDED
Result file path: "/opt/zdm/base/chkbase/scheduled/job-25-2025-04-15-20:02:36.log"
Metrics file path: "/opt/zdm/base/chkbase/scheduled/job-25-2025-04-15-20:02:36.json"
Excluded objects file path: "/opt/zdm/base/chkbase/scheduled/job-25-filtered-objects-2025-04-15T20:02:52.172.json"
Job execution start time: 2025-04-15 20:02:36
Job execution end time: 2025-04-15 20:05:43
Job execution elapsed time: 3 minutes 7 seconds
ZDM_VALIDATE_TGT ...................... COMPLETED
ZDM_VALIDATE_SRC ...................... COMPLETED
ZDM_SETUP_SRC ......................... COMPLETED
ZDM_PRE_MIGRATION_ADVISOR ............. COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_SRC .... COMPLETED
ZDM_VALIDATE_DATAPUMP_SETTINGS_TGT .... COMPLETED
ZDM_PREPARE_DATAPUMP_SRC .............. COMPLETED
ZDM_PREPARE_DATAPUMP_TGT .............. COMPLETED
ZDM_PARALLEL_IMPORT_TGT ............... COMPLETED
ZDM_POST_DATAPUMP_SRC ................. COMPLETED
ZDM_POST_DATAPUMP_TGT ................. COMPLETED
ZDM_REFRESH_MVIEW_TGT ................. COMPLETED
ZDM_POST_ACTIONS ...................... COMPLETED
ZDM_CLEANUP_SRC ....................... COMPLETED

SQL> select username, created from dba_users order by 2 desc;

USERNAME                    CREATED
_________________________   ___________
HR                          25-04-15
...
Above you can see the migration command, checking the status of the ZDM migration job, and finally validating that the HR schema is now present in my GCP ADB.
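If you want a bit more assurance than dba_users, a cheap extra validation is comparing object counts per type and status on both sides. For example, on the target:

SQL> select object_type, status, count(*) from dba_objects where owner = 'HR' group by object_type, status order by 1;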
There are maybe three things I want to highlight here:

First, you will always have to understand the OCI side to set up connectivity, troubleshoot, and so on. There's no way to avoid logging in to the OCI Console and working there! Which isn't necessarily a bad thing.

Second, once you have everything set up, the migration is as easy (or hard) as to any OCI database. It's exactly the same experience, which is great work from Google and Oracle. Yay!

Third, the whole point of running Oracle Database@Google is to be able to leverage Oracle Database features in GCP AND have your data close to GCP services. Now you can link that critical data with any GCP service you want, which is the benefit of the multicloud solution!